CN112333418A - Method and device for determining intelligent unlocking mode, intelligent doorbell and storage medium - Google Patents


Info

Publication number
CN112333418A
CN112333418A
Authority
CN
China
Prior art keywords
characteristic value; intelligent; value; identified; person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010371535.5A
Other languages
Chinese (zh)
Other versions
CN112333418B (en)
Inventor
王云华
王银华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN202010371535.5A priority Critical patent/CN112333418B/en
Publication of CN112333418A publication Critical patent/CN112333418A/en
Application granted granted Critical
Publication of CN112333418B publication Critical patent/CN112333418B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N7/186 Video door telephones (closed-circuit television systems in which the video signal is not broadcast)
    • G06V40/168 Feature extraction; face representation
    • G06V40/171 Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172 Classification, e.g. identification
    • G07C9/37 Individual registration on entry or exit, not involving the use of a pass, in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G08B3/10 Audible signalling systems; audible personal calling systems using electric or electromagnetic transmission


Abstract

The invention discloses a method and device for determining an intelligent unlocking mode, an intelligent doorbell and a storage medium, belonging to the technical field of artificial intelligence. The method comprises: extracting target person characteristic values from person images pre-stored on one or more smart devices; constructing an intelligent unlocking mode according to the target person characteristic values; and, when a real-time person image is shot, extracting the characteristic values to be identified of the persons in the real-time person image and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified. The intelligent doorbell thus builds the intelligent unlocking mode from the target person characteristic values and, after shooting a real-time person image, extracts the characteristic values to be identified to determine the corresponding unlocking mode. Because no image data needs to be transmitted while target persons are still identified intelligently, data transmission resources are saved and security is improved.

Description

Method and device for determining intelligent unlocking mode, intelligent doorbell and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a device for determining an intelligent unlocking mode, an intelligent doorbell and a storage medium.
Background
In recent years, smart homes have developed rapidly and brought great convenience to people's lives. However, a current intelligent doorbell needs to transmit image data to a server, which parses the image data, consuming a large amount of data transmission resources. In addition, existing intelligent doorbells can hardly distinguish target persons, such as family members, from strangers directly, so potential safety hazards exist.
Disclosure of Invention
The invention provides a method and a device for determining an intelligent unlocking mode, an intelligent doorbell and a storage medium, and aims to save data transmission resources and improve safety.
In order to achieve the above object, the present invention provides a method for determining an intelligent unlocking mode, including:
extracting target person characteristic values from person images pre-stored on one or more smart devices;
constructing an intelligent unlocking mode according to the target person characteristic values;
when a real-time person image is shot, extracting the characteristic values to be identified of the persons in the real-time person image, and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified.
Preferably, the step of extracting, when a real-time person image is shot, the characteristic values to be identified of the persons in the real-time person image and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified includes:
when a real-time person image is shot, extracting the characteristic values to be identified of one or more persons in the real-time person image;
judging whether the characteristic values to be identified of the one or more persons fall into a characteristic value range;
if a characteristic value to be identified of the one or more persons falls into the characteristic value range, entering a direct unlocking mode;
and if no characteristic value to be identified falls into the characteristic value range, entering an encryption unlocking mode.
Preferably, the characteristic value to be identified comprises a first characteristic value and a second characteristic value;
the step of judging whether the characteristic values to be identified of one or more persons fall into a characteristic value range comprises the following steps:
comparing the first feature value of the one or more people to a first range of feature values;
if none of the first feature values of the one or more people falls within the first feature value range, determining that the first feature value of the one or more people does not fall within the feature value range;
if the first characteristic value of the one or more characters falls into the first characteristic value range, comparing the corresponding second characteristic value with a second characteristic value range;
if the corresponding second characteristic value falls into the second characteristic value range, judging that the characteristic values to be identified of one or more persons fall into the characteristic value range;
and if the corresponding second characteristic value does not fall into the second characteristic value range, judging that the characteristic value to be identified of one or more persons does not fall into the characteristic value range.
Preferably, the step of extracting target person characteristic values from the person images pre-stored on one or more smart devices comprises:
extracting one or more face contours from the person images pre-stored on the one or more smart devices, and determining one or more target persons according to the face contours;
and extracting the target person characteristic value of each target person from the pre-stored person images, wherein the target person characteristic value comprises a face brightness characteristic value and a camera rotation angle characteristic value.
Preferably, the step of extracting a target person feature value of each target person from the pre-stored person image, where the target person feature value includes a face brightness feature value and a camera rotation angle feature value, includes:
extracting the face brightness value of each target person from the pre-stored person image;
respectively acquiring a face brightness total value of the face brightness value of each target person according to the face brightness value;
determining the face brightness characteristic value corresponding to the target person according to the total face brightness value and the number of the face brightness values; and/or
Acquiring a camera rotation angle value of the one or more intelligent devices when shooting each target person in the pre-stored person image;
respectively acquiring a camera rotation angle total value of each target person according to the camera rotation angle values;
and determining the characteristic value of the camera rotation angle corresponding to the target person according to the total value of the camera rotation angles and the number of the camera rotation angle values.
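The averaging described in the steps above (a total value divided by the number of samples) can be sketched as follows; the function names are illustrative, not part of the patent.

```python
def face_brightness_feature(brightness_values):
    """Average a target person's per-image face brightness values:
    the total face brightness divided by the number of values."""
    return sum(brightness_values) / len(brightness_values)

def rotation_angle_feature(angle_values):
    """The same averaging for the camera rotation angles recorded
    with each pre-stored person image."""
    return sum(angle_values) / len(angle_values)
```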
Preferably, after the step of extracting a feature value to be identified of a person in the real-time person image when the real-time person image is captured, and determining a corresponding intelligent unlocking mode according to the feature value to be identified, the method further includes:
sending a corresponding execution request to a door lock based on the intelligent unlocking mode so that the door lock can execute an operation corresponding to the execution request;
and displaying a preset page corresponding to the intelligent unlocking mode on a display screen.
Preferably, after the step of extracting a feature value to be identified of a person in the real-time personal image and determining a corresponding intelligent unlocking mode according to the feature value to be identified when the real-time personal image is shot, the method further includes:
counting the accuracy rate for identifying and/or determining each target person, and comparing the accuracy rate with an accuracy rate threshold value;
if the accuracy rate is smaller than the accuracy rate threshold, re-acquiring a secondary target person characteristic value of the corresponding target person, and re-constructing the intelligent unlocking mode based on the secondary target person characteristic value.
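A minimal sketch of the accuracy check above, assuming a simple per-person tally of correct identifications; the class name and default threshold are hypothetical, not specified by the patent.

```python
class AccuracyMonitor:
    """Track identification accuracy per target person and flag when the
    intelligent unlocking mode should be rebuilt from fresh feature values."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.stats = {}  # person id -> (correct count, total count)

    def record(self, person, correct):
        c, t = self.stats.get(person, (0, 0))
        self.stats[person] = (c + int(correct), t + 1)

    def needs_rebuild(self, person):
        c, t = self.stats.get(person, (0, 0))
        # Rebuild when the measured accuracy drops below the threshold.
        return t > 0 and c / t < self.threshold
```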
In addition, in order to achieve the above object, the present invention also provides an intelligent unlocking device, including:
an extraction module, for extracting target person characteristic values from person images pre-stored on one or more smart devices;
a construction module, for constructing an intelligent unlocking mode according to the target person characteristic values;
an identification module, for extracting, when a real-time person image is shot, the characteristic values to be identified of the persons in the real-time person image, and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified.
In addition, in order to achieve the above object, the present invention further provides an intelligent doorbell, which includes a processor, a memory, and an intelligent unlocking mode determining program stored in the memory, where the intelligent unlocking mode determining program implements the steps of the intelligent unlocking mode determining method described above when being executed by the processor.
In addition, to achieve the above object, the present invention further provides a computer storage medium, in which a program for determining an intelligent unlocking mode is stored, and when the program for determining an intelligent unlocking mode is executed by a processor, the steps of the method for determining an intelligent unlocking mode are implemented.
Compared with the prior art, the invention discloses a method and device for determining an intelligent unlocking mode, an intelligent doorbell and a storage medium. The method comprises: extracting target person characteristic values from person images pre-stored on one or more smart devices; constructing an intelligent unlocking mode according to the target person characteristic values; and, when a real-time person image is shot, extracting the characteristic values to be identified of the persons in the real-time person image and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified. The intelligent doorbell thus builds the intelligent unlocking mode from the target person characteristic values and, after shooting a real-time person image, extracts the characteristic values to be identified to determine the corresponding unlocking mode. Because no image data needs to be transmitted while target persons are still identified intelligently, data transmission resources are saved and security is improved.
Drawings
FIG. 1 is a schematic diagram of a hardware structure of an intelligent doorbell according to various embodiments of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a method for determining an intelligent unlock mode according to the present invention;
FIG. 3 is a schematic diagram illustrating a first embodiment of a method for determining an intelligent unlock mode according to the present invention;
FIG. 4 is a flowchart illustrating a second embodiment of the method for determining an intelligent unlock mode according to the present invention;
fig. 5 is a functional module schematic diagram of the first embodiment of the intelligent unlocking device of the invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The intelligent doorbell mainly related to the embodiment of the invention is doorbell equipment installed at a doorway, and generally comprises a camera, a microphone, a button and the like.
Referring to fig. 1, fig. 1 is a schematic diagram of the hardware structure of an intelligent doorbell according to embodiments of the present invention. In this embodiment, the smart doorbell may include a processor 1001 (e.g., a Central Processing Unit, CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005. The communication bus 1002 realizes connection and communication among these components; the input port 1003 is used for data input; the output port 1004 is used for data output. The memory 1005 may be a high-speed RAM or a non-volatile memory such as a magnetic disk memory, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration depicted in fig. 1 does not limit the present invention; it may include more or fewer components than shown, combine some components, or arrange the components differently.
With continued reference to fig. 1, the memory 1005 of fig. 1, which is a readable storage medium, may include an operating system, a network communication module, an application program module, and an intelligent unlocking mode determining program. In fig. 1, the network communication module is mainly used for connecting to a server and performing data communication with the server; and the processor 1001 is configured to call the determination procedure of the intelligent unlocking mode stored in the memory 1005 and perform the related operation.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of the method for determining an intelligent unlocking mode according to the present invention.
The method for determining the intelligent unlocking mode is applied to an intelligent doorbell and comprises the following steps:
Step S101, extracting target person characteristic values from person images pre-stored on one or more smart devices;
generally, an intelligent doorbell is an entrance guard management system which is formed by integrating and constructing facilities related to the doorbell by using a residence as a platform and utilizing a comprehensive wiring technology, a network communication technology, a safety precaution technology, an automatic control technology and an audio and video technology, and can improve the safety, convenience, comfortableness and artistry of the residence. Generally, the intelligent doorbell integrates a camera, a human body sensor, a microphone, a sound device and other devices.
In this embodiment, after the intelligent doorbell is installed, the user connects it to the network through its wireless module according to the operation manual and completes privacy settings such as user name and password as required; the intelligent doorbell can then be put into use.
In this embodiment, the smart doorbell is connected in advance, through the Internet of Things, with one or more smart devices in the corresponding house. The smart devices comprise one or more of a smart television, a smartphone, a computer, an iPad and a smart air conditioner. The development of the Internet of Things enables things to be interconnected, so multiple smart devices, including the intelligent doorbell, can be connected through a network and/or Bluetooth.
After the intelligent doorbell is connected with the one or more smart devices, it can read the data stored on them according to the connection protocol. In this embodiment, the smart doorbell reads the person images pre-stored on the one or more smart devices. If a smart device stores person video data, one or more frames within a certain time are taken as the pre-stored person images; if it stores person photo data, the photos are used directly as the pre-stored person images. It will be appreciated that a smart device typically includes a camera through which images within its viewing angle, usually including one or more persons, can be captured while the device operates. If one or more persons appear in the pre-stored person images, persons whose number of occurrences is greater than or equal to a preset number may be marked as target persons, such as family members.
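The occurrence-count rule above (persons appearing at least a preset number of times across the pre-stored images are marked as target persons) could be sketched like this; the identifiers and default threshold are illustrative assumptions.

```python
from collections import Counter

def mark_target_persons(persons_per_image, min_occurrences=3):
    """Count how often each recognized person appears across the
    pre-stored person images; those at or above the preset number of
    occurrences are marked as target persons (e.g. family members)."""
    counts = Counter(p for image in persons_per_image for p in image)
    return {p for p, n in counts.items() if n >= min_occurrences}
```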
After the target persons are determined, the face brightness characteristic value among each target person's characteristic values is further extracted, using image feature extraction techniques, from the pre-stored person images that include that target person, thereby obtaining the face brightness characteristic values of all target persons.
In this embodiment, the characteristic value of the rotation angle of the camera in the characteristic value of the target person may be obtained from parameters in the pre-stored person image, and generally, the parameters of the pre-stored person image include parameters such as shooting time, exposure, pixels, and shooting location. The parameter may further include a camera rotation angle, and the camera rotation angle includes a horizontal rotation angle and a vertical rotation angle. Most shooting devices (such as the smart device in this embodiment) are configured with a gyroscope sensor, a hall sensor, an angle converter, and the like, and can acquire a rotation angle of a camera when an image is shot, record the rotation angle through a processor of the shooting smart device, and store the rotation angle in correspondence with the shot image (such as a pre-stored person image in this embodiment) so as to be queried and extracted when necessary. In addition, the target person feature value may also be a face key point feature value and/or a pupil feature value.
This technical scheme, based on the Internet of Things, determines the target persons from the pre-stored person images of the associated smart devices and obtains the target person characteristic values. Compared with traditional methods, it avoids the laborious work of separately acquiring, identifying and confirming images of the target persons.
It can be understood that the intelligent doorbell can also directly acquire a person image through a camera, and extract a target person characteristic value based on the acquired person image.
Furthermore, the intelligent doorbell can also directly receive one or more target person images specified by a user through voice or touch operation, and then extract corresponding target person characteristic values from the target person images.
Furthermore, after the intelligent doorbell works for a certain time, a target person can be determined from the person image acquired by the intelligent doorbell camera through machine learning, and a characteristic value of the target person is extracted.
Furthermore, after the target person and the target person feature value corresponding to the target person are determined, one or more target persons and one or more target person feature values corresponding to the target person may be intelligently added, deleted, and replaced.
Step S102, an intelligent unlocking mode is constructed according to the characteristic value of the target person;
after the target person feature value is obtained, an intelligent unlocking mode can be constructed based on the target person feature value. In this embodiment, the intelligent unlocking mode is mainly used for distinguishing a target person from a stranger according to the person characteristic value and obtaining a corresponding unlocking mode.
The intelligent unlocking mode includes a direct unlocking mode and an encryption unlocking mode, and step S102, constructing an intelligent unlocking mode according to the target person characteristic value, comprises the following steps:
setting a characteristic value range for the direct unlocking mode according to the target person characteristic value; after a characteristic value to be identified is obtained, judging whether it falls into the characteristic value range; if the characteristic value to be identified falls into the characteristic value range, entering the direct unlocking mode; and if it does not, entering the encryption unlocking mode.
In this embodiment, the characteristic value range is set according to specific needs. Generally, person images obtained under different light intensities, distances, angles and shooting modes differ, so person identification needs a certain fault tolerance. For the same target person, clothes of different colors change the measured face brightness; ambient light also affects it, so images shot in rain, sunshine, daytime or at night differ in brightness. A characteristic value range therefore needs to be set to increase identification accuracy under reasonable fault tolerance. The range may be an interval centered on the target person characteristic value, such as ±10%, ±15% or ±25%. It is understood that if there are multiple target persons, there is a corresponding number of characteristic value ranges.
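The fault-tolerant interval centered on a target person's characteristic value (e.g. ±10%) can be expressed as follows; this is a sketch under assumed names, not the patent's exact formula.

```python
def feature_value_range(target_value, tolerance=0.10):
    """Interval centered on the target person characteristic value,
    e.g. +/-10%, +/-15% or +/-25% depending on the tolerance chosen."""
    return (target_value * (1 - tolerance), target_value * (1 + tolerance))

def falls_in_range(value, value_range):
    """Judge whether a characteristic value to be identified falls
    into the characteristic value range."""
    low, high = value_range
    return low <= value <= high
```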
In this embodiment, after the intelligent doorbell operates, the person image can be shot, and after the feature extraction is performed on the person in the person image, the feature value to be identified is obtained. And comparing the characteristic value to be identified with the characteristic value range, judging whether the characteristic value to be identified falls into the characteristic value range, and if so, entering a direct unlocking mode.
It can be understood that if the feature value to be recognized does not fall within the feature value range, it indicates that the person corresponding to the feature value to be recognized is not the target person and is likely to be a stranger, and therefore if the feature value to be recognized does not fall within the feature value range, the encryption unlocking mode is entered.
Furthermore, pets such as cats, dogs and the like are raised in many families, and in order to facilitate the pets to enter and exit from the house, pet characteristic values can be extracted from prestored animal images of one or more intelligent devices, and corresponding pet unlocking modes are constructed according to the pet characteristic values. The construction of the pet unlocking mode is basically the same as that of the intelligent unlocking mode, and is not repeated here.
Step S103, when a real-time person image is shot, extracting the characteristic values to be identified of the persons in the real-time person image, and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified.
When the intelligent doorbell operates, it monitors persons appearing within the camera's shooting range based on the intelligent unlocking mode and determines the unlocking mode from the monitoring result. Generally, if a target person is detected, the door is unlocked directly; if a stranger is detected, the door is not unlocked directly, and the stranger's related information may be sent to the bound smart device so that the user can return a corresponding unlocking instruction.
Specifically, step S103, extracting, when a real-time person image is shot, the characteristic values to be identified of the persons in the real-time person image and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified, comprises the following steps:
step S1030: when a real-time person image is shot, extracting characteristic values to be identified of one or more persons in the real-time person image;
the intelligent doorbell can monitor personnel conditions at a residential door in work, and a camera of the intelligent doorbell can continuously shoot videos or photos. When the image shot by the intelligent doorbell in real time comprises people, performing feature extraction on the shot real-time people image, and extracting the characteristic value to be identified of the people in the real-time people image, wherein the characteristic value to be identified is consistent with the characteristic quantity and the characteristic content of the target people characteristic value corresponding to the intelligent unlocking mode. For example, if the feature of the target person feature value is face brightness, the feature value to be recognized is also face brightness; if the feature quantity of the feature value of the target person is 2, the feature quantity of the feature value to be identified is also 2.
Step S1031: judging whether the characteristic values to be identified of one or more persons fall into a characteristic value range or not;
in this embodiment, the feature value to be recognized is compared with a preset feature value range, and whether the feature value to be recognized of one or more persons falls into the feature value range is determined according to a comparison result. The characteristic value to be identified comprises a first characteristic value and a second characteristic value. The first characteristic value and the second characteristic value are set according to specific requirements. In this implementation, the first feature value may be a face brightness feature value, and the second feature value may be a camera rotation angle feature value; the first feature value may be a face key point feature value, and the second feature value may be a lip feature value. Each target person has a corresponding set of first and second feature values.
Specifically, the step S1031: the step of judging whether the characteristic values to be identified of one or more persons fall into a characteristic value range comprises the following steps:
step S1031 a: comparing the first feature value of the one or more people to a first range of feature values;
The first feature values of the one or more persons are respectively compared with the first feature value range, and whether they fall into the feature value range is judged according to the comparison result.
Step S1031 b: if none of the first feature values of the one or more people falls within the first feature value range, determining that the feature values to be identified of the one or more people do not fall within the feature value range;
step S1031 c: if the first characteristic value of the one or more characters falls into the first characteristic value range, comparing the corresponding second characteristic value with a second characteristic value range;
in this embodiment, if there are multiple persons in the real-time person image, the first characteristic value of each person is extracted to obtain multiple first characteristic values. If at least one of the first characteristic values falls within the first characteristic value range, it indicates that one or more target persons may exist among these persons, and the corresponding persons are marked as candidate target persons. To further determine whether a target person exists among the candidate target persons, the second characteristic value corresponding to each candidate target person is further compared with the second characteristic value range. For example, if the first characteristic value of person A falls into the first characteristic value range, person A is marked as a candidate target person, and the second characteristic value of candidate target person A is further compared with the corresponding second characteristic value range.
In this embodiment, the second characteristic value may be extracted together with the first characteristic value, or extracted only after the first characteristic value has been verified to fall into the first characteristic value range.
Step S1031 d: if the corresponding second characteristic value falls into the second characteristic value range, judging that the characteristic values to be identified of one or more persons fall into the characteristic value range;
and if the second characteristic value corresponding to the candidate target character falls into the corresponding second characteristic value range, judging that the characteristic value to be identified of one or more characters falls into the characteristic value range. And marking the candidate target person as a target person.
Step S1031 e: and if the corresponding second characteristic value does not fall into the second characteristic value range, judging that the characteristic value to be identified of one or more persons does not fall into the characteristic value range.
And if the corresponding second characteristic value does not fall into the second characteristic value range, marking the candidate target character as a non-target character.
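The two-stage check of steps S1031a-e can be summarized in a short sketch. This is an illustrative reading of the steps, not the patent's implementation; the range bounds and the function names are assumptions chosen for clarity.

```python
# Minimal sketch of steps S1031a-e: a person counts as a target person
# only if the first characteristic value (e.g. face brightness) AND the
# corresponding second characteristic value (e.g. camera rotation angle)
# both fall within their preset ranges.
def falls_in_range(value, value_range):
    low, high = value_range
    return low <= value <= high

def classify_person(first_value, second_value, first_range, second_range):
    """Return 'target' or 'non-target' per the two-stage comparison."""
    # Steps S1031a/b: compare the first characteristic value.
    if not falls_in_range(first_value, first_range):
        return "non-target"
    # Steps S1031c-e: the person is a candidate target person;
    # confirm with the second characteristic value.
    if falls_in_range(second_value, second_range):
        return "target"
    return "non-target"
```

For example, with an assumed brightness range of (0.7, 0.9) and an assumed rotation-angle range of (25.0, 40.0), `classify_person(0.82, 31.0, (0.7, 0.9), (25.0, 40.0))` returns `"target"`.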
Further, after the candidate target person is marked as a non-target person, whether the person to be identified is a reservation person is further judged. Specifically, reservation information of the current time period is acquired from other intelligent devices associated through the Internet of Things, and reservation personnel information is acquired according to the reservation information. For example, if a takeaway order is obtained from an associated mobile phone in the current time period, the distributor of the takeaway order is marked as a reservation person. If the person to be identified is the reservation person, the intelligent unlocking mode is determined to be the direct unlocking mode.
Step S1032: if the characteristic value to be identified of one or more persons falls into the characteristic value range, entering a direct unlocking mode;
if the characteristic value to be identified of one or more persons falls into the characteristic value range, the one or more persons include one or more target persons. Because the target persons are pre-stored persons confirmed to be safe, such as family members, the direct unlocking mode can be entered.
Step S1033: and if the characteristic value to be identified does not fall into the range of the characteristic value, entering an encryption unlocking mode.
If the characteristic value to be identified does not fall into the characteristic value range, the one or more persons do not include a target person. In that case, the identity of the person in the real-time person image is difficult to determine and security cannot be ensured, so the encryption unlocking mode is entered.
In other embodiments, the number of features of the feature value to be identified may also be one or more. The corresponding intelligent unlocking mode can be specifically set according to the relation between the characteristic quantity of the characteristic value to be identified and the characteristic content of the characteristic value.
Further, after the step of extracting the characteristic values to be identified of the persons in the real-time person image when the real-time person image is shot, and determining the corresponding intelligent unlocking mode according to the characteristic values to be identified, the method further comprises the following steps:
step S1041: sending a corresponding execution request to a door lock based on the intelligent unlocking mode so that the door lock can execute an operation corresponding to the execution request;
the door lock can be a component of the intelligent doorbell, and can also be an intelligent device connected with the intelligent doorbell. And after the intelligent unlocking mode is determined, sending a corresponding execution request to the corresponding door lock so that the door lock can execute the operation corresponding to the execution request. And if the intelligent unlocking mode is the direct unlocking mode, sending an unlocking execution request, and directly unlocking the door lock according to the unlocking execution request by the door lock. And if the intelligent unlocking mode is the encryption unlocking mode, sending an encryption execution request, and locking the door lock reversely according to the encryption execution request.
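The mapping from the determined intelligent unlocking mode to the execution request sent to the door lock can be sketched as follows. The request payloads and mode names here are illustrative assumptions; the patent does not specify a message format.

```python
# Sketch of step S1041: translate the determined intelligent unlocking
# mode into an execution request for the door lock. In the direct
# unlocking mode the lock unlocks; in the encryption unlocking mode
# the lock performs reverse locking.
def build_execution_request(mode):
    if mode == "direct_unlock":
        return {"action": "unlock"}        # door lock unlocks directly
    if mode == "encrypted_unlock":
        return {"action": "reverse_lock"}  # door lock locks reversely
    raise ValueError(f"unknown intelligent unlocking mode: {mode}")
```

The request would then be transmitted to the lock, which may be a component of the doorbell or a separately connected intelligent device.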
Further, the door lock can play a voice response according to the execution request. For example, when the unlocking execution request is received, a "welcome home" voice message can be played after unlocking.
Step S1042: and displaying a preset page corresponding to the intelligent unlocking mode on a display screen.
Generally, the intelligent doorbell further comprises a corresponding display device, and the display device may be a pre-bound mobile phone, an iPad and other devices, or an independent display screen. And displaying a preset page corresponding to the intelligent unlocking mode on a display screen. For example, if the mode is the direct unlocking mode, the preset page can be a photo of a target person and a welcome language; if the mode is the encryption unlocking mode, the preset page can comprise warning information.
Further, after the step of extracting a feature value to be identified of a person in the real-time person image when the real-time person image is shot, and determining a corresponding intelligent unlocking mode according to the feature value to be identified, the method further includes:
counting the accuracy rate for identifying and/or determining each target person, and comparing the accuracy rate with an accuracy rate threshold value;
if the accuracy is smaller than the accuracy threshold, re-acquiring the secondary target character characteristic value of the corresponding target character, and constructing the intelligent unlocking mode based on the secondary target character characteristic value.
It can be understood that, if the target persons include a child, the child grows quickly, and the corresponding camera rotation angle characteristic value changes as the child grows taller. If a target person becomes ill, loses weight, undergoes cosmetic treatment, etc., the face brightness characteristic value may also change. Therefore, the target person characteristic values in the intelligent unlocking mode need to be updated regularly or irregularly.
In this embodiment, the accuracy with which the intelligent unlocking mode identifies and/or determines each target person within a preset time period is counted, where identifying refers to identifying the target characteristic value of the target person in the real-time person image, and determining refers to determining the direct unlocking mode or the encryption unlocking mode based on the identified target characteristic value. The preset time period may be set as required, for example, to 30 days, 60 days, or 90 days. Generally, if the intelligent doorbell fails to accurately identify the characteristic value of the target person and/or determine the intelligent unlocking mode, the target person may unlock the door lock by a preset door lock verification manner such as a password, a fingerprint, or a key. If the target person unlocks the door lock by passing the door lock verification, the intelligent doorbell receives the unlocking information sent by the door lock and marks the identification and/or determination as a failure, so that the accuracy can be obtained based on the number of failures and the total number of attempts for each target person within the preset time period. Alternatively, a feedback result of the user on the identification and/or determination of the intelligent doorbell is received through a preset feedback channel, and the accuracy is calculated based on the feedback result.

After the accuracy of each target person is obtained, the accuracy is compared with an accuracy threshold; if the accuracy is smaller than the accuracy threshold, the secondary target person characteristic value of the corresponding target person is re-acquired, and the intelligent unlocking mode is constructed based on the secondary target person characteristic value. The accuracy threshold can be set as required, for example, to 90% or 85%.
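The accuracy bookkeeping described here reduces to a simple computation. The sketch below is a minimal illustration under the assumption that failures are counted from unlocking information sent by the door lock; the default threshold of 90% is one of the example values given above.

```python
# Accuracy over a preset period (e.g. 30, 60, or 90 days), computed
# from the total number of identification/determination attempts and
# the number marked as failures.
def recognition_accuracy(total_attempts, failures):
    if total_attempts == 0:
        return 1.0  # no attempts recorded: nothing to penalize
    return 1.0 - failures / total_attempts

def needs_feature_refresh(total_attempts, failures, threshold=0.9):
    """True if the secondary target person feature values should be
    re-acquired and the intelligent unlocking mode rebuilt."""
    return recognition_accuracy(total_attempts, failures) < threshold
```

For instance, 3 failures out of 20 attempts gives 85% accuracy, which falls below a 90% threshold and would trigger re-acquisition.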
The secondary target person characteristic value comprises a face brightness characteristic value and a camera rotation angle characteristic value, and the corresponding intelligent unlocking mode comprises a direct unlocking mode and an encryption unlocking mode. The specific operation of constructing the intelligent unlocking mode based on the secondary target person characteristic value is substantially the same as the operation in step S102 in this embodiment, and details are not repeated here. Further, if the intelligent unlocking mode is the encryption unlocking mode, whether the person to be identified carries dangerous goods such as guns, controlled knives, or gasoline is further checked through a configured sensor. If dangerous goods are detected, early warning information is sent to other intelligent devices associated through the Internet of Things, or an alarm program is started.
Further, referring to fig. 3, fig. 3 is a schematic view of a scenario of the method for determining an intelligent unlocking mode according to the first embodiment of the present invention. As shown in fig. 3, a woman inside the house browses her mobile phone leisurely on a sofa while a man wearing a helmet waits at the door; at that moment, the intelligent doorbell shoots an image of the man through the camera and extracts the man's characteristics to be identified. If the man is judged to be a target person according to the characteristics to be identified, the direct unlocking mode of the intelligent unlocking mode is entered, and the woman inside does not need to get up to open the door. If the man is judged to be a stranger according to the characteristics to be identified, the encryption unlocking mode of the intelligent unlocking mode is entered, ensuring the personal safety of the woman and the safety of property in the house.
According to the scheme, the target character characteristic value is extracted from the pre-stored character images of one or more intelligent devices; constructing an intelligent unlocking mode according to the characteristic value of the target person; when a real-time figure image is shot, extracting a characteristic value to be identified of a figure in the real-time figure image, and determining a corresponding intelligent unlocking mode according to the characteristic value to be identified. Therefore, the intelligent doorbell constructs an intelligent unlocking mode according to the characteristic value of the target person, and extracts the characteristic value to be identified after the real-time person image is shot so as to determine the corresponding intelligent unlocking mode, so that image data do not need to be transmitted, the target person is intelligently identified, data transmission resources are saved, and the safety is improved.
A second embodiment of the present invention is proposed based on the first embodiment shown in fig. 2 described above. Referring to fig. 4, fig. 4 is a flowchart illustrating a second embodiment of the method for determining an intelligent unlocking mode according to the present invention.
The step of extracting the characteristic value of the target person from the pre-stored person images of one or more intelligent devices comprises the following steps:
step S201: extracting one or more face outlines in prestored character images of one or more intelligent devices, and determining one or more target characters according to the face outlines;
in this embodiment, one house is taken as a unit, and one or more intelligent devices in the house are obtained based on the Internet of Things; the one or more intelligent devices are generally used by family members. After the person images pre-stored in the one or more intelligent devices are obtained, one or more face contours in the pre-stored person images are extracted, and one or more target persons are determined according to the face contours. Generally, each person has a different face contour, and if a plurality of different face contours are obtained, there is a corresponding number of target persons. For example, 5 face contours extracted from a large number of pre-stored person images indicate that there may be 5 family members in the family.
Step S202: and extracting target character characteristic values of the target characters from the pre-stored character images, wherein the target character characteristic values comprise face brightness characteristic values and camera rotation angle characteristic values.
In this embodiment, the face brightness characteristic value and the camera rotation angle characteristic value are used as the target person characteristic value. Due to factors such as genetics and lifestyle, the face brightness of members of the same family may be similar, but the specific face brightness characteristic values still differ enough to distinguish individual family members. The camera rotation angle can be used to measure a person's height: generally, the taller the person, the larger the value of black pixels in the horizontal direction of the corresponding person image; conversely, the shorter the person, the larger the value of black pixels in the vertical direction. Corresponding to different heights, the camera rotates by different angles when shooting different persons, and the rotation angles include a vertical rotation angle and a horizontal rotation angle.
The step of extracting the target character characteristic value of each target character from the pre-stored character image, wherein the target character characteristic value comprises a face brightness characteristic value and a camera rotation angle characteristic value, and the step comprises the following steps:
step S202a 1: extracting the face brightness value of each target person from the pre-stored person image;
and extracting the face brightness value of each target person from the pre-stored person image by using a face brightness extraction technology. It is understood that each target person in each pre-stored person image has a corresponding face brightness value. In the present embodiment, the pre-stored character images are stored in the memory 11 in advance, and the face brightness value of each target character is stored in the memory 22.
It can be understood that, in order to reduce the workload of face brightness extraction, gamma conversion may be performed on the gray-scale values of the pre-stored person images to obtain corresponding gamma values, and brightness extraction may be performed on the regions corresponding to the pixel points whose gamma values are greater than 1 to further obtain the face brightness values of one or more target persons in each pre-stored person image.
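The gamma pre-filter described above can be sketched as follows. The patent does not fix the gamma exponent or the normalization scale, so the values used here (`gamma=0.5`, normalization by 128) are assumptions chosen purely for illustration; only pixels whose gamma value exceeds 1 are passed on to brightness extraction.

```python
# Hedged sketch of the gamma pre-filter: transform grayscale values,
# keep only pixels whose gamma value is greater than 1, and average
# the kept pixels to estimate face brightness.
def gamma_filter(gray_values, gamma=0.5, scale=1.0 / 128.0):
    """Return the subset of grayscale values whose gamma value > 1."""
    kept = []
    for g in gray_values:
        gamma_value = (g * scale) ** gamma  # exceeds 1 for bright pixels
        if gamma_value > 1.0:
            kept.append(g)
    return kept

def mean_brightness(gray_values):
    """Face brightness value over the gamma-filtered region."""
    kept = gamma_filter(gray_values)
    return sum(kept) / len(kept) if kept else 0.0
```

With these assumed parameters, a dark pixel (e.g. grayscale 100) is excluded from the brightness computation while bright pixels (200, 250) are retained.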
Step S202a 2: respectively acquiring a face brightness total value of the face brightness value of each target person according to the face brightness value;
the face brightness value of each target person stored in the memory 22 is counted to obtain a total face brightness value of each target person. Specifically, the corresponding face brightness values are grouped according to the target person to obtain a plurality of groups of face brightness values. The number of sets of face brightness values corresponds to the number of target persons. The face luminance values of each group are then summed to obtain one or more total face luminance values, which are stored in memory 33.
Step S202a 3: determining the face brightness characteristic value corresponding to the target person according to the total face brightness value and the number of the face brightness values; and/or
The average value of the face brightness values of the target person is calculated based on the total face brightness value of each target person stored in the memory 33 and the number of face brightness values thereof, and the average value is determined as the face brightness feature value of the corresponding target person. The face brightness characteristic value of each target person is obtained by the method. The face brightness feature value is stored in the memory 44. It should be noted that the memory 11, the memory 22, the memory 33, and the memory 44 may be a plurality of independent memories of the smart doorbell, or may be a plurality of partitions of the memory of the smart doorbell.
Step S202b 1: acquiring a camera rotation angle value of the one or more intelligent devices when shooting each target person in the pre-stored person image;
the smart device may be a smart television. When the intelligent device shoots people with different heights, the rotation angle of the camera can be different. In the present embodiment, a pre-stored character image for extracting a characteristic value of a camera rotation angle is stored in the memory 55 in advance. And acquiring the camera rotation angle value of the one or more intelligent devices when shooting each target person in the pre-stored person image according to the parameters of the pre-stored person image. The rotation angle value is stored in the memory 66.
Step S202b 2: respectively acquiring a camera rotation angle total value of each target person according to the camera rotation angle values;
and grouping the rotation angle values stored in the memory 66 according to different target characters to obtain a plurality of groups of camera rotation angle values. And according to the grouped camera rotation angle values, summing the camera rotation angles of the target characters respectively to obtain the total camera rotation angle value of each target character.
Step S202b 3: and determining the characteristic value of the camera rotation angle corresponding to the target person according to the total value of the camera rotation angles and the number of the camera rotation angle values.
And dividing the total value of the rotation angles of the cameras by the number of the rotation angle values of the corresponding cameras to obtain an average value of the rotation angles of the cameras, and marking the average value of the rotation angles of the cameras as the characteristic value of the rotation angles of the cameras. The camera rotation angle characteristic value is stored in the memory 77.
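Steps S202b1-b3 amount to grouping the rotation angle values by target person and averaging each group. A minimal sketch, with the sample format `(person_id, rotation_angle)` as an assumption:

```python
from collections import defaultdict

# Sketch of steps S202b1-b3: group camera rotation angle values by
# target person, sum each group, and divide by the count to obtain
# each person's camera rotation angle characteristic value.
def rotation_angle_features(samples):
    """samples: iterable of (person_id, rotation_angle) pairs."""
    totals = defaultdict(float)  # total rotation angle per person
    counts = defaultdict(int)    # number of angle values per person
    for person, angle in samples:
        totals[person] += angle
        counts[person] += 1
    return {p: totals[p] / counts[p] for p in totals}
```

The same group-sum-divide pattern applies to the face brightness characteristic values of steps S202a1-a3.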
When a camera of the intelligent doorbell shoots a real-time figure image, extracting a brightness characteristic value of a face to be identified of a figure in the real-time figure image and/or a rotation angle characteristic value of the camera to be identified.
The face brightness characteristic value to be recognized is compared with the face brightness characteristic value stored in the memory 44; if the face brightness characteristic value to be recognized is within the face brightness characteristic threshold, it is preliminarily judged that the person in the real-time person image may be a family member, and the person is marked as a candidate target person. The face brightness characteristic threshold is set based on the face brightness characteristic value and its fault-tolerance range.
Further, a first message may be sent to a corresponding door lock, so that the door lock enters a mode to be unlocked according to the first message, and the mode to be unlocked of the door lock may be a mode that can be unlocked only by manual rotation.
On the contrary, if the face brightness characteristic value to be recognized is not within the face brightness characteristic threshold, the person in the real-time person image is judged not to be a family member, and the person is marked as a non-candidate target person. And enters an encryption unlock mode.
Further, after the person is marked as a candidate target person, the feature value of the to-be-recognized camera rotation angle of the candidate target person is compared with the feature value of the camera rotation angle stored in the memory 77, and if the feature value of the to-be-recognized camera rotation angle is within the threshold value of the camera rotation angle feature, it is determined that the person in the real-time person image is the target person. And entering a direct unlocking mode. And otherwise, if the characteristic value of the rotation angle of the camera to be identified is not within the characteristic threshold value of the rotation angle of the camera, judging that the person in the real-time person image is not the target person, and entering an encryption unlocking mode. The threshold value of the rotation angle of the camera is set according to the rotation angle of the camera and the fault-tolerant range of the camera.
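The runtime decision just described — a brightness gate followed by a rotation-angle gate, each widened by a fault-tolerance range — can be sketched as below. The tolerance values are assumptions for illustration; the patent only says each threshold combines the stored characteristic value with a fault-tolerance range.

```python
# Sketch of the recognition-time decision: each threshold is the stored
# characteristic value plus/minus a fault-tolerance margin.
def within_tolerance(measured, stored, tolerance):
    return abs(measured - stored) <= tolerance

def decide_mode(face_brightness, rotation_angle,
                stored_brightness, stored_angle,
                brightness_tol=10.0, angle_tol=5.0):
    # First gate: face brightness; failing it marks a non-candidate
    # and enters the encryption unlocking mode.
    if not within_tolerance(face_brightness, stored_brightness, brightness_tol):
        return "encrypted_unlock"
    # Second gate: camera rotation angle confirms the candidate
    # as a target person.
    if within_tolerance(rotation_angle, stored_angle, angle_tol):
        return "direct_unlock"
    return "encrypted_unlock"
```

For instance, with stored values (100, 30) and the assumed tolerances, a measurement of (105, 32) passes both gates and enters the direct unlocking mode.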
According to the scheme, one or more face outlines in pre-stored character images of one or more intelligent devices are extracted, and one or more target characters are determined according to the face outlines; and extracting target character characteristic values of the target characters from the pre-stored character images, wherein the target character characteristic values comprise face brightness characteristic values and camera rotation angle characteristic values. Therefore, the target person is determined, and then the face brightness characteristic value and the camera rotation angle characteristic value of the target person are extracted, so that the person to be recognized can be subjected to dual recognition according to the face brightness characteristic value to be recognized and the camera rotation angle characteristic value to be recognized, and the safety of the access control system is improved.
In addition, the embodiment of the invention also provides an intelligent unlocking device. Specifically, referring to fig. 5, fig. 5 is a functional module schematic diagram of a first embodiment of the intelligent unlocking device of the present invention, where the intelligent unlocking device includes:
the extraction module 10: the intelligent device is used for extracting target character characteristic values from prestored character images of one or more intelligent devices;
the building block 20: the intelligent unlocking mode is constructed according to the characteristic value of the target person;
the identification module 30: the intelligent unlocking method comprises the steps of shooting a real-time figure image, extracting characteristic values to be identified of figures in the real-time figure image, and determining a corresponding intelligent unlocking mode according to the characteristic values to be identified.
further, the identification module includes:
the real-time character image recognition device comprises an extraction unit, a recognition unit and a recognition unit, wherein the extraction unit is used for extracting characteristic values to be recognized of one or more characters in the real-time character image when the real-time character image is shot;
the judging unit is used for judging whether the characteristic values to be identified of one or more persons fall into a characteristic value range or not;
the first entering unit is used for entering a direct unlocking mode if the characteristic values to be identified of one or more persons fall into the characteristic value range;
and the second entering unit is used for entering an encryption unlocking mode if the characteristic value to be identified does not fall into the range of the characteristic value.
Further, the determination unit includes:
a comparison subunit, configured to compare the first feature value of the one or more people with a first feature value range;
a first determining subunit, configured to determine that the feature values to be identified of the one or more people do not fall within the feature value range if none of the first feature values of the one or more people falls within the first feature value range;
the comparison subunit is configured to compare a corresponding second feature value with a second feature value range if the first feature value of the one or more people falls into the first feature value range;
a second determining subunit, configured to determine that the feature values to be identified of one or more people fall within the feature value range if the corresponding second feature value falls within the second feature value range;
and the third judging subunit is configured to judge that the feature value to be identified of one or more people does not fall within the feature value range if the corresponding second feature value does not fall within the second feature value range.
The extraction module comprises:
the intelligent device comprises a first extraction unit, a second extraction unit and a third extraction unit, wherein the first extraction unit is used for extracting one or more face outlines in prestored character images of one or more intelligent devices and determining one or more target characters according to the face outlines;
and the second extraction unit is used for extracting target character characteristic values of all the target characters from the pre-stored character images, wherein the target character characteristic values comprise face brightness characteristic values and camera rotation angle characteristic values.
Further, the second extraction unit includes:
a first extraction subunit, configured to extract a face brightness value of each of the target persons from the pre-stored person image;
the first acquiring subunit is configured to acquire a total face brightness value of the face brightness values of each of the target persons according to the face brightness values;
the first determining subunit is used for determining the face brightness characteristic value corresponding to the target person according to the total face brightness value and the number of the face brightness values; and/or
The second acquiring subunit is used for acquiring the camera rotation angle value of the one or more intelligent devices when shooting each target person in the pre-stored person image;
the third acquisition subunit is used for respectively acquiring the total camera rotation angle value of each target person according to the camera rotation angle values;
and the second determining subunit is used for determining the characteristic value of the camera rotation angle corresponding to the target person according to the total value of the camera rotation angles and the number of the camera rotation angle values.
Further, the identification module further comprises:
the sending unit is used for sending a corresponding execution request to the door lock based on the intelligent unlocking mode so that the door lock can execute the operation corresponding to the execution request;
and the display unit is used for displaying the preset page corresponding to the intelligent unlocking mode on a display screen.
Further, the identification module further comprises:
the statistical unit is used for counting the accuracy rate for identifying and/or determining each target person and comparing the accuracy rate with an accuracy rate threshold value;
and the determining unit is used for re-acquiring the secondary target person characteristic value of the corresponding target person if the accuracy is smaller than the accuracy threshold, and constructing the intelligent unlocking mode based on the secondary target person characteristic value.
In addition, an embodiment of the present invention further provides a computer storage medium, where a determination program of an intelligent unlocking mode is stored on the computer storage medium, and when the determination program of the intelligent unlocking mode is executed by a processor, the steps of the method for determining the intelligent unlocking mode are implemented, which are not described herein again.
Compared with the prior art, the method and the device for determining the intelligent unlocking mode, the intelligent doorbell and the storage medium provided by the invention comprise the steps of extracting a target character characteristic value from prestored character images of one or more intelligent devices; constructing an intelligent unlocking mode according to the characteristic value of the target person; when a real-time figure image is shot, extracting a characteristic value to be identified of a figure in the real-time figure image, and determining a corresponding intelligent unlocking mode according to the characteristic value to be identified. Therefore, the intelligent doorbell constructs an intelligent unlocking mode according to the characteristic value of the target person, and extracts the characteristic value to be identified after the real-time person image is shot so as to determine the corresponding intelligent unlocking mode, so that image data do not need to be transmitted, the target person is intelligently identified, data transmission resources are saved, and the safety is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for causing a terminal device to execute the method according to the embodiments of the present invention.
The above description is only for the preferred embodiment of the present invention and is not intended to limit the scope of the present invention, and all equivalent structures or flow transformations made by the present specification and drawings, or applied directly or indirectly to other related arts, are included in the scope of the present invention.

Claims (10)

1. A method for determining an intelligent unlocking mode is characterized by comprising the following steps:
extracting a target person characteristic value from person images pre-stored on one or more intelligent devices;
constructing an intelligent unlocking mode according to the target person characteristic value;
when a real-time person image is captured, extracting a characteristic value to be identified of a person in the real-time person image, and determining a corresponding intelligent unlocking mode according to the characteristic value to be identified.
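The three steps recited in claim 1 can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: the scalar feature representation, the tolerance used to derive the characteristic-value range, and all function names are hypothetical.

```python
def build_unlock_mode(target_feature_values, tolerance=0.1):
    """Construct the 'intelligent unlocking mode' as a characteristic-value
    range derived from the pre-stored target-person characteristic values.
    The min/max-plus-tolerance rule is an illustrative assumption."""
    low = min(target_feature_values) - tolerance
    high = max(target_feature_values) + tolerance
    return (low, high)


def determine_unlock_mode(feature_value_to_identify, feature_range):
    """Map a live characteristic value to a mode: direct unlock if it
    falls within the range, encrypted unlock otherwise (cf. claim 2)."""
    low, high = feature_range
    if low <= feature_value_to_identify <= high:
        return "direct_unlock"
    return "encrypted_unlock"
```

Built once from the pre-stored images, the range can then be applied to every live capture without transmitting any image data off the device.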
2. The method of claim 1, wherein the step of extracting the feature value to be identified of the person in the live person image when the live person image is captured, and determining the corresponding intelligent unlocking mode according to the feature value to be identified comprises:
when a real-time person image is shot, extracting characteristic values to be identified of one or more persons in the real-time person image;
judging whether the characteristic values to be identified of one or more persons fall into a characteristic value range or not;
if the characteristic value to be identified of one or more persons falls into the characteristic value range, entering a direct unlocking mode;
and if the characteristic value to be identified does not fall into the range of the characteristic value, entering an encryption unlocking mode.
3. The method according to claim 2, wherein the feature value to be identified comprises a first feature value and a second feature value;
the step of judging whether the characteristic values to be identified of one or more persons fall into a characteristic value range comprises the following steps:
comparing the first feature value of the one or more people to a first range of feature values;
if none of the first feature values of the one or more persons falls within the first feature value range, determining that the feature values to be identified of the one or more persons do not fall within the feature value range;
if the first characteristic value of the one or more characters falls into the first characteristic value range, comparing the corresponding second characteristic value with a second characteristic value range;
if the corresponding second characteristic value falls into the second characteristic value range, judging that the characteristic values to be identified of one or more persons fall into the characteristic value range;
and if the corresponding second characteristic value does not fall into the second characteristic value range, judging that the characteristic value to be identified of one or more persons does not fall into the characteristic value range.
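The two-stage check of claim 3 can be sketched as follows, assuming each person is represented by a (first, second) feature-value pair, e.g. face brightness and camera rotation angle; the pair layout, the "any person passing both stages suffices" reading, and the function name are assumptions not fixed by the claim:

```python
def falls_in_feature_range(people_features, first_range, second_range):
    """Return True if any person's first feature value falls within the
    first range AND that person's corresponding second feature value
    falls within the second range (cf. claim 3)."""
    for first, second in people_features:
        if first_range[0] <= first <= first_range[1]:
            # First stage passed; compare the corresponding second value.
            if second_range[0] <= second <= second_range[1]:
                return True
    return False
```

The second range is only consulted for persons whose first value already matched, mirroring the order of comparison in the claim.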
4. The method of claim 1, wherein the step of extracting the target person characteristic value from the person images pre-stored on the one or more intelligent devices comprises:
extracting one or more face contours from the person images pre-stored on one or more intelligent devices, and determining one or more target persons according to the face contours;
and extracting a target person characteristic value of each target person from the pre-stored person images, wherein the target person characteristic value comprises a face brightness characteristic value and a camera rotation angle characteristic value.
5. The method of claim 4, wherein the step of extracting a target person characteristic value of each target person from the pre-stored person images, the target person characteristic value comprising a face brightness characteristic value and a camera rotation angle characteristic value, comprises:
extracting the face brightness values of each target person from the pre-stored person images;
respectively acquiring a total face brightness value for each target person according to the face brightness values;
determining the face brightness characteristic value corresponding to the target person according to the total face brightness value and the number of face brightness values; and/or
acquiring the camera rotation angle values of the one or more intelligent devices when each target person in the pre-stored person images was photographed;
respectively acquiring a total camera rotation angle value for each target person according to the camera rotation angle values;
and determining the camera rotation angle characteristic value corresponding to the target person according to the total camera rotation angle value and the number of camera rotation angle values.
6. The method of claim 1, wherein the step of extracting the feature value to be identified of the person in the live image when the live image is captured and determining the corresponding intelligent unlocking mode according to the feature value to be identified further comprises:
sending a corresponding execution request to a door lock based on the intelligent unlocking mode so that the door lock can execute an operation corresponding to the execution request;
and displaying a preset page corresponding to the intelligent unlocking mode on a display screen.
7. The method of claim 1, wherein the step of extracting the feature value to be identified of the person in the live person image when the live person image is captured, and determining the corresponding intelligent unlocking mode according to the feature value to be identified further comprises:
counting the accuracy rate for identifying and/or determining each target person, and comparing the accuracy rate with an accuracy rate threshold value;
if the accuracy is smaller than the accuracy threshold, re-acquiring the secondary target character characteristic value of the corresponding target character, and constructing the intelligent unlocking mode based on the secondary target character characteristic value.
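The accuracy check of claim 7 can be sketched as follows; the threshold value, the re-acquisition hook and the rebuild hook are all assumptions introduced for illustration:

```python
def check_and_rebuild(correct, total, threshold, reacquire, rebuild):
    """If the running recognition accuracy for a target person drops below
    the threshold, re-acquire that person's (secondary) characteristic
    value and rebuild the unlocking mode from it (cf. claim 7).
    Returns the rebuilt mode, or None if accuracy is acceptable."""
    accuracy = correct / total if total else 0.0
    if accuracy < threshold:
        new_feature = reacquire()  # secondary target person characteristic value
        return rebuild(new_feature)
    return None
```

In practice `reacquire` would re-extract the feature from fresh pre-stored images and `rebuild` would reconstruct the characteristic-value range; both are left as callables here.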
8. An intelligent unlocking device, characterized in that the intelligent unlocking device comprises:
an extraction module, configured to extract target person characteristic values from person images pre-stored on one or more intelligent devices;
a construction module, configured to construct an intelligent unlocking mode according to the target person characteristic value;
an identification module, configured to, when a real-time person image is captured, extract the characteristic value to be identified of the person in the real-time person image and determine the corresponding intelligent unlocking mode according to the characteristic value to be identified.
9. An intelligent doorbell, characterized in that the intelligent doorbell comprises a processor, a memory and a program for determining an intelligent unlocking mode stored in the memory, which program for determining an intelligent unlocking mode, when executed by the processor, implements the steps of the method for determining an intelligent unlocking mode according to any one of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium has stored thereon a program for determining an intelligent unlocking pattern, which when executed by a processor implements the steps of the method for determining an intelligent unlocking pattern according to any one of claims 1 to 7.
CN202010371535.5A 2020-04-30 2020-04-30 Method and device for determining intelligent unlocking mode, intelligent doorbell and storage medium Active CN112333418B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010371535.5A CN112333418B (en) 2020-04-30 2020-04-30 Method and device for determining intelligent unlocking mode, intelligent doorbell and storage medium


Publications (2)

Publication Number Publication Date
CN112333418A true CN112333418A (en) 2021-02-05
CN112333418B CN112333418B (en) 2023-05-23

Family

ID=74303601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010371535.5A Active CN112333418B (en) 2020-04-30 2020-04-30 Method and device for determining intelligent unlocking mode, intelligent doorbell and storage medium

Country Status (1)

Country Link
CN (1) CN112333418B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001175911A (en) * 1999-12-17 2001-06-29 Glory Ltd Method and device for discriminating true/false coin from picture image
JP2009081527A (en) * 2007-09-25 2009-04-16 Noritsu Koki Co Ltd Face photographing apparatus
WO2011121688A1 (en) * 2010-03-30 2011-10-06 Panasonic Corporation Face recognition device and face recognition method
JP2013066190A (en) * 2012-10-30 2013-04-11 Tatsumi Denshi Kogyo Kk Automatic photograph preparation device, image processing device, image processing method, and image processing program
CN104637189A (en) * 2013-11-07 2015-05-20 Tianjin Xinmao Electronic Technology Engineering Co Ltd ATM help seeking terminal
CN106878681A (en) * 2017-02-28 2017-06-20 Yancheng Institute of Technology Doorbell face identification method, device and doorbell system
CN107767325A (en) * 2017-09-12 2018-03-06 Shenzhen Langxing Network Technology Co Ltd Video processing method and device
CN109243011A (en) * 2018-07-24 2019-01-18 Hu Jianjia Intelligent door lock biological-information unlocking method and intelligent door lock
CN109472907A (en) * 2018-12-27 2019-03-15 Shenzhen Duodu Technology Co Ltd Method and apparatus for controlling visitor passage in an access control machine, and access control machine
CN110047173A (en) * 2019-02-25 2019-07-23 Shenzhen Saiyi Technology Development Co Ltd Control method and control system of intelligent door lock, and computer readable storage medium
CN110298949A (en) * 2019-07-08 2019-10-01 Gree Electric Appliances Inc of Zhuhai Door lock control method and device, storage medium and door lock
CN110647865A (en) * 2019-09-30 2020-01-03 Tencent Technology (Shenzhen) Co Ltd Face gesture recognition method, device, equipment and storage medium
CN110769148A (en) * 2018-07-27 2020-02-07 Beijing Fenghe Technology Co Ltd Camera automatic control method and device based on face recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JAYAVARDHANA GUBBI et al.: "Internet of Things (IoT): A vision, architectural elements, and future directions" *
HU Guoheng: "Research on an embedded home intelligent access control system" *

Also Published As

Publication number Publication date
CN112333418B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN110491004B (en) Resident community personnel safety management system and method
US9911294B2 (en) Warning system and method using spatio-temporal situation data
CN206515931U (en) A kind of face identification system
US20220366697A1 (en) Image processing method and apparatus, electronic device and storage medium
CN106204815A (en) A kind of gate control system based on human face detection and recognition
CN109299683A (en) A kind of security protection assessment system based on recognition of face and behavior big data
CN112183265A (en) Electric power construction video monitoring and alarming method and system based on image recognition
US20130216107A1 (en) Method of surveillance by face recognition
CN106204948A (en) Locker management method and locker managing device
CN206162736U (en) Access control system based on face recognition
CN109495727B (en) Intelligent monitoring method, device and system and readable storage medium
CN104933791A (en) Intelligent security control method and equipment
CN110956768A (en) Automatic anti-theft device of intelligence house
WO2022121498A1 (en) Identity recognition method, model training method, apparatuses, and device and storage medium
CN109359712A (en) Electric operating information dynamic collection monitoring device and its application method
CN108376237A (en) A kind of house visiting management system and management method based on 3D identifications
CN111191507A (en) Safety early warning analysis method and system for smart community
CN111985407A (en) Safety early warning method, device, equipment and storage medium
CN115273369A (en) Intelligent household security monitoring device and monitoring method thereof
CN114898443A (en) Face data acquisition method and device
CN107131607A (en) Monitoring method, device and system based on air conditioner and air conditioner
CN207817817U (en) A kind of Identification of Images gate inhibition equipment Internet-based
CN117152871A (en) Control method, system, electronic equipment and medium for combination of lamplight and access control
CN107958525A (en) A kind of Identification of Images gate inhibition's equipment based on internet
CN112333418B (en) Method and device for determining intelligent unlocking mode, intelligent doorbell and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant