CN114253614A - Control method and control system - Google Patents

Control method and control system

Info

Publication number
CN114253614A
Authority
CN
China
Prior art keywords
human
image
frame
video stream
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111420380.0A
Other languages
Chinese (zh)
Inventor
张旦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qigan Electronic Information Technology Co ltd
Original Assignee
Shanghai Qigan Electronic Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qigan Electronic Information Technology Co ltd filed Critical Shanghai Qigan Electronic Information Technology Co ltd
Priority to CN202111420380.0A priority Critical patent/CN114253614A/en
Publication of CN114253614A publication Critical patent/CN114253614A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/4401 Bootstrapping
    • G06F 9/4418 Suspend and resume; Hibernate and awake
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a control method which comprises: acquiring video stream data; performing human shape detection on the video stream data to obtain a human-shaped frame; performing human eye detection on the image in the human-shaped frame to obtain a human eye region; and performing iris recognition on the image in the human eye region to judge whether to control the electronic device to wake up. By working on a video stream, the electronic device is controlled to wake up in a timely manner, which improves its start-up speed, and combining human eye detection with iris recognition improves the security of the electronic device at start-up. The invention also provides a control system for implementing the control method.

Description

Control method and control system
Technical Field
The present invention relates to the field of control systems, and in particular, to a control method and a control system.
Background
Existing electronic equipment needs to be woken up manually after an operator arrives, so the operator cannot use the equipment immediately upon arrival, which makes it inconvenient to use.
Therefore, there is a need to provide a novel control method and control system to solve the above problems in the prior art.
Disclosure of Invention
The invention aims to provide a control method and a control system that automatically control the wake-up of an electronic device, thereby improving the start-up speed of the electronic device and its security at start-up.
In order to achieve the above object, the control method of the present invention is used for controlling the wake-up of an electronic device, and includes the following steps:
s11: acquiring video stream data, and then carrying out human shape detection on the video stream data to obtain a human shape frame;
s12: carrying out human eye detection on the image in the human-shaped frame to obtain a human eye area;
s13: and carrying out iris recognition on the image in the human eye area so as to judge whether to control the electronic equipment to wake up.
The control method has the following beneficial effects: video stream data is acquired, human shape detection is performed on the video stream data to obtain a human-shaped frame, human eye detection is performed on the image in the human-shaped frame to obtain a human eye region, and iris recognition is performed on the image in the human eye region to judge whether to control the electronic device to wake up. Working on a video stream allows the electronic device to be controlled to wake up in a timely manner, which improves its start-up speed, and combining human eye detection with iris recognition improves the security of the electronic device at start-up.
Optionally, the obtaining video stream data and then performing human shape detection on the video stream data to obtain a human shape frame includes:
sequentially acquiring each frame of image in the video stream data and performing human shape detection on each frame to determine whether a human shape is present in it; when a human shape is present in an image, acquiring a human-shaped frame from that image, adding 1 to a frame count to obtain a new frame count, and replacing the frame count with the new frame count, so as to obtain at least one human-shaped frame.
Optionally, before step S12 is executed, a frame count comparison step is further included, where the frame count comparison step includes:
comparing the new frame count with a preset frame count to determine whether the new frame count is greater than the preset frame count;
and if the new frame count is judged to be larger than the preset frame count, obtaining the movement trend of the target according to the change trend of the human-shaped frame.
Optionally, before step S12 is executed, a human-shaped frame comparison step is further included, where the human-shaped frame comparison step includes:
judging whether the movement trend of the target is approaching the electronic device;
if the movement trend of the target is judged to be approaching the electronic device, comparing the newly obtained human-shaped frame with a preset human-shaped frame to judge whether the newly obtained human-shaped frame is larger than the preset human-shaped frame;
and if the newly obtained human-shaped frame is judged to be larger than the preset human-shaped frame, executing step S12. The beneficial effects are that: using this as the trigger condition for human eye detection avoids unnecessary human eye detection, improves the efficiency of human eye detection, and also avoids the problem of human eye detection being unable to start for a long time.
Optionally, before executing step S11, a preset frame count selecting step is further included, where the preset frame count selecting step includes:
selecting or inputting the preset frame count. The beneficial effects are that: this makes it convenient to adapt to the requirements of different scenarios.
Optionally, the obtaining video stream data and then performing human shape detection on the video stream data to obtain a human shape frame includes:
acquiring each frame of image in video stream data, and then sequentially carrying out human shape detection on each frame of image according to the sequence of each frame of image in the video stream data to obtain a human shape frame.
Optionally, if human shape detection performed on the same frame of image yields at least two human-shaped frames, no human-shaped frame is obtained. The beneficial effects are that: this prevents the electronic device from being woken up when several people are present, which helps protect privacy.
Optionally, the performing iris recognition on the image in the human eye region to determine whether to control the electronic device to wake up includes:
acquiring a left eye area image or a right eye area image of the image in the human eye area;
sequentially carrying out image size transformation processing, normalization processing and affine transformation processing on the left eye region image or the right eye region image to obtain a human eye image;
judging whether the human eye image is matched with a preset human eye image or not;
and if the human eye image is judged to match a preset human eye image, controlling the electronic device to wake up. The beneficial effects are that: this allows the wake-up of the electronic device to be controlled accurately.
The invention also provides a control system, which comprises a video stream acquisition unit, a human shape detection unit, a human eye detection unit, an iris recognition unit and a wake-up unit. The video stream acquisition unit is used for acquiring video stream data; the human shape detection unit is used for receiving the video stream data acquired by the video stream acquisition unit and performing human shape detection on it to obtain a human-shaped frame; the human eye detection unit is used for performing human eye detection on the image in the human-shaped frame to obtain a human eye region; the iris recognition unit is used for performing iris recognition on the image in the human eye region to judge whether to control the electronic device to wake up; and the wake-up unit is used for controlling the electronic device to wake up.
The control system has the following beneficial effects: the video stream acquisition unit acquires video stream data; the human shape detection unit receives the video stream data and performs human shape detection on it to obtain a human-shaped frame; the human eye detection unit performs human eye detection on the image in the human-shaped frame to obtain a human eye region; the iris recognition unit performs iris recognition on the image in the human eye region to judge whether to control the electronic device to wake up; and the wake-up unit controls the electronic device to wake up. The electronic device is thus woken up in a timely manner via the video stream, which improves its start-up speed, and combining human eye detection with iris recognition improves the security of the electronic device at start-up.
Optionally, the control system further comprises a frame counting unit, wherein the frame counting unit is configured to add 1 to the frame count to obtain a new frame count, and then replace the frame count with the new frame count.
Optionally, the control system further includes a frame count comparing unit, where the frame count comparing unit is configured to compare the new frame count with a preset frame count to determine whether the new frame count is greater than the preset frame count, and if the new frame count is greater than the preset frame count, obtain a motion trend of the target according to a change trend of the human-shaped frame.
Optionally, the control system further includes a human-shaped frame comparing unit, where the human-shaped frame comparing unit is configured to determine whether a movement trend of the target is close to the electronic device, and if the movement trend of the target is determined to be close to the electronic device, compare the newly obtained human-shaped frame with a preset human-shaped frame to determine whether the newly obtained human-shaped frame is larger than the preset human-shaped frame.
Optionally, the iris identification unit includes a region image acquisition unit, and the region image acquisition unit is configured to acquire a left eye region image or a right eye region image of the image in the human eye region.
Optionally, the iris identification unit further comprises an image size transformation processing unit, and the image size transformation processing unit is configured to perform image size transformation processing.
Optionally, the iris identification unit further includes a normalization processing unit, and the normalization processing unit is configured to perform normalization processing.
Optionally, the iris recognition unit further includes an affine transformation processing unit configured to perform affine transformation processing.
Optionally, the iris identification unit further includes a human eye image matching unit, and the human eye image matching unit is configured to determine whether the human eye image matches a preset human eye image.
Drawings
FIG. 1 is a block diagram of a control system according to the present invention;
FIG. 2 is a flow chart of the control method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. As used herein, the word "comprising" and similar words are intended to mean that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
In view of the problems in the prior art, embodiments of the present invention provide a control system. Referring to fig. 1, the control system 100 includes a video stream acquiring unit 101, a human shape detecting unit 102, a human eye detecting unit 103, an iris identifying unit 104, and a waking unit 105, where the video stream acquiring unit 101 is configured to acquire video stream data; the human shape detection unit 102 is configured to receive the video stream data acquired by the video stream acquisition unit 101, and then perform human shape detection on the video stream data to obtain a human shape frame; the human eye detection unit 103 is configured to perform human eye detection on the image in the human-shaped frame to obtain a human eye region; the iris recognition unit 104 is configured to perform iris recognition on the image in the human eye region to determine whether to control the electronic device to wake up; the wake-up unit 105 is used for controlling the electronic device to wake up.
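For orientation, the unit composition just described can be sketched as a small Python pipeline. This is an illustrative sketch only, not the patented implementation; each unit is a stand-in callable that would be backed by the detection and recognition models described below.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class ControlSystem:
    """Wires the five units of the described control system together (illustrative only)."""
    video_stream_unit: Callable[[], Iterable]                    # yields frames of the video stream
    human_shape_unit: Callable[[object], Optional[tuple]]        # frame -> human-shaped frame (box) or None
    human_eye_unit: Callable[[object, tuple], Optional[tuple]]   # frame, box -> human eye region or None
    iris_unit: Callable[[object, tuple], bool]                   # frame, eye region -> matches preset?
    wake_unit: Callable[[], None]                                # controls the electronic device to wake up

    def run(self) -> None:
        for frame in self.video_stream_unit():
            box = self.human_shape_unit(frame)
            if box is None:
                continue
            eye_region = self.human_eye_unit(frame, box)
            if eye_region is None:
                continue
            if self.iris_unit(frame, eye_region):
                self.wake_unit()
                break
```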
In some embodiments, the control system further comprises a frame count unit for adding 1 to a frame count to obtain a new frame count and then replacing the frame count with the new frame count.
In some embodiments, the control system further includes a frame count comparing unit, where the frame count comparing unit is configured to compare the new frame count with a preset frame count to determine whether the new frame count is greater than the preset frame count, and if the new frame count is greater than the preset frame count, obtain a motion trend of the target according to a change trend of the human-shaped frame.
In some embodiments, the control system further includes a human-shaped frame comparison unit, where the human-shaped frame comparison unit is configured to determine whether a movement trend of the target is approaching the electronic device, and if the movement trend of the target is approaching the electronic device, compare the newly obtained human-shaped frame with a preset human-shaped frame to determine whether the newly obtained human-shaped frame is larger than the preset human-shaped frame.
In some embodiments, the iris recognition unit includes a region image acquisition unit for acquiring a left eye region image or a right eye region image of the image in the human eye region.
In some embodiments, the iris identification unit further includes an image size conversion processing unit for performing image size conversion processing.
In some embodiments, the iris identification unit further comprises a normalization processing unit, and the normalization processing unit is configured to perform normalization processing.
In some embodiments, the iris recognition unit further includes an affine transformation processing unit for performing affine transformation processing.
In some embodiments, the iris identification unit further includes a human eye image matching unit, and the human eye image matching unit is configured to determine whether the human eye image matches a preset human eye image.
FIG. 2 is a flow chart of a control method in some embodiments of the invention. Referring to fig. 2, the control method is implemented by a control system for waking up an electronic device, and includes the following steps:
s11: acquiring video stream data, and then carrying out human shape detection on the video stream data to obtain a human shape frame;
s12: carrying out human eye detection on the image in the human-shaped frame to obtain a human eye area;
s13: and carrying out iris recognition on the image in the human eye area so as to judge whether to control the electronic equipment to wake up.
In some embodiments, human shape detection and human eye detection are performed by a trained neural network model. The training method of the neural network model includes: collecting a custom data set containing face pictures; performing normalization preprocessing on the face pictures in RGB format; obtaining a custom training network model that is compressed based on YOLOv4; inputting the normalized face pictures into the custom training network model for training; calculating the training loss of the face pictures through a loss function, back-propagating the training loss to update the training network model, and finishing training when the performance of the training network model on a validation set meets a preset threshold; and performing network pruning on the trained network model and retraining the pruned model on all of the data at least ten times to obtain the trained neural network model for human shape detection or human eye detection. Human shape detection and human eye detection can also be performed in other ways; the manner of human shape detection and human eye detection is not particularly limited herein.
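As a rough illustration only, the training loop described above could look like the following PyTorch sketch. The model, data loaders, `detection_loss`, `evaluate` metric and `prune_fn` pruning helper are all placeholders assumed here (the patent does not disclose them); only the overall flow (train on normalized RGB pictures, stop when the validation metric meets a preset threshold, prune, retrain at least ten times) follows the description.

```python
import torch

def train_detector(model, train_loader, val_loader, detection_loss,
                   evaluate, val_threshold, prune_fn, max_epochs=100):
    """Train until the validation metric meets the preset threshold, then prune and retrain."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for _ in range(max_epochs):
        model.train()
        for images, targets in train_loader:                  # normalized RGB face pictures and labels
            optimizer.zero_grad()
            loss = detection_loss(model(images), targets)     # training loss via the loss function
            loss.backward()                                   # back-propagate to update the model
            optimizer.step()
        if evaluate(model, val_loader) >= val_threshold:      # performance on the validation set
            break
    pruned = prune_fn(model)                                  # network pruning of the trained model
    pruned_opt = torch.optim.SGD(pruned.parameters(), lr=1e-4, momentum=0.9)
    for _ in range(10):                                       # retrain the pruned model on all data at least ten times
        for images, targets in train_loader:
            pruned_opt.zero_grad()
            detection_loss(pruned(images), targets).backward()
            pruned_opt.step()
    return pruned
```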
In some embodiments, acquiring video stream data and then performing human shape detection on it to obtain a human-shaped frame includes: sequentially acquiring each frame of image in the video stream data and performing human shape detection on each frame to determine whether a human shape is present in it; when a human shape is present in an image, acquiring a human-shaped frame from that image, adding 1 to a frame count to obtain a new frame count, and replacing the frame count with the new frame count, so as to obtain at least one human-shaped frame.
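A minimal sketch of this per-frame loop, assuming a hypothetical `detect_human_shapes(frame)` helper that wraps the human shape detection model and returns a list of bounding boxes:

```python
def collect_human_shape_frames(video_frames, detect_human_shapes):
    """Scan the frames in stream order, collecting human-shaped frames and the frame count."""
    frame_count = 0
    human_boxes = []
    for frame in video_frames:                 # each frame of image in the video stream data
        boxes = detect_human_shapes(frame)     # human shape detection on this frame
        if boxes:                              # a human shape is present in this image
            human_boxes.append(boxes[0])       # acquire the human-shaped frame from the image
            frame_count += 1                   # add 1 to the frame count; the new count replaces the old one
    return human_boxes, frame_count
```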
In some embodiments, before step S12 is executed, a frame count comparison step is further included, and the frame count comparison step includes: comparing the new frame count with a preset frame count to determine whether the new frame count is greater than the preset frame count; and if the new frame count is judged to be greater than the preset frame count, obtaining the movement trend of the target according to the change trend of the human-shaped frame.
In some embodiments, before step S12 is executed, a human-shaped frame comparison step is further included, and the human-shaped frame comparison step includes: judging whether the movement trend of the target is approaching the electronic device; if so, comparing the newly obtained human-shaped frame with a preset human-shaped frame to judge whether the newly obtained human-shaped frame is larger than the preset human-shaped frame; and if the newly obtained human-shaped frame is judged to be larger than the preset human-shaped frame, executing step S12.
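A minimal sketch of these two gating checks, under assumptions: `movement_trend` is a callable that decides whether the target is approaching (one possible form is sketched after the next embodiments), and boxes are (x1, y1, x2, y2) tuples whose pixel area stands in for the size of the human-shaped frame.

```python
def should_run_eye_detection(new_frame_count, preset_frame_count,
                             human_boxes, preset_box_area, movement_trend):
    """Frame count comparison step followed by the human-shaped frame comparison step."""
    if new_frame_count <= preset_frame_count:     # frame count not yet above the preset frame count
        return False
    if not movement_trend(human_boxes):           # target is not approaching the electronic device
        return False
    x1, y1, x2, y2 = human_boxes[-1]              # newly obtained human-shaped frame
    latest_area = max(0, x2 - x1) * max(0, y2 - y1)
    return latest_area > preset_box_area          # larger than the preset human-shaped frame: run step S12
```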
In some embodiments, obtaining the movement trend of the target according to the change trend of the human-shaped frame includes: sequentially comparing the sizes of adjacent human-shaped frames in the order in which the human-shaped frames were acquired; according to the result of each comparison, multiplying a first calculated value by a first amplification threshold or a first reduction threshold to obtain a new first calculated value, and then replacing the first calculated value with the new first calculated value; and comparing the new first calculated value with a first judgment threshold to obtain the change trend of the human-shaped frame, and thereby the movement trend of the target. The first calculated value is greater than 0.
In some embodiments, the first amplification threshold is greater than 1 and less than 2, the first reduction threshold is greater than 0 and less than 1, and the sum of the first amplification threshold and the first reduction threshold is 2. For example, the first amplification threshold is 1.2 and the first reduction threshold is 0.8; as another example, the first amplification threshold is 1.3 and the first reduction threshold is 0.7.
In some embodiments, comparing the new first calculated value with a first judgment threshold to obtain the change trend of the human-shaped frame, and thereby the movement trend of the target, includes: comparing the new first calculated value with the first judgment threshold; if the new first calculated value is greater than the first judgment threshold, judging that the human-shaped frame is becoming larger and therefore that the target is moving toward the electronic device; if the new first calculated value is smaller than the first judgment threshold, judging that the human-shaped frame is becoming smaller and therefore that the target is moving away from the electronic device.
In some embodiments, the first determination threshold is equal to the first calculated value.
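A minimal sketch of this multiplicative trend computation, using the example thresholds 1.2 and 0.8 and, as an assumption not fixed by the description, an initial first calculated value of 1.0 with the first judgment threshold set equal to it (as in the embodiment above). The input is the sequence of human-shaped frame sizes (e.g., box areas) in acquisition order.

```python
def movement_trend(frame_sizes, amplify=1.2, reduce=0.8, initial_value=1.0):
    """Return True when the human-shaped frames trend larger (target approaching), else False."""
    value = initial_value                    # first calculated value, greater than 0
    judgment_threshold = initial_value       # first judgment threshold equal to the first calculated value
    for prev, curr in zip(frame_sizes, frame_sizes[1:]):   # compare adjacent human-shaped frames in order
        value *= amplify if curr > prev else reduce        # first amplification or reduction threshold
    return value > judgment_threshold        # larger value: frames growing, target approaching the device
```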
In some embodiments, before performing step S11, the method further includes a preset frame count selecting step, where the preset frame count selecting step includes: selecting or inputting the preset frame count.
In some embodiments, acquiring video stream data and then performing human shape detection on it to obtain a human-shaped frame includes: acquiring each frame of image in the video stream data and then performing human shape detection on each frame in the order in which the frames appear in the video stream to obtain a human-shaped frame. If human shape detection on the same frame of image yields at least two human-shaped frames, no human-shaped frame is obtained.
In some embodiments, performing iris recognition on the image in the human eye region to judge whether to control the electronic device to wake up includes: acquiring a left eye region image or a right eye region image from the image in the human eye region; sequentially performing image size transformation, normalization and affine transformation on the left eye region image or the right eye region image to obtain a human eye image; judging whether the human eye image matches a preset human eye image; and, if it is judged to match, controlling the electronic device to wake up. Both the left eye region image and the right eye region image are RGB images.
In some embodiments, the image size transformation changes the size of the right eye region image to a target size, for example 52 × 52. The normalization divides each of the R, G and B values of the right eye region image by 127.5 and subtracts 1 to obtain the new R, G and B values. The pupil center point coordinates, the sclera right boundary point coordinates and the sclera left boundary point coordinates in the right eye region image are then acquired, and an affine transformation is performed using these coordinates together with the preset pupil center point coordinates, the preset sclera right boundary point coordinates and the preset sclera left boundary point coordinates to obtain the human eye image, which is input into an iris recognition network for recognition. When the target size after the image size transformation is 52 × 52, the preset pupil center point coordinates are (27.0347, 28.1562), the preset sclera right boundary point coordinates are (40.2615, 30.6483), and the preset sclera left boundary point coordinates are (16.9534, 29.5875). Specifically, the iris recognition network extracts features from the human eye image to obtain a 128-dimensional vector, and the cosine distance between this vector and the id vector of the preset human eye image is calculated; both vectors are normalized so that their moduli equal 1. By the principle of cosine distance, a larger value means a smaller included angle between the two vectors and a higher similarity. If the cosine distance is greater than the cosine distance threshold, the human eye image is judged to match the preset human eye image; otherwise, it is judged not to match. The iris recognition network is well known in the art and will not be described in detail here.
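A minimal sketch of this preprocessing and matching, using OpenCV and NumPy. The iris recognition network is represented by a placeholder `embed(...)` callable that returns a 128-dimensional vector, and the cosine distance threshold is an assumed parameter; the 52 × 52 resize, the /127.5 − 1 normalization, the three-point affine alignment and the preset landmark coordinates follow the values given above.

```python
import cv2
import numpy as np

TARGET_SIZE = (52, 52)
PRESET_POINTS = np.float32([[27.0347, 28.1562],   # preset pupil center point
                            [40.2615, 30.6483],   # preset sclera right boundary point
                            [16.9534, 29.5875]])  # preset sclera left boundary point

def preprocess_eye(eye_img, pupil, sclera_right, sclera_left):
    """Resize, normalize to [-1, 1], and align the eye image with a three-point affine transform."""
    img = cv2.resize(eye_img, TARGET_SIZE)                 # image size transformation to 52 x 52
    img = img.astype(np.float32) / 127.5 - 1.0             # each channel divided by 127.5, minus 1
    src = np.float32([pupil, sclera_right, sclera_left])   # detected landmark coordinates (after resize)
    M = cv2.getAffineTransform(src, PRESET_POINTS)         # affine transform onto the preset landmarks
    return cv2.warpAffine(img, M, TARGET_SIZE)             # aligned human eye image

def matches_preset(eye_img, preset_id_vector, embed, threshold=0.5):
    """Cosine-distance match between the eye embedding and the preset 128-dimensional id vector."""
    v = embed(eye_img)                                     # 128-dimensional feature vector
    v = v / np.linalg.norm(v)                              # normalize so the modulus equals 1
    p = preset_id_vector / np.linalg.norm(preset_id_vector)
    return float(np.dot(v, p)) > threshold                 # larger value: smaller angle, higher similarity
```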
Although the embodiments of the present invention have been described in detail hereinabove, it is apparent to those skilled in the art that various modifications and variations can be made to these embodiments. However, it is to be understood that such modifications and variations are within the scope and spirit of the present invention as set forth in the following claims. Moreover, the invention as described herein is capable of other embodiments and of being practiced or of being carried out in various ways.

Claims (16)

1. A control method for controlling wake-up of an electronic device, comprising the steps of:
s1: acquiring video stream data, and then carrying out human shape detection on the video stream data to obtain a human shape frame;
s2: carrying out human eye detection on the image in the human-shaped frame to obtain a human eye area;
s3: and carrying out iris recognition on the image in the human eye area so as to judge whether to control the electronic equipment to wake up.
2. The control method according to claim 1, wherein the obtaining video stream data and then performing human shape detection on the video stream data to obtain a human shape frame comprises:
sequentially acquiring each frame of image in the video stream data, sequentially carrying out human shape detection on each frame of image to determine whether human shape exists in each frame of image, when human shape exists in the image, acquiring a human shape frame from the image, adding 1 to a frame count to obtain a new frame count, and replacing the frame count with the new frame count to obtain at least one human shape frame.
3. The control method according to claim 2, wherein said step S2 is preceded by a frame count comparison step, said frame count comparison step comprising:
comparing the new frame count with a preset frame count to determine whether the new frame count is greater than the preset frame count;
and if the new frame count is judged to be larger than the preset frame count, obtaining the movement trend of the target according to the change trend of the human-shaped frame.
4. The control method according to claim 3, characterized in that before executing the step S2, the method further comprises a human-shaped frame comparison step, wherein the human-shaped frame comparison step comprises:
judging whether the movement trend of the target is close to the electronic equipment or not;
if the motion trend of the target is judged to be close to the electronic equipment, comparing the newly obtained human-shaped frame with a preset human-shaped frame to judge whether the newly obtained human-shaped frame is larger than the preset human-shaped frame;
and if the newly obtained human-shaped frame is judged to be larger than the preset human-shaped frame, executing the step S2.
5. The control method according to claim 1, wherein the obtaining video stream data and then performing human shape detection on the video stream data to obtain a human shape frame comprises:
acquiring each frame of image in video stream data, and then sequentially carrying out human shape detection on each frame of image according to the sequence of each frame of image in the video stream data to obtain a human shape frame.
6. The control method according to claim 5, wherein if human shape detection is performed on the image of the same frame to obtain at least two human shape frames, no human shape frame is obtained.
7. The control method according to claim 1, wherein the performing iris recognition on the image in the human eye region to determine whether to control the electronic device to wake up comprises:
acquiring a left eye area image or a right eye area image of the image in the human eye area;
sequentially carrying out image size transformation processing, normalization processing and affine transformation processing on the left eye region image or the right eye region image to obtain a human eye image;
judging whether the human eye image is matched with a preset human eye image or not;
and if the human eye image is judged to be matched with a preset human eye image, controlling the electronic equipment to wake up.
8. A control system for implementing the control method according to any one of claims 1 to 7, the control system comprising a video stream acquisition unit, a human shape detection unit, a human eye detection unit, an iris recognition unit and a wake-up unit, wherein the video stream acquisition unit is configured to acquire video stream data; the human-shaped detection unit is used for receiving the video stream data acquired by the video stream acquisition unit and then carrying out human-shaped detection on the video stream data to obtain a human-shaped frame; the human eye detection unit is used for carrying out human eye detection on the image in the human-shaped frame so as to obtain a human eye area; the iris recognition unit is used for carrying out iris recognition on the image in the human eye region so as to judge whether to control the electronic equipment to wake up or not; the awakening unit is used for controlling the electronic equipment to be awakened.
9. The control system of claim 8, further comprising a frame count unit configured to increment a frame count by 1 to obtain a new frame count and then replace the frame count with the new frame count.
10. The control system according to claim 9, further comprising a frame count comparing unit, wherein the frame count comparing unit is configured to compare the new frame count with a preset frame count to determine whether the new frame count is greater than the preset frame count, and if the new frame count is greater than the preset frame count, obtain a motion trend of the target according to a change trend of the human-shaped frame.
11. The control system according to claim 10, further comprising a human-shaped frame comparison unit, wherein the human-shaped frame comparison unit is configured to determine whether a movement trend of the target is approaching the electronic device, and if the movement trend of the target is approaching the electronic device, compare the newly obtained human-shaped frame with a preset human-shaped frame to determine whether the newly obtained human-shaped frame is larger than the preset human-shaped frame.
12. The control system according to claim 11, wherein the iris recognition unit includes a regional image acquisition unit for acquiring a left-eye region image or a right-eye region image of the image in the human eye region.
13. The control system according to claim 12, wherein the iris recognition unit further includes an image size conversion processing unit for performing image size conversion processing.
14. The control system according to claim 13, wherein the iris recognition unit further includes a normalization processing unit for performing normalization processing.
15. The control system according to claim 14, wherein the iris recognition unit further comprises an affine transformation processing unit configured to perform affine transformation processing.
16. The control system according to claim 15, wherein the iris recognition unit further comprises a human eye image matching unit for determining whether the human eye image matches a preset human eye image.
CN202111420380.0A 2021-11-25 2021-11-25 Control method and control system Pending CN114253614A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111420380.0A CN114253614A (en) 2021-11-25 2021-11-25 Control method and control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111420380.0A CN114253614A (en) 2021-11-25 2021-11-25 Control method and control system

Publications (1)

Publication Number Publication Date
CN114253614A true CN114253614A (en) 2022-03-29

Family

ID=80791207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111420380.0A Pending CN114253614A (en) 2021-11-25 2021-11-25 Control method and control system

Country Status (1)

Country Link
CN (1) CN114253614A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339603A (en) * 2008-08-07 2009-01-07 电子科技大学中山学院 Method for selecting qualified iris image from video frequency stream
CN104850842A (en) * 2015-05-21 2015-08-19 北京中科虹霸科技有限公司 Mobile terminal iris identification man-machine interaction method
US9830708B1 (en) * 2015-10-15 2017-11-28 Snap Inc. Image segmentation of a video stream
CN107992866A (en) * 2017-11-15 2018-05-04 上海聚虹光电科技有限公司 Biopsy method based on video flowing eye reflective spot
US20180373058A1 (en) * 2017-06-26 2018-12-27 International Business Machines Corporation Dynamic contextual video capture
CN110427811A (en) * 2019-06-21 2019-11-08 武汉倍特威视系统有限公司 Skeleton based on video stream data is fought recognition methods
CN110781778A (en) * 2019-10-11 2020-02-11 珠海格力电器股份有限公司 Access control method and device, storage medium and home system
CN112790758A (en) * 2019-11-13 2021-05-14 创新工场(北京)企业管理股份有限公司 Human motion measuring method and system based on computer vision and electronic equipment
CN113554693A (en) * 2021-09-18 2021-10-26 深圳市安软慧视科技有限公司 Correlation and judgment method, device and storage medium for edge deployment image
CN113689585A (en) * 2021-10-25 2021-11-23 深圳市安软慧视科技有限公司 Non-inductive attendance card punching method, system and related equipment

Similar Documents

Publication Publication Date Title
CN105893920B (en) Face living body detection method and device
Kukharev et al. Visitor identification-elaborating real time face recognition system
CN104794465B (en) A kind of biopsy method based on posture information
CN109190522B (en) Living body detection method based on infrared camera
CN107292300A (en) A kind of face recognition device and method
CN108549853B (en) Image processing method, mobile terminal and computer readable storage medium
CN111367415B (en) Equipment control method and device, computer equipment and medium
CN107844742A (en) Facial image glasses minimizing technology, device and storage medium
CN108921010B (en) Pupil detection method and detection device
CN112150692A (en) Access control method and system based on artificial intelligence
CN110175553B (en) Method and device for establishing feature library based on gait recognition and face recognition
CN111046769A (en) Queuing time detection method, device and system
CN109697417A (en) A kind of production management system for pitch-controlled system cabinet
Zhang et al. A novel efficient method for abnormal face detection in ATM
CN114253614A (en) Control method and control system
CN106650363A (en) Identity recognition method and system
CN106447840A (en) Multifunctional intelligent entrance guard system
CN110135362A (en) A kind of fast face recognition method based under infrared camera
CN113642546B (en) Multi-face tracking method and system
CN109145758A (en) A kind of recognizer of the face based on video monitoring
CN111062294B (en) Passenger flow queuing time detection method, device and system
CN114253611A (en) Control method and control system
CN112257831A (en) Positioning system based on RFID and face recognition technology
CN112070956A (en) Identity recognition system based on face image
CN113821109B (en) Control method and control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination