CN112019754A - Face recognition monitoring method and system - Google Patents

Face recognition monitoring method and system

Info

Publication number
CN112019754A
CN112019754A (application CN202010976097.5A)
Authority
CN
China
Prior art keywords
face
facial feature
monitoring
feature parameters
matched
Prior art date
Legal status
Pending
Application number
CN202010976097.5A
Other languages
Chinese (zh)
Inventor
邓洋江
Current Assignee
Guangdong Ruizhu Intelligent Technology Co.,Ltd.
Original Assignee
Ruizhu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ruizhu Technology Co ltd
Priority to CN202010976097.5A
Publication of CN112019754A


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a monitoring method comprising the following steps: acquiring a monitoring image captured by a camera; determining whether the monitoring image contains a face image and, if so, extracting a first facial feature parameter from the face image; and, if the first facial feature parameter matches pre-stored facial feature parameters, linking the cameras in a preset monitoring area to disable follow-up monitoring. The invention also discloses a monitoring system and a readable storage medium. The invention aims to prevent the camera from follow-up shooting of authorized users, thereby protecting user privacy and improving the intelligence of area monitoring.

Description

Face recognition monitoring method and system
Technical Field
The invention relates to the technical field of monitoring, in particular to a monitoring method and a monitoring system based on face recognition.
Background
With the development of science and technology, people's safety awareness has continuously increased, and cameras are now widely used in communities, homes and similar areas. To provide area security monitoring, existing cameras follow and shoot any person who enters the area in real time, and only stop this monitoring after the person has left the area. However, with this approach an authorized user such as a property owner is still followed and monitored while moving within the area, which poses a privacy risk and makes the monitoring scheme less intelligent.
Disclosure of Invention
The main object of the present invention is to provide a monitoring method that prevents the camera from follow-up shooting of authorized users, thereby protecting user privacy and improving the intelligence of area monitoring.
In order to achieve the above object, the present invention provides a monitoring method, including the steps of:
acquiring a monitoring image shot by a camera;
determining whether the monitoring image contains a face image and, if so, extracting a first facial feature parameter from the face image;
and, if the first facial feature parameter matches pre-stored facial feature parameters, linking the cameras in a preset monitoring area to disable follow-up monitoring.
Optionally, before the step of linking the cameras in the preset monitoring area to disable follow-up monitoring, the method further includes:
determining whether the first facial feature parameter matches the pre-stored facial feature parameters;
if the first facial feature parameter matches the pre-stored facial feature parameters, executing the step of linking the cameras in the preset monitoring area to disable follow-up monitoring;
and if the first facial feature parameter does not match the pre-stored facial feature parameters, controlling the camera to follow and monitor the person corresponding to the first facial feature parameter, and returning to the step of acquiring the monitoring image captured by the camera.
Optionally, the preset monitoring area is provided with at least two cameras, different cameras are used to capture the monitoring images at different angles, the pre-stored facial feature parameters include pre-stored frontal-face feature parameters, and the step of determining whether the first facial feature parameter matches the pre-stored facial feature parameters includes:
determining whether the face angle corresponding to the first facial feature parameter includes a frontal face;
if the face angle corresponding to the first facial feature parameter includes a frontal face, determining whether the first facial feature parameter corresponding to the frontal face matches the pre-stored frontal-face feature parameters;
if the first facial feature parameter corresponding to the frontal face matches the pre-stored frontal-face feature parameters, determining that the first facial feature parameter matches the pre-stored facial feature parameters;
and if the first facial feature parameter corresponding to the frontal face does not match the pre-stored frontal-face feature parameters, determining that the first facial feature parameter does not match the pre-stored facial feature parameters.
Optionally, after the step of determining whether the face angle corresponding to the first facial feature parameter includes a frontal face, the method further includes:
if the face angle corresponding to the first facial feature parameter does not include a frontal face, generating a second facial feature parameter corresponding to the frontal face according to the first facial feature parameter;
if the second facial feature parameter matches the pre-stored frontal-face feature parameters, determining that the first facial feature parameter matches the pre-stored facial feature parameters;
and if the second facial feature parameter does not match the pre-stored frontal-face feature parameters, determining that the first facial feature parameter does not match the pre-stored facial feature parameters.
Optionally, the pre-stored facial feature parameters further include pre-stored side-face feature parameters, and after the step of determining whether the face angle corresponding to the first facial feature parameter includes a frontal face, the method further includes:
if the face angle corresponding to the first facial feature parameter does not include a frontal face, determining whether the first facial feature parameter matches the pre-stored side-face feature parameters;
if the first facial feature parameter matches the pre-stored side-face feature parameters, determining that the first facial feature parameter matches the pre-stored facial feature parameters;
and if the first facial feature parameter does not match the pre-stored side-face feature parameters, determining that the first facial feature parameter does not match the pre-stored facial feature parameters.
Optionally, the preset monitoring area is provided with at least two cameras, different cameras are used to capture the monitoring images at different angles, the pre-stored facial feature parameters include pre-stored side-face feature parameters, and the step of determining whether the first facial feature parameter matches the pre-stored facial feature parameters includes:
if the face angle corresponding to the first facial feature parameter includes a side face, determining whether the first facial feature parameter corresponding to the side face matches the pre-stored side-face feature parameters;
if the first facial feature parameter corresponding to the side face matches the pre-stored side-face feature parameters, determining that the first facial feature parameter matches the pre-stored facial feature parameters;
and if the first facial feature parameter corresponding to the side face does not match the pre-stored side-face feature parameters, determining that the first facial feature parameter does not match the pre-stored facial feature parameters.
Optionally, before the step of linking the cameras in the preset monitoring area to disable follow-up monitoring, the method further includes:
if the first facial feature parameter matches the pre-stored facial feature parameters, acquiring user identity information corresponding to the pre-stored facial feature parameters that match the first facial feature parameter;
verifying, according to the user identity information, the access permission of the person corresponding to the first facial feature parameter in the preset monitoring area;
if the verification passes, executing the step of linking the cameras in the preset monitoring area to disable follow-up monitoring;
and if the verification fails, controlling the camera to follow and monitor the person corresponding to the first facial feature parameter.
Optionally, the step of linking the cameras in the preset monitoring area to disable follow-up monitoring includes:
if the preset monitoring area has an associated monitoring area, verifying, according to the user identity information, the access permission of the person corresponding to the first facial feature parameter in the associated monitoring area;
and if the verification passes, linking the cameras in both the preset monitoring area and the associated monitoring area to disable follow-up monitoring.
Optionally, after the step of linking the cameras in the preset monitoring area to disable follow-up monitoring, the method further includes:
controlling the cameras in the preset monitoring area to switch to a set shooting state for operation;
if no human body is present in the preset monitoring area, returning to the step of acquiring the monitoring image captured by the camera;
wherein the set shooting state includes a first shooting state or a second shooting state, the first shooting state being a state in which the shooting direction of the camera faces a set sub-area within the preset monitoring area, and the second shooting state being a state in which the shooting direction of the camera changes dynamically according to a set rule.
In addition, in order to achieve the above object, the present application also provides a monitoring system, including:
at least two cameras, wherein different cameras are used to capture monitoring images at different angles; and
a control device comprising a memory, a processor and a monitoring program stored on the memory and executable on the processor, the processor being connected to the cameras, wherein the monitoring program, when executed by the processor, implements the steps of the monitoring method described in any of the above.
The invention provides a monitoring method that acquires a monitoring image captured by a camera, extracts a first facial feature parameter from the face image when the monitoring image contains one, and, if the first facial feature parameter matches a pre-stored facial feature parameter, links the cameras in a preset monitoring area to disable follow-up monitoring. An authorized user such as a property owner only needs to pre-store his or her facial feature parameters; when the authorized user enters the monitoring area and the camera captures the face image, the user is not followed and monitored by the camera. The camera therefore does not carry out follow-up shooting of authorized users, the privacy of users who legitimately enter the area is protected, follow-up monitoring is enabled or disabled according to the identity of the person entering the area, and the intelligence of area monitoring is effectively improved.
Drawings
FIG. 1 is a schematic diagram of a monitoring system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of a monitoring method according to the present invention;
FIG. 3 is a schematic flow chart of another embodiment of the monitoring method of the present invention;
FIG. 4 is a detailed flowchart of step S00 in FIG. 3;
FIG. 5 is a schematic flow chart illustrating a detailed process of step S00 in FIG. 3 according to another embodiment of the monitoring method of the present invention;
FIG. 6 is a schematic flow chart of a monitoring method according to another embodiment of the present invention;
fig. 7 is a flowchart illustrating a monitoring method according to yet another embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: acquiring a monitoring image captured by a camera; determining whether the monitoring image contains a face image and, if so, extracting a first facial feature parameter from the face image; and, if the first facial feature parameter matches pre-stored facial feature parameters, linking the cameras in a preset monitoring area to disable follow-up monitoring.
In the prior art, when any person is detected entering the area, the camera follows and shoots that person in real time and only stops monitoring after the person has left the area. With this approach, when an authorized user such as a property owner moves within a private area, the camera still follows and shoots the user, which feels oppressive to the user and seriously degrades the user experience.
The invention provides a solution that prevents the camera from follow-up shooting of authorized users, protects user privacy and improves the intelligence of area monitoring.
An embodiment of the invention provides a monitoring system for monitoring the area in which the monitoring system is installed.
Specifically, referring to fig. 1, the monitoring system includes a camera 1 and a control device 2 connected to the camera 1. The camera 1 is installed in the monitoring area and is used to capture monitoring images of the area. The control device 2 is used for image recognition (such as recognizing a face image in a monitoring image), analysis of the recognition result (such as extracting and comparing facial feature parameters of a face image), and monitoring control of the camera 1 (such as starting or stopping follow-up monitoring).
Further, the monitoring system includes at least two cameras 1, which can be installed at different positions and angles in the monitored area to capture monitoring images from different angles. When the monitoring system needs to monitor several different areas, the cameras 1 can be distributed across those areas, with at least two cameras 1 arranged in each monitoring area so that each area is covered from different angles. Specifically, as shown in fig. 1, when the monitored space includes monitoring area I and monitoring area II, cameras A and B are arranged in monitoring area I, cameras C and D are arranged in monitoring area II, and cameras A, B, C and D are communicatively connected to the control device 2.
Specifically, in the embodiment of the present invention, referring to fig. 1, the control device 2 includes: a processor 2001 (e.g., CPU), memory 2002, and the like. The memory 2002 may be a high-speed RAM memory or a non-volatile memory (e.g., a disk memory). The memory 2002 may alternatively be a storage device separate from the processor 2001 described previously. The camera 1 and the memory 2002 described above are connected to the processor 2001.
As shown in fig. 1, a monitoring program may be included in the memory 2002 as a readable storage medium. In the apparatus shown in fig. 1, the processor 2001 may be configured to call the monitoring program stored in the memory 2002 and execute the relevant steps of the monitoring method in the following embodiments.
Those skilled in the art will appreciate that the configuration shown in fig. 1 is not limiting: the device may include more or fewer components than shown, some components may be combined, or the components may be arranged differently. In one embodiment, the image-recognition module and the recognition-result-analysis module of the control device 2 may be embedded in the camera, while the camera-control module is arranged separately from the camera and connected to the recognition-result-analysis module. In another embodiment, the image-recognition module, the recognition-result-analysis module and the camera-control module may all be integrated in the same device.
Based on this monitoring system, an embodiment of the invention further provides a monitoring method for controlling the monitoring of the area covered by the monitoring system.
Referring to fig. 2, an embodiment of a monitoring method of the present application is provided. In this embodiment, the monitoring method includes:
step S10, acquiring a monitoring image shot by a camera;
the camera in the monitoring area collects the image of the monitoring area in real time. And acquiring data acquired by the camera to obtain the monitoring image.
When the number of the cameras in the monitoring area is one, acquiring data acquired by the equipment as a monitoring image; when the number of the cameras in the monitoring area is more than one, the data collected by each camera is acquired as the monitoring image.
Step S20, determining whether the monitoring image includes a face image;
If yes, step S30 and step S40 are executed; if not, the process may return to step S10, or, when a human body image is present in the monitoring image, the camera may be controlled to follow and monitor the person corresponding to the human body image before returning to step S10.
Human body recognition is performed on the acquired monitoring image. When the monitoring image is found to contain a human body image, the region of the head image within the human body image is determined and face recognition is performed on that head image. If the head image contains facial features, it is determined that a face image exists in the monitoring image; if it does not, it is determined that no face image exists in the monitoring image. A face image appearing in the monitoring image indicates that a person has entered the monitored area; the absence of a face image indicates that no person has entered the area.
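As an illustration of this two-stage check (human body detection followed by face detection within the head region), the following minimal Python sketch uses OpenCV's stock person and face detectors. The patent does not name any specific detection algorithm, so the detectors, the head-region heuristic and the parameters below are assumptions.

```python
import cv2

# Stand-in detectors; the patent does not specify any particular algorithm.
person_detector = cv2.HOGDescriptor()
person_detector.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def monitoring_image_contains_face(frame):
    """Return True if a human body is found and its head region contains a face."""
    bodies, _ = person_detector.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in bodies:
        # Assume the head occupies roughly the top quarter of the body box.
        head = frame[y:y + h // 4, x:x + w]
        gray = cv2.cvtColor(head, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            return True
    return False
```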
Step S30, extracting a first facial feature parameter from the face image;
the first facial feature parameters specifically refer to characterization parameters that characterize facial features of a person appearing in the currently monitored image. Specifically, in this embodiment, the first facial feature parameter is a feature code obtained by processing an image by using a preset facial feature extraction algorithm. The specific form of the feature code is a character string.
And when the face image exists in the monitored image, processing the face image by adopting a face feature extraction algorithm, and taking the result obtained by processing as a first face feature parameter.
Step S40, if the first facial feature parameter matches the pre-stored facial feature parameters, linking the cameras in the preset monitoring area to disable follow-up monitoring.
The pre-stored facial feature parameters are facial feature parameters (which may include frontal-face feature parameters and/or side-face feature parameters) of authorized users allowed to enter the area (such as property owners of the monitored area), collected and stored in advance. Before step S10, a face image of an authorized user may be acquired, feature extraction may be performed on it in the same manner as for the first facial feature parameter, and the extracted features stored to form the pre-stored facial feature parameters. When there is more than one monitored area, the areas an authorized user is allowed to enter may be determined as target areas, and the pre-stored facial feature parameters stored in association with those target areas; there may be one or more target areas. The correspondence between a monitoring area and its authorized users can be generated directly from user-configured parameters; alternatively, after area request information and pre-stored facial feature parameters are submitted by a user, the user is determined to be an authorized user of the corresponding monitoring area once the server approves the request. The authorized users of different monitored areas may be the same or different. For example, if the monitored areas include A, B and C, areas A and B allow user 1 to enter, and areas B and C allow user 2 to enter, then the pre-stored facial feature parameters of user 1 are stored in association with the area identification information of areas A and B, and those of user 2 with the area identification information of areas B and C. On this basis, when there is more than one monitoring area and different cameras are distributed across them, the monitoring area of the camera that captured each face image is determined, the corresponding pre-stored facial feature parameters are retrieved based on that area's identification information, and they are compared with the current first facial feature parameter of that face image.
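Purely as an illustration of how pre-stored facial feature parameters might be kept in association with one or more target areas, the sketch below uses a simple in-memory registry; the class names, fields and area identifiers are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class EnrolledUser:
    user_id: str
    identity_info: str                 # e.g. "resident", "visitor"
    feature_codes: list[str]           # pre-stored facial feature parameters (strings)
    allowed_areas: set[str] = field(default_factory=set)  # target monitoring areas

class FaceRegistry:
    """In-memory registry of pre-stored facial feature parameters per monitoring area."""

    def __init__(self):
        self._users: dict[str, EnrolledUser] = {}

    def enroll(self, user: EnrolledUser) -> None:
        self._users[user.user_id] = user

    def features_for_area(self, area_id: str) -> dict[str, list[str]]:
        """Return {user_id: feature_codes} for users allowed in the given area."""
        return {uid: u.feature_codes
                for uid, u in self._users.items()
                if area_id in u.allowed_areas}

# Example: user 1 is allowed in areas A and B, user 2 in areas B and C.
registry = FaceRegistry()
registry.enroll(EnrolledUser("user1", "resident", ["<code-1>"], {"A", "B"}))
registry.enroll(EnrolledUser("user2", "resident", ["<code-2>"], {"B", "C"}))
```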
The first facial feature parameter extracted from the current monitoring image is compared with the pre-stored facial feature parameters. If the similarity between the first facial feature parameter and a pre-stored facial feature parameter is higher than a set threshold, the current first facial feature parameter matches the pre-stored facial feature parameter; if the similarity does not reach the set threshold, the current first facial feature parameter does not match the pre-stored facial feature parameter.
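The patent only requires a similarity measure and a set threshold; it does not specify either. The sketch below assumes numeric feature vectors and cosine similarity, with an arbitrary threshold value.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed value; the patent only refers to "a set threshold"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_prestored(first_feature: np.ndarray,
                      prestored_features: list[np.ndarray]) -> bool:
    """True if the first facial feature parameter matches any pre-stored parameter."""
    return any(cosine_similarity(first_feature, p) > MATCH_THRESHOLD
               for p in prestored_features)
```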
The preset monitoring area refers to a monitoring area bound in advance to the camera that captured the first facial feature parameter. It may include the monitoring area in which that camera is located, and may also include other monitoring areas beyond it.
Follow-up monitoring of a human body means that the shooting direction of the camera moves along with the person within the preset monitoring area and the camera continuously captures images of that person.
Linking the cameras in the preset monitoring area to disable follow-up monitoring means that, when there is more than one camera in the preset monitoring area and one of them captures a first facial feature parameter that matches the pre-stored facial feature parameters, all cameras in the preset monitoring area stop follow-up monitoring of the person corresponding to that first facial feature parameter.
Disabling follow-up monitoring may apply only to the person corresponding to the first facial feature parameter, or to everyone in the monitored area. In the first case, follow-up monitoring is not performed on the matched person but is maintained for other persons. In the second case, follow-up monitoring is disabled for everyone in the preset monitoring area, so that when an authorized user and an unauthorized person enter the area together, the camera performs no follow-up monitoring at all.
It should be noted that after the camera's follow-up monitoring is disabled, the process returns to step S10, and when a new face appears it is determined again whether follow-up monitoring needs to be enabled.
For example, let one monitoring area be monitoring area I, with cameras A and B arranged in it. When a monitoring image captured by camera A or B matches the pre-stored facial feature parameters, cameras A and B simultaneously stop following the person matching those parameters; when the captured monitoring image does not match, at least one of A and B follows and monitors the unmatched person. When the monitored space includes monitoring area I and monitoring area II, with cameras A and B in area I and cameras C and D in area II, a match captured in area I stops cameras A and B from following the matched person, while cameras C and D in area II may either be left uncontrolled or also be stopped from following that person. Likewise, when a monitoring image captured by camera C or D matches the pre-stored facial feature parameters, cameras C and D stop following the matched person, while cameras A and B in area I may either be left uncontrolled or also be stopped. In this way, once one camera captures a face matching the pre-stored facial feature parameters, all cameras in its area, and even in other areas, stop follow-up monitoring. The result of identifying an authorized user is thus shared across the space and the independent identification process of each device is simplified; even when some devices cannot capture the user's face because of their viewing angle, they can still stop follow-up monitoring in time based on the recognition results of cameras that did capture the face, further improving the user experience.
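The linkage described above can be read as broadcasting the match result to every camera bound to the matched area(s). The following sketch shows that control flow with a hypothetical camera interface; real pan/tilt cameras would expose a device-specific API.

```python
class Camera:
    """Hypothetical pan/tilt camera wrapper; the real control interface is device-specific."""
    def __init__(self, name: str):
        self.name = name
        self.following = False

    def start_following(self, person_id: str) -> None:
        self.following = True
        print(f"{self.name}: following {person_id}")

    def stop_following(self) -> None:
        self.following = False
        print(f"{self.name}: follow-up monitoring disabled")

def link_disable_following(cameras_by_area: dict[str, list[Camera]],
                           matched_areas: list[str]) -> None:
    """When one camera matches a pre-stored face, disable following on every
    camera in each bound monitoring area (the 'linkage' of step S40)."""
    for area in matched_areas:
        for cam in cameras_by_area.get(area, []):
            cam.stop_following()

# Example: a match seen by camera A in area I also stops camera B in area I.
cams = {"I": [Camera("A"), Camera("B")], "II": [Camera("C"), Camera("D")]}
link_disable_following(cams, ["I"])
```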
The monitoring method provided by the embodiment of the invention acquires a monitoring image captured by a camera, extracts a first facial feature parameter when the monitoring image contains a face image, and, if the first facial feature parameter matches a pre-stored facial feature parameter, links the cameras in the preset monitoring area to disable follow-up monitoring. An authorized user such as a property owner only needs to pre-store his or her facial feature parameters; when the authorized user enters the monitoring area and the camera captures the face image, the user is not followed and monitored. The camera therefore does not carry out follow-up shooting of authorized users, the privacy of users who legitimately enter the area is protected, follow-up monitoring is enabled or disabled according to the identity of the person entering the area, and the intelligence of area monitoring is effectively improved.
Further, based on any of the above embodiments, a further embodiment of the monitoring method of the present application is provided. Referring to fig. 3, the step of linking the cameras in the preset monitoring area to disable follow-up monitoring in step S40 is defined as step S40a, and after step S30 the method may further include:
Step S00, determining whether the first facial feature parameter matches the pre-stored facial feature parameters;
if so, step S40a can be performed; if not, after step S50 is executed, the process returns to step S10.
Step S50, controlling the camera to follow and monitor the person corresponding to the first facial feature parameter.
Specifically, the shooting direction of the camera is controlled to move along with the person corresponding to the first face characteristic parameter and continuously shoot the image of the person.
In this embodiment, persons who do not match the pre-stored facial feature parameters are followed and monitored, while follow-up monitoring is disabled when the captured facial feature parameters match the pre-stored facial feature parameters. Area security is therefore maintained, and follow-up monitoring is stopped promptly once an authorized user's identity is confirmed by face recognition, protecting user privacy and improving the intelligence of area monitoring.
Further, in this embodiment the preset monitoring area is provided with at least two cameras, different cameras are used to capture monitoring images at different angles, and the pre-stored facial feature parameters include pre-stored frontal-face feature parameters. On this basis, referring to fig. 4, step S00 includes:
Step S01, determining whether the face angle corresponding to the first facial feature parameter includes a frontal face;
If the face angle corresponding to the first facial feature parameter includes a frontal face, step S02 is executed; if it does not, step S03 is executed.
If the face angle corresponding to the first facial feature parameter does not include a frontal face, the face angle is a side face.
The case where the face angle corresponding to the first facial feature parameter includes a frontal face covers the following situations: there is a single first facial feature parameter and its face angle is a frontal face; there is more than one first facial feature parameter and all of their face angles are frontal faces; or there is more than one first facial feature parameter and their face angles include both frontal faces and side faces.
In this embodiment, facial feature parameters are extracted from a face image by applying a facial feature extraction algorithm corresponding to the frontal face. On that basis, whether the face angle corresponding to the first facial feature parameter includes a frontal face is judged as follows: determine whether the first facial feature parameter conforms to a set rule for the output of that extraction algorithm; if it does, the first facial feature parameter is judged to be a frontal-face feature parameter; if it does not, it is judged to be a side-face feature parameter. The set rule refers to a specific property (such as the length of the character string, or the numerical range of particular bytes within it) satisfied by the result produced when the frontal-face feature extraction algorithm processes a frontal face image.
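The set rule is described only as a property of the extraction algorithm's output, such as the string length or the value range of particular bytes. Purely for illustration, the sketch below assumes a hypothetical encoding in which frontal-face feature codes have a fixed length and start with a marker character; both assumptions are not from the patent.

```python
FRONTAL_CODE_LENGTH = 128   # hypothetical; depends on the extraction algorithm
FRONTAL_MARKER = "F"        # hypothetical marker written by the frontal-face extractor

def is_frontal_feature(feature_code: str) -> bool:
    """Apply the 'set rule': does this feature code look like the output the
    frontal-face extraction algorithm produces for a frontal face?"""
    return (len(feature_code) == FRONTAL_CODE_LENGTH
            and feature_code[0] == FRONTAL_MARKER)
```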
Step S02, determining whether the first facial feature parameter corresponding to the frontal face matches the pre-stored frontal-face feature parameters;
if so, go to step S05, and if not, go to step S06.
When there is more than one first facial feature parameter and some are frontal-face feature parameters while others are side-face feature parameters, the frontal-face parameters are matched against the pre-stored frontal-face feature parameters and the side-face parameters are left unprocessed. If the frontal-face parameters among the currently extracted first facial feature parameters match the pre-stored frontal-face feature parameters, the cameras in the preset monitoring area are controlled to stop follow-up monitoring; if they do not match, the cameras in the preset monitoring area may be controlled to follow and monitor the person corresponding to the first facial feature parameters.
Step S03, generating a second facial feature parameter corresponding to the frontal face according to the first facial feature parameter;
The non-frontal first facial feature parameters (i.e. those corresponding to side faces) are converted into second facial feature parameters corresponding to the frontal face according to a preset conversion rule mapping side faces to frontal faces.
The preset conversion rule may be a default setting, or it may be derived by analyzing the correspondence between frontal-face and side-face feature parameters pre-entered by an authorized user. In one implementation, if the preset conversion rule is a default rule, the side-face first facial feature parameter is converted directly into the frontal-face second facial feature parameter using that rule. In another implementation, when the pre-stored facial feature parameters include both pre-stored frontal-face and side-face feature parameters, the first facial feature parameter may first be matched against the pre-stored side-face feature parameters, the preset conversion rule associated with the matching side-face parameters is retrieved, and the current first facial feature parameter is converted into the second facial feature parameter using that rule.
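One way such a preset conversion rule could be realized is as a fixed linear mapping from side-face feature vectors to frontal-face feature vectors. The sketch below is only an illustration of that idea: the matrix would have to come from a default setting or be derived from a user's enrolled front/side pairs, and the dimensions and random values are placeholders.

```python
import numpy as np

def convert_side_to_front(side_feature: np.ndarray,
                          conversion_matrix: np.ndarray) -> np.ndarray:
    """Generate a second (frontal-face) feature parameter from a side-face one
    using a preset conversion rule, here modeled as a linear map."""
    return conversion_matrix @ side_feature

# Illustrative usage with an assumed 128-dimensional feature space.
rng = np.random.default_rng(0)
conversion_matrix = rng.normal(size=(128, 128))   # stand-in for the preset rule
side_feature = rng.normal(size=128)
second_feature = convert_side_to_front(side_feature, conversion_matrix)
```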
Step S04, determining whether the second facial feature parameter matches the pre-stored frontal-face feature parameters;
if yes, go to step S05; if not, go to step S06.
Step S05, determining that the first facial feature parameter matches the pre-stored facial feature parameters;
Step S06, determining that the first facial feature parameter does not match the pre-stored facial feature parameters.
Specifically, step S40a is performed after step S05; step S50 is performed after step S06.
In another implementation of this embodiment, if the face angle corresponding to the first facial feature parameter does not include a frontal face, then as an alternative to steps S03 to S04 described above, step S06 may be executed directly so that the camera follows and monitors the person corresponding to the first facial feature parameter.
In yet another implementation of this embodiment, the pre-stored facial feature parameters include both pre-stored frontal-face and side-face feature parameters. In that case, if the face angle corresponding to the first facial feature parameter does not include a frontal face, then as an alternative to steps S03 to S04, step S05 may be executed when the first facial feature parameter matches the pre-stored side-face feature parameters, and step S06 when it does not.
In this embodiment, follow-up monitoring is disabled only when the camera is confirmed to have captured a frontal-face first facial feature parameter and that parameter matches the pre-stored frontal-face feature parameters. Because the front of the face contains richer features than the side face, this improves the accuracy of disabling follow-up monitoring based on face recognition. In one implementation, when no first facial feature parameter containing a frontal face is captured, the person is followed and monitored, so that the behavior of a person of unclear identity can be watched in real time and the safety of the monitoring area is guaranteed. In another implementation, when no frontal-face first facial feature parameter is captured, the side-face first facial feature parameter is either converted into a frontal-face second facial feature parameter and matched against the pre-stored frontal-face feature parameters, or matched directly against the pre-stored side-face feature parameters, and follow-up monitoring is disabled when it matches. Even if the front of the face of a person entering the monitoring area is not captured, the legitimacy of that person's identity can still be recognized effectively from the side face, which improves the efficiency of identity verification in the monitoring area, ensures that follow-up monitoring of authorized persons is stopped in a timely manner, and further improves the privacy of authorized users in the area.
Further, based on the above embodiments, a further embodiment of the monitoring method of the present application is provided. In this embodiment, the preset monitoring area is provided with at least two cameras, different cameras are used to capture monitoring images at different angles, and the pre-stored facial feature parameters include pre-stored side-face feature parameters. Referring to fig. 5, step S00 includes:
Step S001, determining whether the face angle corresponding to the first facial feature parameter includes a side face;
If yes, step S002 is executed; if not, step S004 may be executed, or the process may return to step S10.
The case where the face angle corresponding to the first facial feature parameter includes a side face covers the following situations: there is a single first facial feature parameter and its face angle is a side face; there is more than one first facial feature parameter and all of their face angles are side faces; or there is more than one first facial feature parameter and their face angles include both frontal faces and side faces.
In this embodiment, facial feature parameters are extracted from a face image by applying a facial feature extraction algorithm corresponding to the side face. On that basis, whether the face angle corresponding to the first facial feature parameter includes a side face is judged as follows: determine whether the first facial feature parameter conforms to a set rule for the output of that extraction algorithm; if it does, the first facial feature parameter is judged to be a side-face feature parameter; if it does not, it is judged to be a frontal-face feature parameter. The set rule refers to a specific property (such as the length of the character string, or the numerical range of particular bytes within it) satisfied by the result produced when the side-face feature extraction algorithm processes a side-face image.
Step S002, determining whether the first facial feature parameter corresponding to the side face matches the pre-stored side-face feature parameters;
If yes, step S003 is executed; if not, step S004 is executed.
Step S003, determining that the first facial feature parameter matches the pre-stored facial feature parameters;
Step S004, determining that the first facial feature parameter does not match the pre-stored facial feature parameters.
Specifically, step S40a is executed after step S003; step S004 is followed by step S50.
When there is more than one first facial feature parameter and some are side-face feature parameters while others are frontal-face feature parameters, the side-face parameters are matched against the pre-stored side-face feature parameters and the frontal-face parameters are left unprocessed. If the side-face parameters among the currently extracted first facial feature parameters match the pre-stored side-face feature parameters, the cameras in the preset monitoring area are controlled to stop follow-up monitoring; if they do not match, the cameras in the preset monitoring area may be controlled to follow and monitor the person corresponding to the first facial feature parameters.
In this embodiment, steps S001 to S004 make it possible to recognize whether a person entering the monitoring area is authorized without capturing the front of the face, so that area safety is ensured while authorized persons are recognized quickly and follow-up monitoring is disabled, further improving the privacy of authorized users in the area.
Further, based on any of the above embodiments, a further embodiment of the monitoring method of the present application is provided. In this embodiment, referring to fig. 6, the linking step of step S40 is defined as step S40a, and before step S40a the method further includes:
Step S410, if the first facial feature parameter matches the pre-stored facial feature parameters, acquiring user identity information corresponding to the pre-stored facial feature parameters that match the first facial feature parameter;
Specifically, when a user enters his or her facial feature parameters to form the pre-stored facial feature parameters, the user identity information can be bound to them at the same time. The user identity information may indicate, for example, a visitor or a resident, and residents may be further divided into owners of different areas in different residential communities.
Step S420, verifying, according to the user identity information, the access permission of the person corresponding to the first facial feature parameter in the preset monitoring area;
Different user identity information can be bound to access permissions for different monitoring areas, and these permissions can be configured in advance. For example, if the monitored areas include areas 1 to 5, a resident of area 1 may have access to areas 1, 2 and 3; a resident of area 5 may have access to areas 4 and 5; and a visitor to area 3 may have access to area 3 only.
Specifically, when the preset monitoring area is area 1 and the areas bound to the user identity information include area 1, the user corresponding to the first facial feature parameter has access to the preset monitoring area and the verification passes; when the bound areas do not include area 1, the user does not have access and the verification fails.
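A sketch of this permission check, treating the binding between user identity information and accessible monitoring areas as a lookup table; the role names and area identifiers follow the example above and are otherwise assumptions.

```python
# Access rights bound to user identity information, per the example above.
ACCESS_RIGHTS = {
    "resident_area_1": {"1", "2", "3"},
    "resident_area_5": {"4", "5"},
    "visitor_area_3": {"3"},
}

def has_access(identity_info: str, preset_area: str) -> bool:
    """Verify whether the person matched to the first facial feature parameter
    is permitted to enter the preset monitoring area."""
    return preset_area in ACCESS_RIGHTS.get(identity_info, set())

assert has_access("resident_area_1", "1") is True
assert has_access("visitor_area_3", "1") is False
```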
Step S430, determining whether the verification passes;
If the verification passes, step S40a is executed; if it fails, step S50 is executed.
In this embodiment, when the person corresponding to the first facial feature parameter has access to the preset area, the cameras in the preset monitoring area are linked to disable follow-up monitoring; otherwise, follow-up monitoring of the person is maintained to ensure the safety of the area.
Further, step S40a may include: if the preset monitoring area has an associated monitoring area, verifying, according to the user identity information, the access permission of the person corresponding to the first facial feature parameter in the associated monitoring area; and, if the verification passes, linking the cameras in both the preset monitoring area and the associated monitoring area to disable follow-up monitoring. Different preset monitoring areas may be bound to associated areas in advance, or left unbound, according to actual usage requirements. For example, monitoring area 1 (a residential area) may be associated with monitoring area 2 (a leisure area). If the preset monitoring area is area 1, the first facial feature parameter recognized by the cameras in area 1 matches the pre-stored facial feature parameters, and the user identity information corresponding to the matching pre-stored facial feature parameters is obtained, then: if the areas to which that user identity information grants access include areas 1 and 2, all cameras in areas 1 and 2 can be linked to disable follow-up monitoring of the person corresponding to the first facial feature parameter; if the granted areas include only area 1, all cameras in area 1 are linked to disable follow-up monitoring of that person while the cameras in area 2 continue to follow and monitor the person. In this way, when a user legitimately enters an area, follow-up monitoring is disabled in that area and, in linkage, in the other areas the user is authorized to access, further improving the intelligence of area monitoring and the privacy of users in the areas they legitimately enter.
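The associated-area linkage can be sketched as follows: after a match, following is disabled in the preset area and, additionally, in any associated area the matched user is also authorized to access. The binding table and area identifiers are illustrative only.

```python
ASSOCIATED_AREAS = {"1": ["2"]}   # e.g. residential area 1 is associated with leisure area 2

def areas_to_unlink(preset_area: str, allowed_areas: set[str]) -> list[str]:
    """Return the monitoring areas whose cameras should stop follow-up monitoring:
    the preset area plus associated areas the user also has access to."""
    result = [preset_area]
    for assoc in ASSOCIATED_AREAS.get(preset_area, []):
        if assoc in allowed_areas:
            result.append(assoc)
    return result

# User allowed in areas 1 and 2: cameras in both areas are linked and stop following.
print(areas_to_unlink("1", {"1", "2"}))   # -> ['1', '2']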
It should be noted that when this embodiment is combined with the above embodiments in which step S05 or step S003 precedes step S40a, steps S410 to S430 of this embodiment may be executed after step S003 or step S05 and before step S40a.
Further, based on any one of the above embodiments, another embodiment of the monitoring method of the present application is provided. In this embodiment, referring to fig. 7, after step S40, the method further includes:
Step S60, controlling the cameras in the preset monitoring area to switch to a set shooting state for operation;
If no human body is present in the preset monitoring area, the process returns to step S10. When a human body is present in the preset monitoring area, the camera is controlled to maintain the set shooting state and keep follow-up monitoring disabled.
The set shooting state includes a first shooting state or a second shooting state. The first shooting state is a state in which the shooting direction of the camera faces a set sub-area within the preset monitoring area, and the second shooting state is a state in which the shooting direction of the camera changes dynamically according to a set rule.
The set shooting state may be configured as the first shooting state, the second shooting state, or even other shooting states, according to the user's actual needs.
For the first shooting state, in this embodiment the set sub-area may be the entrance area of the monitoring area. In other embodiments the set sub-area may be chosen according to actual requirements, for example the area of a living room where the sofa and coffee table are located. When the set shooting state is the first shooting state, the shooting direction is fixed toward the set sub-area after the camera stops follow-up monitoring.
For the second shooting state, the set rule specifies the speed, direction, dwell time and so on of changes in the shooting direction. The set rule can be configured according to actual requirements; when the set shooting state is the second shooting state, after the camera stops follow-up monitoring its shooting direction changes dynamically according to the set rule, independently of the position of any person in the monitoring area.
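The two set shooting states can be modeled as a fixed preset position versus a patrol pattern. The sketch below assumes a camera object exposing a set_pan(degrees) method, which is not defined in the patent; the angles and dwell time are arbitrary illustrative values.

```python
import itertools
import time
from enum import Enum

class ShootingState(Enum):
    FIXED_SUBAREA = 1   # first shooting state: point at a set sub-area (e.g. the entrance)
    PATROL = 2          # second shooting state: pan according to a set rule

def run_shooting_state(camera, state: ShootingState,
                       subarea_pan_deg: float = 0.0,
                       patrol_angles=(-30.0, 0.0, 30.0),
                       dwell_s: float = 5.0) -> None:
    """Drive a camera (assumed to expose set_pan(degrees)) in the chosen state."""
    if state is ShootingState.FIXED_SUBAREA:
        camera.set_pan(subarea_pan_deg)        # hold the set sub-area in view
        return
    # Endless patrol loop, independent of any person's position (illustration only).
    for angle in itertools.cycle(patrol_angles):
        camera.set_pan(angle)
        time.sleep(dwell_s)
```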
Whether a person has left the preset monitoring area can be determined from the images captured by the camera: if no human body image appears in the monitoring image, no person is present in the preset monitoring area; if a human body image appears, a person is present. In other embodiments, the presence of a person in the monitored area may also be determined by analyzing data from other human-detection modules (such as an infrared detection module or a sound detection module) arranged in the preset monitoring area.
In this embodiment, after follow-up monitoring is stopped, whether follow-up monitoring needs to be resumed is decided only after the person is detected to have left the preset monitoring area. While the authorized user moves within the preset monitoring area, the camera maintains a state in which it does not follow and monitor that user, further improving the experience of authorized users who move about the monitoring area for a long time.
In addition, an embodiment of the present invention further provides a readable storage medium, where a monitoring program is stored on the readable storage medium, and the monitoring program, when executed by a processor, implements relevant steps of any embodiment of the above monitoring method.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a(n) ……" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a monitoring system, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A monitoring method, characterized in that the monitoring method comprises the steps of:
acquiring a monitoring image shot by a camera;
judging whether the monitoring image comprises a face image, and if so, extracting first facial feature parameters of the face image;
and if the first facial feature parameters are matched with pre-stored facial feature parameters, linking the camera in a preset monitoring area to close follow-up monitoring.
2. The monitoring method according to claim 1, wherein before the step of linking the camera in the preset monitoring area to close follow-up monitoring, the method further comprises the following steps:
judging whether the first facial feature parameters are matched with the pre-stored facial feature parameters;
if the first facial feature parameters are matched with the pre-stored facial feature parameters, executing the step of linking the camera in the preset monitoring area to close follow-up monitoring;
and if the first facial feature parameters are not matched with the pre-stored facial feature parameters, controlling the camera to perform follow-up monitoring on the person corresponding to the first facial feature parameters, and returning to the step of acquiring the monitoring image shot by the camera.
3. The monitoring method according to claim 2, wherein the preset monitoring area is provided with at least two cameras, different cameras are used for shooting the monitoring images at different angles, the pre-stored facial feature parameters comprise pre-stored front face feature parameters, and the step of judging whether the first facial feature parameters are matched with the pre-stored facial feature parameters comprises:
judging whether the face angles corresponding to the first facial feature parameters include a front face of the human face;
if the face angles corresponding to the first facial feature parameters include a front face, judging whether the first facial feature parameters corresponding to the front face are matched with the pre-stored front face feature parameters;
if the first facial feature parameters corresponding to the front face are matched with the pre-stored front face feature parameters, determining that the first facial feature parameters are matched with the pre-stored facial feature parameters;
and if the first facial feature parameters corresponding to the front face are not matched with the pre-stored front face feature parameters, determining that the first facial feature parameters are not matched with the pre-stored facial feature parameters.
4. The monitoring method according to claim 3, wherein after the step of judging whether the face angles corresponding to the first facial feature parameters include a front face, the method further comprises:
if the face angles corresponding to the first facial feature parameters do not include a front face, generating second facial feature parameters corresponding to a front face according to the first facial feature parameters;
if the second facial feature parameters are matched with the pre-stored front face feature parameters, determining that the first facial feature parameters are matched with the pre-stored facial feature parameters;
and if the second facial feature parameters are not matched with the pre-stored front face feature parameters, determining that the first facial feature parameters are not matched with the pre-stored facial feature parameters.
5. The monitoring method according to claim 3, wherein the pre-stored facial feature parameters further comprise pre-stored side face feature parameters, and after the step of judging whether the face angles corresponding to the first facial feature parameters include a front face, the method further comprises:
if the face angles corresponding to the first facial feature parameters do not include a front face, judging whether the first facial feature parameters are matched with the pre-stored side face feature parameters;
if the first facial feature parameters are matched with the pre-stored side face feature parameters, determining that the first facial feature parameters are matched with the pre-stored facial feature parameters;
and if the first facial feature parameters are not matched with the pre-stored side face feature parameters, determining that the first facial feature parameters are not matched with the pre-stored facial feature parameters.
6. The monitoring method according to claim 2, wherein the preset monitoring area is provided with at least two cameras, different cameras are used for shooting the monitoring images at different angles, the pre-stored facial feature parameters comprise pre-stored side face feature parameters, and the step of judging whether the first facial feature parameters are matched with the pre-stored facial feature parameters comprises:
if the face angles corresponding to the first facial feature parameters include a side face, judging whether the first facial feature parameters corresponding to the side face are matched with the pre-stored side face feature parameters;
if the first facial feature parameters corresponding to the side face are matched with the pre-stored side face feature parameters, determining that the first facial feature parameters are matched with the pre-stored facial feature parameters;
and if the first facial feature parameters corresponding to the side face are not matched with the pre-stored side face feature parameters, determining that the first facial feature parameters are not matched with the pre-stored facial feature parameters.
7. The monitoring method according to claim 1, wherein before the step of linking the camera in the preset monitoring area to close follow-up monitoring, the method further comprises the following steps:
if the first facial feature parameters are matched with the pre-stored facial feature parameters, acquiring user identity information corresponding to the pre-stored facial feature parameters matched with the first facial feature parameters;
verifying, according to the user identity information, the access permission of the person corresponding to the first facial feature parameters in the preset monitoring area;
if the verification is passed, executing the step of linking the camera in the preset monitoring area to close follow-up monitoring;
and if the verification fails, controlling the camera to perform follow-up monitoring on the person corresponding to the first facial feature parameters.
8. The monitoring method according to claim 7, wherein the step of linking the camera in the preset monitoring area to close follow-up monitoring comprises:
if the preset monitoring area has an associated monitoring area, verifying, according to the user identity information, the access permission of the person corresponding to the first facial feature parameters in the associated monitoring area;
and if the verification is passed, linking the cameras in the preset monitoring area and in the associated monitoring area to close follow-up monitoring.
9. The monitoring method according to any one of claims 1 to 8, wherein after the step of linking the camera in the preset monitoring area to close follow-up monitoring, the method further comprises:
controlling the camera in the preset monitoring area to switch to a set shooting state for operation;
if no human body exists in the preset monitoring area, returning to the step of acquiring the monitoring image shot by the camera;
wherein the set shooting state comprises a first shooting state or a second shooting state, the first shooting state is a state in which the image shooting direction of the camera faces a set sub-region in the preset monitoring area, and the second shooting state is a state in which the image shooting direction of the camera changes dynamically according to a set rule.
10. A monitoring system, characterized in that the monitoring system comprises:
cameras, a camera module and a control module, wherein the number of the cameras is at least two, and different cameras are used for shooting monitoring images at different angles;
a control device comprising a memory, a processor and a monitoring program stored on the memory and executable on the processor, wherein the processor is connected to the cameras, and the monitoring program, when executed by the processor, implements the steps of the monitoring method according to any one of claims 1 to 9.
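For illustration only, and not forming part of the claims: one possible realization of the flow of claims 1 to 3, 6 and 7 is sketched below, where detect_face, extract_features, is_frontal, has_permission and the camera object's disable_follow_monitoring/follow methods are hypothetical stand-ins, and the cosine-similarity threshold is an assumed value.

```python
from typing import Callable, List

import numpy as np

MATCH_THRESHOLD = 0.6  # assumed similarity threshold; the claims do not fix a value


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def features_match(first_params: np.ndarray, stored: List[np.ndarray]) -> bool:
    """Return True if the extracted parameters match any pre-stored template."""
    return any(cosine_similarity(first_params, s) >= MATCH_THRESHOLD for s in stored)


def handle_frame(frame,
                 detect_face: Callable,
                 extract_features: Callable,
                 is_frontal: Callable,
                 has_permission: Callable,
                 stored_front: List[np.ndarray],
                 stored_side: List[np.ndarray],
                 camera) -> None:
    """One pass over a monitoring image, following the claimed decision flow."""
    face = detect_face(frame)                 # does the image contain a face?
    if face is None:
        return
    first_params = extract_features(face)     # first facial feature parameters
    # Compare against frontal or side-face templates depending on the face angle.
    templates = stored_front if is_frontal(face) else stored_side
    if features_match(first_params, templates) and has_permission(first_params):
        camera.disable_follow_monitoring()    # matched and permitted: stop following
    else:
        camera.follow(face)                   # unmatched or not permitted: keep following
```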
CN202010976097.5A 2020-09-16 2020-09-16 Face recognition monitoring method and system Pending CN112019754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010976097.5A CN112019754A (en) 2020-09-16 2020-09-16 Face recognition monitoring method and system


Publications (1)

Publication Number Publication Date
CN112019754A true CN112019754A (en) 2020-12-01

Family

ID=73522362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010976097.5A Pending CN112019754A (en) 2020-09-16 2020-09-16 Face recognition monitoring method and system

Country Status (1)

Country Link
CN (1) CN112019754A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635654A (en) * 2014-10-30 2016-06-01 杭州萤石网络有限公司 Video monitoring method, device and system, and camera
CN106897716A (en) * 2017-04-27 2017-06-27 广东工业大学 A kind of dormitory safety monitoring system and method
CN107105199A (en) * 2017-04-20 2017-08-29 武汉康慧然信息技术咨询有限公司 Smart home nurse method and system based on technology of Internet of things
CN108933888A (en) * 2017-05-22 2018-12-04 中兴通讯股份有限公司 A kind of camera control method, equipment and computer storage medium
CN110290352A (en) * 2019-06-28 2019-09-27 Oppo广东移动通信有限公司 Monitoring method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US10839228B2 (en) Method and system for tracking an object in a defined area
KR102553883B1 (en) A method for generating alerts in a video surveillance system
US20170289504A1 (en) Privacy Supporting Computer Vision Systems, Methods, Apparatuses and Associated Computer Executable Code
CN110188603B (en) Privacy anti-leakage method and system for smart community
CN108600202B (en) Information processing method and device and computer readable storage medium
CN110738769A (en) forbidden user identification method, device, system and computer equipment
CN105427421A (en) Entrance guard control method based on face recognition
WO2014187134A1 (en) Method and apparatus for protecting browser private information
CN104881911A (en) System And Method Having Biometric Identification Instrusion And Access Control
KR101838858B1 (en) Access control System based on biometric and Controlling method thereof
KR102237086B1 (en) Apparatus and method for controlling a lobby phone that enables video surveillance through a communication terminal that can use a 5G mobile communication network based on facial recognition technology
KR101858396B1 (en) Intelligent intrusion detection system
KR102012672B1 (en) Anti-crime system and method using face recognition based people feature recognition
CN204990444U (en) Intelligent security controlgear
JP6789601B2 (en) A learning video selection device, program, and method for selecting a captured video masking a predetermined image area as a learning video.
KR101941966B1 (en) Apparatus, method and program for access control based on pattern recognition
CN111881726A (en) Living body detection method and device and storage medium
KR20160072386A (en) Home network system using face recognition based features and method using the same
CN111917981A (en) Privacy protection method, device, equipment and computer readable storage medium
Barhm et al. Negotiating privacy preferences in video surveillance systems
CN110675582A (en) Automatic alarm method and device
KR101879444B1 (en) Method and apparatus for operating CCTV(closed circuit television)
KR20170013597A (en) Method and Apparatus for Strengthening of Security
CN109522782A (en) Household member's identifying system
JP2010009389A (en) Dictionary information registration device, dictionary information registration method, face authentication device, and access control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211215

Address after: 528000 room 2001, building 4, Midea Fortune Plaza, No.1, Chengde Road, Junlan community, Beijiao Town, Shunde District, Foshan City, Guangdong Province

Applicant after: Guangdong Ruizhu Intelligent Technology Co.,Ltd.

Address before: 528000 Beijiao International Wealth Center (Wanlian Center), No.1 Yifu Road, Junlan community committee, Beijiao Town, Shunde District, Foshan City, Guangdong Province

Applicant before: Ruizhu Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20201201