CN111339840B - Face detection method and monitoring system - Google Patents

Face detection method and monitoring system

Info

Publication number
CN111339840B
CN111339840B CN202010085325.XA
Authority
CN
China
Prior art keywords
human body
frame
face
image
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010085325.XA
Other languages
Chinese (zh)
Other versions
CN111339840A (en)
Inventor
黄昊 (Huang Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010085325.XA priority Critical patent/CN111339840B/en
Publication of CN111339840A publication Critical patent/CN111339840A/en
Application granted granted Critical
Publication of CN111339840B publication Critical patent/CN111339840B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06Systems determining position data of a target
    • G01S13/08Systems for measuring distance only
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face detection method and a monitoring system. The method comprises the following steps: acquiring a first image shot by a camera on a scene; identifying at least one face frame in the first image; acquiring a second image of the scene scanned by a radar; dividing, on the second image, a comparison area corresponding to the face frame, where the comparison area's coverage of the scene contains and is larger than the face frame's coverage of the scene; identifying at least one human body candidate frame in the comparison area; and evaluating authenticity based on the human body candidate frame. In this manner, the radar can effectively assist in judging the authenticity of a face detected by the camera.

Description

Face detection method and monitoring system
Technical Field
The present application relates to the field of face recognition, and in particular, to a face detection method and a monitoring system based on a radar-assisted camera.
Background
With the development of science and technology, video monitoring is widely applied, and face recognition technology within it has matured. However, practical applications contain a large number of interfering objects such as posters and mannequins, so ensuring that a recognized face is a real face (i.e., a living face) is an urgent problem to be solved.
Disclosure of Invention
The application provides a face detection method and a monitoring system, aiming to solve the prior-art problem that camera-only detection cannot judge the authenticity of a face.
In order to solve the above technical problem, one technical solution adopted by the present application is a face detection method comprising the following steps: acquiring a first image shot by a camera on a scene; identifying at least one face frame in the first image; acquiring a second image of the scene scanned by a radar; dividing, on the second image, a comparison area corresponding to the face frame, where the comparison area's coverage of the scene contains and is larger than the face frame's coverage of the scene; identifying at least one human body candidate frame in the comparison area; and evaluating authenticity based on the human body candidate frame.
In order to solve the above technical problem, another technical solution adopted by the present application is a monitoring system including a camera, a radar, and a processing system, where the processing system is configured to execute the above face detection method.
In this application, a comparison area corresponding to the face frame detected by the camera is set in the radar image, a human body candidate frame is identified within that comparison area, and the candidate frame is then used to help judge whether the face is real. The radar thus effectively assists in verifying the authenticity of faces detected by the camera.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic view of a monitoring system of the present application;
FIG. 2 is a schematic flow diagram of a face detection method of the present application;
fig. 3 is a schematic view of a radar scan area.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. The described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
Referring specifically to fig. 1, fig. 1 is a schematic diagram of a monitoring system of the present application. As shown, the monitoring system includes a camera, a radar, and a processing system. The camera is in communication connection with the processing system. The radar is in communication with the processing system. The camera is in direct communication connection with the radar or in indirect communication connection through the processing system. The communication connection here may comprise a wired or wireless connection.
The number of cameras is not limited in the present application, and although two cameras are shown in fig. 1, the monitoring system of the present application may include only one camera, or may include more than two cameras. The effective scanning range of the radar covers the shooting range of all cameras.
Alternatively, the monitoring system of the present application may omit a separate processing system, with the corresponding functions implemented by processors included in the camera and in the radar themselves. The specific method of the present application is described in detail below using these built-in processors as an example; the method may equally be implemented centrally by a single processor.
Specifically, before the monitoring system is put into use, the camera and the radar need to be calibrated.
The calibration of the camera is used for determining the corresponding relation between pixel points in the imaging of the camera and space points in the real world.
In one embodiment, the calibration of the camera is mainly used to determine, in a two-dimensional coordinate system of the camera (for example, the ground monitored by the camera), the position of the human body corresponding to a target object such as a face in the camera image.
The imaging position of a three-dimensional object of the real world in the camera is related to the object's position in the real world, the angle and position of the camera, the camera's imaging distortion, and the like. Camera calibration determines the correspondence between pixel points and space points, for example through a reference object. Optionally, a camera self-calibration method is used.
For example, in a crosswalk red-light-running monitoring system to which the present application is applied, the position and angle of a camera in use are generally fixed, and the pedestrians it monitors fall within a regular size range. The operator can calibrate the camera using a person or a mannequin as a reference.
The application does not limit the specific camera calibration mode.
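For concreteness, a minimal sketch of one common calibration route follows — OpenCV's chessboard routine, which recovers the intrinsics relating camera pixels to scene rays. This is not the method prescribed by the patent; the file names, board size, and square length are illustrative assumptions.

```python
# A minimal sketch (assumption: chessboard calibration with OpenCV; the
# patent does not prescribe a calibration method). File names, board
# size and square length are illustrative.
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners per row/column of the assumed board
SQUARE_MM = 25.0      # assumed square edge length

# 3-D corner template on the board plane (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in ["calib_01.jpg", "calib_02.jpg"]:   # hypothetical image files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics and distortion coefficients relate pixel points to scene rays.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
```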
The calibration of the radar is used to determine the correspondence of points in the scanning space of the radar to spatial points in the real world.
In one embodiment, the calibration of the radar is primarily used to determine the position of a target object scanned by the radar in a two-dimensional coordinate system of the radar (e.g., the ground monitored by the radar).
In some embodiments, calibration of the radar further comprises determining the radar reflectivity of a range of targets, or determining a scaling factor between the radar echo intensity and the radar receiver output power value for those targets.
For example, in the pedestrian crossing red light running monitoring system applying the application, by calibrating the radar, on one hand, the corresponding relation between points in the scanning space of the radar and space points in the real world can be established, and on the other hand, the radar reflectivity of common targets such as a road surface, a human body, vehicles and the like can be determined as reference parameters.
The calibration of the camera and the radar also comprises the joint calibration of the camera and the radar. The joint calibration is used for determining the corresponding relationship between points in the scanning space of the radar and pixel points imaged by the camera, or is used for determining the relationship between the two-dimensional coordinate system of the camera and the two-dimensional coordinate system of the radar, namely, the space synchronization of the radar and the camera. This correspondence is typically approximate due to the presence of errors.
The joint calibration of the camera and the radar also includes synchronizing the camera's field of view to the radar; that is, the camera's two-dimensional acquisition area (its ground acquisition area) is marked in the scanning space of the radar.
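Since both sensors observe the same ground plane, one way to realize this spatial synchronization is a plane-to-plane homography fitted to a handful of shared reference points. The sketch below assumes hypothetical point pairs; in practice they would come from a reference target observed by both sensors, and the patent does not mandate this particular formulation.

```python
# A minimal sketch: fit a homography between the camera's ground
# coordinates and the radar's ground coordinates from shared reference
# points. The point pairs below are hypothetical.
import cv2
import numpy as np

cam_pts = np.float32([[0, 0], [4, 0], [4, 6], [0, 6], [2, 3]])      # camera ground plane (m)
radar_pts = np.float32([[1.2, 0.4], [5.1, 0.5], [5.3, 6.4],
                        [1.0, 6.3], [3.2, 3.5]])                    # radar ground plane (m)

H, _ = cv2.findHomography(cam_pts, radar_pts, cv2.RANSAC)

def cam_to_radar(pt):
    """Map one camera ground-plane point into radar coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```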
In some embodiments, the radar may be a rotary mechanical radar, a solid state lidar, or the like. The present application does not limit the specific type of radar as long as it can achieve the object of the present application.
Referring to fig. 2, a schematic flow chart of the face detection method of the present application, the method of this embodiment comprises the following steps.
Step S11, a first image of a scene captured by a camera is acquired.
In one embodiment, one or more cameras capture corresponding scenes or respective monitored areas to obtain a first image. The first image may be acquired by a separate processing system or a processor integrated with the camera.
The one or more cameras shoot their corresponding scenes continuously, thereby obtaining consecutive multi-frame images of those scenes.
Step S12, at least one face frame T is identified in the first image.
The processor of the camera may process the first image through a face recognition algorithm to identify at least one face frame T. Here, a face frame is a rectangular frame, a circular frame, or a frame of any other shape that can enclose the detected face.
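For illustration, a minimal sketch of step S12 follows, using OpenCV's bundled Haar cascade; the patent does not name a particular face detector, so this choice and the image path are assumptions.

```python
# A minimal sketch of step S12 (assumption: OpenCV's bundled Haar
# cascade as the face detector; the image path is hypothetical).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

first_image = cv2.imread("frame.jpg")
gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)

# Each detection is one rectangular face frame T = (x, y, w, h).
face_frames = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```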
Further, the processor of the camera may mark the at least one face frame T so as to identify the same face frame T across consecutive multi-frame images, thereby obtaining its trajectory information or, further, its motion state information, such as speed, acceleration, motion trajectory, and motion direction. For example, the camera may detect that the face frame T moves from left to right at a speed V across the multi-frame images.
The processor of the camera may also extract human skeleton information of a region around the face frame from the consecutive multiple frames of the first image using a skeleton extraction algorithm. The human body skeleton information includes human body skeleton characteristics, human body motion posture information such as human body gait characteristics, and the like.
For example, in a pedestrian crossing red-light-running monitoring system to which the present application is applied, one or more cameras monitor the road surface near a red light. The processor of the camera processes the first image obtained from the camera to obtain at least one face frame T. From the camera's calibration, the area on the road surface of the object corresponding to the at least one face frame T can further be obtained. That is, in the scene, the two-dimensional face region corresponding to the at least one face frame T approximates the region occupied on the road surface by the human body corresponding to that face frame.
Step S13, a second image of the scene scanned by the radar is acquired.
In one embodiment, the processor of the radar acquires a second image obtained by the radar scanning the monitoring area of one or more cameras. During the calibration of the radar and the camera, the radar is synchronized with the camera's field of view, that is, the camera's two-dimensional acquisition area is obtained. For example, the radar synchronously scans the target corresponding to the face frame T according to the moving direction and moving speed of at least one face frame T transmitted from the processor of the camera.
Optionally, the radar scans the corresponding scene upon receiving a request from the processor of the one or more cameras, resulting in the second image.
Optionally, the radar scans the entire scanning area of the radar upon receiving multiple requests from one or more cameras or processors, resulting in the second image.
Optionally, the radar scans the two-dimensional acquisition area of the corresponding camera after receiving the two-dimensional face area corresponding to the at least one face frame T from the camera or the processor.
Step S14, a comparison area corresponding to the face frame T is divided on the second image, where the comparison area's coverage of the scene contains and is greater than the face frame T's coverage of the scene.
Referring to fig. 3, a schematic diagram of a radar scan area showing a second image 304, a mapping area 300 of the face frame T on the second image 304, a comparison area 302, and one or more human body candidate frames 306.
The second image 304 may be obtained by synchronously scanning the acquisition region of the corresponding camera by the radar.
As described above, the camera and the radar are calibrated in advance, so the spatial transformation relationship between the camera coordinate system and the radar coordinate system is known. The face frame T can therefore be mapped from the first image to the second image 304 using this spatial mapping relationship, forming the corresponding mapping area 300. The mapping area 300 in the second image 304 is then enlarged to obtain the comparison area 302.
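A minimal sketch of this mapping step follows, assuming a homography H (for example, from the joint-calibration sketch above) that takes first-image pixels to second-image pixels:

```python
# A minimal sketch of the mapping step (assumption: H maps first-image
# pixels to second-image pixels, e.g. obtained by joint calibration).
import cv2
import numpy as np

def map_face_frame(box, H):
    """Project the corners of box = (x, y, w, h) into the second image
    and return the axis-aligned mapping area 300."""
    x, y, w, h = box
    corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
    mapped = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H).reshape(-1, 2)
    x0, y0 = mapped.min(axis=0)
    x1, y1 = mapped.max(axis=0)
    return x0, y0, x1 - x0, y1 - y0
```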
Optionally, the coordinate systems of the first and second images 304 are normalized. That is, the coordinate system of the first image and the coordinate system of the second image 304 are made to coincide.
The comparison area 302 is formed by taking the center point of the mapping area 300 as the center of the comparison area and enlarging the width and height of the mapping area 300 according to a certain ratio K.
In one embodiment, the height magnification ratio of the mapping region 300 may be set to be greater than the width magnification ratio, and the ratio of the two may be set to the aspect ratio of a normal human body.
In practice, the face frame T and the mapping area 300 may not correspond precisely because of measurement errors of the radar and the camera, calibration errors, communication delays in the system, and the like; a comparison area 302 larger than the face frame T therefore needs to be divided.
In one embodiment, K is a real number between 1 and 5. Optionally, K has a value of 3. The value of K is related to the calibration precision, the communication delay between the radar and the camera, the target movement speed, and the like.
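For illustration, a minimal sketch of forming comparison area 302 from mapping area 300, keeping the centre fixed and enlarging the height by more than the width; k_w = 3 follows the text, while deriving the height factor from a 0.28 body aspect ratio is an assumption.

```python
# A minimal sketch of forming comparison area 302: keep the centre of
# mapping area 300 and enlarge height more than width. k_w = 3 follows
# the text; deriving k_h from a 0.28 body aspect ratio is an assumption.
def expand_region(box, k_w=3.0, body_aspect=0.28):
    """box = (x, y, w, h); returns the enlarged comparison area."""
    x, y, w, h = box
    k_h = k_w / body_aspect            # height grows ~3.6x faster than width
    cx, cy = x + w / 2.0, y + h / 2.0  # centre point is preserved
    return cx - w * k_w / 2.0, cy - h * k_h / 2.0, w * k_w, h * k_h
```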
In yet another embodiment, when the radar and the camera resolve the same area at different resolutions, a normalization operation is performed on the corresponding regions.
Step S15, at least one human body candidate frame is identified in the comparison area 302.
As described above, the radar can detect the distance of a target from the radar, i.e., the target's depth data. Alternatively, the radar can detect the target's reflectivity to radar waves. In the present application, the depth information of a human face is substantially consistent with that of the corresponding human body, and both differ greatly in depth from backgrounds such as the road surface. Likewise, the face and the body have substantially the same reflectivity to radar waves, which differs greatly from that of backgrounds such as the road.
Accordingly, the processor of the radar identifies, within the comparison area, a continuous region or a single connected region in which the depth difference and/or the reflectivity difference is less than or equal to a preset depth difference threshold and/or reflectivity difference threshold, and selects the minimum rectangular area that frames this region as the human body candidate frame 306.
The depth difference threshold and/or the reflectivity difference threshold are set by a user according to experience and actual application scenarios.
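A minimal sketch of this connected-region search on a radar depth map follows; the grid representation, the use of depth alone, and the 0.3 m threshold are assumptions of this sketch (reflectivity could be handled the same way).

```python
# A minimal sketch of step S15 on a depth map cropped to comparison
# area 302 (assumptions: depth-only criterion, grid representation,
# 0.3 m threshold).
import numpy as np
from scipy import ndimage

def human_candidates(depth, face_depth, depth_diff_thresh=0.3):
    """Return minimal bounding rectangles of connected regions whose
    depth stays within the threshold of the face frame's depth."""
    mask = np.abs(depth - face_depth) <= depth_diff_thresh
    labels, n = ndimage.label(mask)
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        boxes.append((xs.min(), ys.min(),
                      xs.max() - xs.min() + 1, ys.max() - ys.min() + 1))
    return boxes
```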
After identifying a human body candidate frame, the radar may mark it and track it through repeated scanning to obtain its motion state, such as motion trajectory, motion direction, speed, and acceleration.
The processor of the radar may use a skeleton extraction algorithm to obtain human body skeleton information for the human body candidate frame from multiple frames of the second image 304 acquired through repeated scans. The human body skeleton information includes human body skeleton characteristics, human body motion posture information such as gait characteristics, and the like.
Step S15 often yields more than one human body candidate frame.
Optionally, the obtained human body candidate frames may first be screened to eliminate targets that are obviously not human bodies and candidate frames that are obviously inconsistent with the face frame detected by the camera. If no human body candidate frame remains after this preliminary screening, the face in the face frame is judged to be a false face.
Methods of prescreening the human body candidate frames are described below.
In one embodiment, it is judged whether the difference between the motion state of a human body candidate frame and the motion state of the face frame is greater than or equal to a preset motion difference threshold, and candidate frames at or above the threshold are removed; this eliminates candidate frames obviously inconsistent with the face frame detected by the camera. The preset motion difference threshold may be determined empirically or experimentally by the user.
In this embodiment, the motion state of the human body candidate frame and the motion state of the face frame are obtained by analyzing multiple frames of the second image 304 and multiple frames of the first image, respectively. The motion state is characterized by one or more of motion trajectory, motion direction, speed, acceleration, and the like.
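A minimal sketch of this motion prescreen follows, assuming both motion states have been reduced to 2-D velocity vectors in a shared ground coordinate system; the 0.5 m/s threshold is illustrative.

```python
# A minimal sketch of the motion prescreen (assumption: both motion
# states reduced to 2-D velocity vectors in shared ground coordinates;
# the 0.5 m/s threshold is illustrative).
import numpy as np

def prescreen_by_motion(candidates, face_velocity, motion_diff_thresh=0.5):
    """candidates: (box, velocity) pairs; keep boxes moving like the face."""
    fv = np.asarray(face_velocity)
    return [box for box, v in candidates
            if np.linalg.norm(np.asarray(v) - fv) < motion_diff_thresh]
```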
In another embodiment, it is judged whether the difference between the aspect ratio of a human body candidate frame and the human body aspect ratio standard is greater than or equal to a preset aspect ratio difference threshold, and candidate frames at or above the threshold are removed; this eliminates candidate frames that are obviously not human.
The aspect ratio of a human male, i.e., the ratio of shoulder width to height, is around 0.28, and that of a female around 0.25. Because of errors in the radar's actual data processing and the influence of clothing and the like, the appropriate preset aspect ratio difference threshold may vary. The user may determine it through field experiments on the monitoring system or from past experience.
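For illustration, a minimal sketch of the aspect-ratio prescreen using the shoulder-width-to-height figures above; the 0.1 threshold is an assumption.

```python
# A minimal sketch of the aspect-ratio prescreen using the 0.28
# shoulder-width-to-height standard; the 0.1 threshold is an assumption.
def prescreen_by_aspect(boxes, standard=0.28, aspect_diff_thresh=0.1):
    """boxes: (x, y, w, h) with w the body width and h the body height."""
    return [b for b in boxes
            if abs(b[2] / float(b[3]) - standard) < aspect_diff_thresh]
```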
In another embodiment, it is judged whether the difference between the first skeleton information of a human body candidate frame and the human body skeleton standard is greater than or equal to a preset skeleton difference threshold, and candidate frames at or above the threshold are removed; this also eliminates candidate frames that are obviously not human.
The human skeleton standard includes, for example, human skeleton topology information, limb posture information corresponding to human body movement, and the like.
For example, in a pedestrian crossing monitoring scene, the human body candidate frame of a passing pedestrian carries very obvious limb information, whereas a target such as a portrait poster carried by a person or a vehicle is rejected because it lacks the limb posture information that accompanies motion.
In another embodiment, it is judged whether the difference between the first skeleton information and the second skeleton information of the area surrounding the face frame is greater than or equal to a preset skeleton difference threshold, and candidate frames at or above the threshold are removed; this eliminates candidate frames obviously inconsistent with the face frame detected by the camera.
Optionally, determining the difference between the first skeleton information and the second skeleton information comprises determining the difference in motion posture information between the two.
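A minimal sketch of the skeleton prescreen follows, under the assumption that both sensors deliver skeletons as equal-length arrays of normalised joint coordinates in the same joint order (real skeleton extraction is not shown, and the 0.2 dissimilarity threshold is illustrative).

```python
# A minimal sketch of the skeleton prescreen (assumptions: skeletons are
# equal-length arrays of normalised joint coordinates in the same joint
# order; the 0.2 dissimilarity threshold is illustrative).
import numpy as np

def skeleton_difference(skel_a, skel_b):
    """Cosine dissimilarity in [0, 2] between two joint-coordinate vectors."""
    a, b = np.ravel(skel_a), np.ravel(skel_b)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def prescreen_by_skeleton(candidates, face_skeleton, skel_diff_thresh=0.2):
    """candidates: (box, skeleton) pairs; keep skeletons close to the face's."""
    return [box for box, skel in candidates
            if skeleton_difference(skel, face_skeleton) < skel_diff_thresh]
```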
Step S16, authenticity is evaluated based on the human body candidate frame.
In one embodiment, the similarity between at least one index obtained from the human body candidate frame and a reference standard is calculated and used as the score of the human body candidate frame. It is then judged whether this score is greater than or equal to a preset score threshold; if so, the face in the face frame is judged to be a real face.
Optionally, calculating the similarity between at least one index obtained through the human body candidate frame and a reference standard includes one or more of: calculating the similarity between the aspect ratio of the human body candidate frame and a preset human body aspect ratio standard; calculating the similarity between the motion state of the human body candidate frame and the motion state of the face frame; calculating the similarity between first skeleton information obtained through the human body candidate frame and a human body skeleton standard; and calculating the similarity between the first skeleton information and second skeleton information obtained through the peripheral region of the face frame in the first image.
Optionally, the score is a weighted sum of one or at least two of the above similarities.
Optionally, if the maximum score among the one or more human body candidate frames is greater than or equal to the score threshold, the face in the face frame is judged to be a real face; otherwise, it is judged to be a false face.
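For illustration, a minimal sketch of this scoring rule follows; the weights and the 0.7 threshold are illustrative assumptions, not values given by the patent.

```python
# A minimal sketch of step S16 (assumptions: the four similarities are
# already computed in [0, 1]; weights and the 0.7 threshold are
# illustrative).
def truth_score(similarities, weights=(0.3, 0.3, 0.2, 0.2)):
    """Weighted sum of the per-index similarities of one candidate."""
    return sum(w * s for w, s in zip(weights, similarities))

def is_real_face(candidate_similarities, score_thresh=0.7):
    """candidate_similarities: one similarity tuple per body candidate;
    the face is judged real when the best candidate reaches the threshold."""
    scores = [truth_score(s) for s in candidate_similarities]
    return bool(scores) and max(scores) >= score_thresh
```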
Alternatively, if the face in the face frame is judged to be a false face, step S14 is re-entered using the face frame information tracked and updated by the camera, until the face is judged to be real or a preset execution time is exceeded. If the preset execution time is exceeded, the face in the face frame is judged to be a false face. The preset execution time is set by the user based on experience or experiment.
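A minimal sketch of this retry loop follows, where evaluate_once is a hypothetical callable standing in for one pass of steps S14-S16 on the latest tracked face frame, and the 5 s budget is illustrative.

```python
# A minimal sketch of the retry loop; evaluate_once is a hypothetical
# callable standing in for one S14-S16 pass, and 5 s is illustrative.
import time

def detect_with_timeout(evaluate_once, max_seconds=5.0):
    deadline = time.monotonic() + max_seconds
    while time.monotonic() < deadline:
        if evaluate_once():   # one S14-S16 pass on updated tracking data
            return True       # judged a real face
    return False              # budget exhausted: judged a false face
```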
To sum up, in this application a comparison area corresponding to the face frame detected by the camera is set in the radar image, a human body candidate frame is identified in the comparison area, and the candidate frame is then used to assist in judging whether the face is real; the radar thus effectively assists in verifying the authenticity of faces detected by the camera.
Compared with the prior art, this application does not require the radar and the camera to be rigidly integrated, and one radar can serve multiple cameras simultaneously. With the method of the invention, existing cameras can therefore be combined with a radar after simple operations such as a software upgrade. The invention thus makes full use of existing resources, at low cost and with high efficiency.
The above description covers only embodiments of the present application and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the contents of this specification and the drawings, applied directly or indirectly in other related technical fields, is likewise included in the protection scope of the present application.

Claims (10)

1. A method for face detection, the method comprising:
acquiring a first image shot by a camera on a scene;
identifying at least one face frame in the first image;
acquiring a second image of the scene scanned by the radar;
dividing a comparison area corresponding to the face frame on the second image, wherein the comparison area's coverage of the scene contains and is larger than the face frame's coverage of the scene;
identifying at least one human body candidate frame in the comparison area;
and evaluating authenticity based on the human body candidate frame.
2. The method according to claim 1, wherein the step of dividing the comparison area corresponding to the face frame on the second image comprises:
mapping the face frame from the first image to the second image according to a spatial conversion relation between a camera coordinate system of the camera and a radar coordinate system of the radar;
and amplifying the face frame mapped in the second image to obtain the comparison area.
3. The method of claim 1, wherein the step of identifying at least one human body candidate frame in the comparison area comprises:
identifying a single connected domain with the depth difference and/or the reflectivity difference smaller than or equal to a preset depth difference threshold and/or a reflectivity difference threshold in the comparison area;
and taking the minimum rectangular area that frames the single connected domain as the human body candidate frame.
4. The method of claim 1, wherein the step of evaluating authenticity based on the human body candidate frame comprises:
calculating the similarity between at least one index obtained through the human body candidate frame and a reference standard, and taking the similarity as the score of the human body candidate frame;
judging whether the score of the human body candidate frame is greater than or equal to a preset score threshold value or not;
and if the score is larger than or equal to the score threshold, judging the face in the face frame to be a real face.
5. The method of claim 4, wherein the score is a weighted sum of one or at least two of the following similarities:
the method comprises the steps of obtaining a first image of a human face, obtaining a human body candidate frame, obtaining similarity between an aspect ratio of the human body candidate frame and a preset human body aspect ratio standard, obtaining similarity between a motion state of the human body candidate frame and a motion state of the human face frame, obtaining similarity between first skeleton information and the human body skeleton standard through the human body candidate frame, and obtaining similarity between the first skeleton information and second skeleton information through a peripheral region of the human face frame of the first image.
6. The method of claim 5, wherein the motion state of the human body candidate frame and the motion state of the human face frame respectively comprise at least one or a combination of a motion speed, a motion trajectory and a motion direction.
7. The method of claim 5, wherein the step of evaluating authenticity based on the human body candidate frame is preceded by:
judging whether the difference between the motion state of the human body candidate frame and the motion state of the human face frame is larger than or equal to a preset motion difference threshold value or not;
and eliminating the human body candidate frame which is larger than or equal to a preset motion difference threshold value.
8. The method of claim 5, wherein the step of evaluating authenticity based on the human body candidate frame is further preceded by:
judging whether the difference between the aspect ratio of the human body candidate frame and the human body aspect ratio standard is larger than or equal to a preset aspect ratio difference threshold value or not;
and eliminating the human body candidate frame which is larger than or equal to the aspect ratio difference threshold value.
9. The method of claim 5, wherein the step of evaluating authenticity based on the human body candidate frame is further preceded by:
judging whether the difference between the first skeleton information and the second skeleton information and/or the difference between the first skeleton information and the human body skeleton standard is larger than or equal to a preset skeleton difference threshold value or not;
rejecting the human body candidate frames which are greater than or equal to the skeleton difference threshold.
10. A monitoring system comprising a camera, a radar, and a processing system, wherein the processing system is configured to perform the method of any one of claims 1-9.
CN202010085325.XA 2020-02-10 2020-02-10 Face detection method and monitoring system Active CN111339840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010085325.XA CN111339840B (en) 2020-02-10 2020-02-10 Face detection method and monitoring system


Publications (2)

Publication Number Publication Date
CN111339840A CN111339840A (en) 2020-06-26
CN111339840B true CN111339840B (en) 2023-04-07

Family

ID=71185251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010085325.XA Active CN111339840B (en) 2020-02-10 2020-02-10 Face detection method and monitoring system

Country Status (1)

Country Link
CN (1) CN111339840B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661903B (en) * 2022-11-10 2023-05-02 成都智元汇信息技术股份有限公司 Picture identification method and device based on space mapping collaborative target filtering

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007304033A (en) * 2006-05-15 2007-11-22 Honda Motor Co Ltd Monitoring device for vehicle periphery, vehicle, vehicle peripheral monitoring method, and program for vehicle peripheral monitoring
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
CN109190448A (en) * 2018-07-09 2019-01-11 深圳市科脉技术股份有限公司 Face identification method and device
CN109241839A (en) * 2018-07-31 2019-01-18 安徽四创电子股份有限公司 A kind of camera shooting radar joint deployment implementation method based on face recognition algorithms
WO2019071739A1 (en) * 2017-10-13 2019-04-18 平安科技(深圳)有限公司 Face living body detection method and apparatus, readable storage medium and terminal device
CN109800699A (en) * 2019-01-15 2019-05-24 珠海格力电器股份有限公司 Image-recognizing method, system and device
CN110059644A (en) * 2019-04-23 2019-07-26 杭州智趣智能信息技术有限公司 A kind of biopsy method based on facial image, system and associated component
CN110490030A (en) * 2018-05-15 2019-11-22 保定市天河电子技术有限公司 A kind of channel demographic method and system based on radar

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10102419B2 (en) * 2015-10-30 2018-10-16 Intel Corporation Progressive radar assisted facial recognition


Also Published As

Publication number Publication date
CN111339840A (en) 2020-06-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant