CN111784885A - Passage control method and device, gate equipment and multi-gate system - Google Patents


Info

Publication number
CN111784885A
Authority
CN
China
Prior art keywords
face
gate
camera
coordinate system
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010556457.6A
Other languages
Chinese (zh)
Other versions
CN111784885B
Inventor
梁桥
步青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority claimed from application CN202010556457.6A
Publication of CN111784885A
Application granted
Publication of CN111784885B
Legal status: Active (current)
Anticipated expiration: as listed in the legal-status record

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00: Image analysis
                    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/12: Fingerprints or palmprints
                            • G06V 40/1347: Preprocessing; Feature extraction
                            • G06V 40/1365: Matching; Classification
        • G07: CHECKING-DEVICES
            • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
                • G07C 9/00: Individual registration on entry or exit
                    • G07C 9/10: Movable barriers with registering means
                    • G07C 9/30: Individual registration on entry or exit not involving the use of a pass
                        • G07C 9/32: in combination with an identity check
                            • G07C 9/37: using biometric data, e.g. fingerprints, iris scans or voice recognition
                        • G07C 9/38: with central registration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a traffic control method and device, a gate device, and a multi-gate system. The method comprises the following steps: performing face detection on an image acquired by a target camera; when a face is detected in the image, extracting face key point feature information and pairing the key points to obtain key point pairs; determining the coordinates of the face in the coordinate system of the traffic control device based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera, and the installation configuration parameters of the target camera; and determining whether the face position belongs to the channel region corresponding to the target channel based on the coordinates of the face in the traffic control device coordinate system and the region range information of the target channel in that coordinate system. The method and device can accurately locate the region in which the face is located.

Description

Passage control method and device, gate equipment and multi-gate system
Technical Field
The application relates to the technical field of space positioning, in particular to a traffic control method and device, gate equipment and a multi-gate system.
Background
With the development of face recognition technology, its applications have become increasingly widespread, and gate control systems based on face recognition are a popular application direction.
At present, a gate control system mainly acquires a face image of the person waiting to pass through the gate via a camera, and controls the gate to open when face recognition determines that the person has the right to pass.
However, practice has found that existing gate systems are generally deployed with multiple gates (i.e., a multi-gate array). If the channel in which the person waiting to pass is located cannot be accurately determined, the wrong gate may be opened, or multiple gates may be opened simultaneously.
Accurately locating the channel in which the person waiting to pass is located has therefore become an urgent technical problem.
Disclosure of Invention
In view of the above, the present application provides a passage control method and device, a gate device, and a multi-gate system.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided a traffic control method, including:
carrying out face detection on an image acquired by a target camera; wherein the target camera is mounted on the traffic control device, the target camera is a single camera comprising at least two lenses, or the target camera is at least two cameras;
when a face is detected in an image acquired by the target camera, extracting face key point feature information and pairing to obtain key point pairs;
determining coordinates of the face under the coordinate system of the traffic control equipment based on the pixel coordinates of the key point pair, the calibration parameters of the target camera and the installation configuration parameters of the target camera; the calibration parameters of the camera comprise camera internal parameters and camera external parameters;
determining whether the position of the face belongs to a channel region corresponding to a target channel based on the coordinates of the face in the coordinate system of the traffic control equipment and the region range information of the target channel in the coordinate system of the traffic control equipment; wherein the target passage is a passage area associated with the passage control device.
According to a second aspect of embodiments of the present application, there is provided a traffic control device including:
the detection unit is used for carrying out face detection on the image acquired by the target camera; wherein the target camera is mounted on the traffic control device, the target camera is a single camera comprising at least two lenses, or the target camera is at least two cameras;
the extracting unit is used for extracting the feature information of key points of the human face and pairing the key points to obtain key point pairs when the human face is detected in the image acquired by the target camera;
the passing control unit is used for determining the coordinates of the face under the coordinate system of the passing control equipment based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera and the installation configuration parameters of the target camera; the calibration parameters of the camera comprise camera internal parameters and camera external parameters;
the passing control unit is further configured to determine whether the face position belongs to a channel region corresponding to a target channel based on the coordinates of the face in the passing control device coordinate system and the region range information of the target channel in the passing control device coordinate system; wherein the target passage is a passage area associated with the passage control device.
According to a third aspect of the embodiments of the present application, there is provided a gate device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and a processor for implementing the traffic control method of the first aspect when executing the program stored in the memory.
According to a fourth aspect of embodiments of the present application, there is provided a machine-readable storage medium having stored therein a computer program which, when executed by a processor, implements the traffic control method of the first aspect.
According to a fifth aspect of the embodiments of the present application, a multiple gate system is provided, where the multiple gate system includes multiple gates, and each gate performs traffic control by using the traffic control method provided in the first aspect.
According to the traffic control method described above, face detection is performed on the image acquired by the target camera; when a face is detected, face key point feature information is extracted and paired to obtain key point pairs; the coordinates of the face in the traffic control device coordinate system are determined based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera, and the installation configuration parameters of the target camera; and whether the face position belongs to the channel region corresponding to the target channel is then determined based on those coordinates and the region range information of the target channel in the same coordinate system. Accurate positioning of the region in which the face is located is thus achieved without deploying a dedicated ranging hardware module.
Drawings
FIG. 1 is a flow chart diagram illustrating a traffic control method according to an exemplary embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a process of determining coordinates of a human face in a coordinate system of a traffic control device according to an exemplary embodiment of the present application;
fig. 3 is a schematic flowchart illustrating a process of determining a region range where a face position belongs to a target channel according to an exemplary embodiment of the present application;
FIG. 4 is a schematic front end deployment view of a multiple gate system shown in an exemplary embodiment of the present application;
fig. 5A is a schematic diagram illustrating a flow of positioning a gate channel where a human face is located according to an exemplary embodiment of the present application;
FIG. 5B is a schematic flow chart illustrating a position calculation according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram illustrating one type of triangulation in accordance with an exemplary embodiment of the present application;
fig. 7 is a schematic structural diagram of a traffic control device according to an exemplary embodiment of the present application;
fig. 8 is a schematic diagram illustrating a hardware structure of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make those skilled in the art better understand the technical solutions provided by the embodiments of the present application, some technical terms related to the embodiments of the present application will be described below.
Camera intrinsic parameters: parameters related to the characteristics of the camera itself, such as the camera's focal length and pixel size.
Camera extrinsic parameters: parameters of the camera in the world coordinate system, such as the camera's position and rotation. For a multi-lens camera, the extrinsic parameters may include the positional relationship between the lenses, such as rotation and translation; for multiple cameras, they may include the positional relationships between the cameras, likewise rotation and translation.
In order to make the aforementioned objects, features and advantages of the embodiments of the present application more comprehensible, embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of a traffic control method provided in an embodiment of the present application is schematically illustrated, where the traffic control method may be applied to a traffic control device, as shown in fig. 1, the traffic control method may include the following steps:
it should be noted that, in the embodiment of the present application, the traffic control device may include, but is not limited to, a gate or a door access, and the like, which needs to be controlled based on a face positioning result.
Illustratively, the gate may be a standalone single gate. In a multi-gate system, each gate performs passage control independently according to the scheme provided by the embodiments of the present application; that is, the gates in a multi-gate system are not linked.
S100, carrying out face detection on an image acquired by a target camera; wherein the target camera is mounted on the traffic control device, the target camera is a single camera including at least two lenses, or the target camera is at least two cameras.
In the embodiments of the present application, the target camera does not refer to one fixed camera; it may be any one or more cameras installed on the traffic control device for capturing faces. When the target camera is a single camera, that camera includes at least two lenses, i.e., it is a binocular or multi-view camera.
In the embodiment of the application, the background server can acquire the image of the associated area of the traffic control device through the target camera and perform face detection on the image acquired by the target camera.
For example, the background server may perform face detection on the image acquired by the target camera using a deep learning algorithm.
And step S110, when a human face is detected in the image acquired by the target camera, extracting the feature information of key points of the human face and pairing the key points to obtain key point pairs.
In the embodiment of the application, when the traffic control device detects a face in the image acquired by the target camera, face key points can be extracted from the images of different lenses of the target camera respectively, and the face key points in the images of different lenses can be matched.
For example, the images of different lenses for face key point extraction and matching are images acquired by different lenses of the target camera or a plurality of cameras included in the target camera at the same time (the time for acquiring the images is the same, or the time difference is within a preset error range).
For convenience of description and understanding, the target camera is taken as a binocular camera in the following description, that is, the traffic control device may respectively perform image acquisition through left and right lenses of the target camera, perform face detection on images acquired through the left and right lenses of the target camera, and when a face is detected, respectively perform face key point extraction and pairing on the images acquired through the left and right lenses to obtain a plurality of key point pairs (each key point pair includes one face key point extracted from the images acquired through the left and right lenses, respectively).
It should be noted that when the target camera comprises three or more cameras, or a single camera has three or more lenses, face key point pairing may be performed pairwise between each two views.
Taking a trinocular camera as an example, each face feature point corresponds to three key point pairs.
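The pairwise pairing above can be sketched in code. This is an illustrative sketch, not the patent's implementation: it assumes a landmark detector that returns face key points in a fixed semantic order, so points with the same index in two views image the same facial feature; `FEATURE_NAMES`, `pair_keypoints`, and `pairs_per_feature` are hypothetical names.

```python
from itertools import combinations

# Hypothetical fixed feature order produced by the landmark detector.
FEATURE_NAMES = ["left_pupil", "right_pupil", "nose_tip",
                 "mouth_left", "mouth_right"]

def pair_keypoints(left_pts, right_pts):
    """Pair key points from two views by feature index.

    left_pts, right_pts: lists of (u, v) pixel coordinates, one per
    feature, in the order of FEATURE_NAMES.
    Returns a list of (feature_name, left_uv, right_uv) key point pairs.
    """
    if len(left_pts) != len(FEATURE_NAMES) or len(right_pts) != len(FEATURE_NAMES):
        raise ValueError("both views must supply the full feature set")
    return list(zip(FEATURE_NAMES, left_pts, right_pts))

def pairs_per_feature(num_views):
    """Number of key point pairs per face feature point when pairing
    every two views of a multi-view camera (3 for a trinocular camera)."""
    return len(list(combinations(range(num_views), 2)))
```

For a binocular camera `pairs_per_feature(2)` is 1, and for a trinocular camera `pairs_per_feature(3)` is 3, matching the example in the text.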
And step S120, determining the coordinates of the face in the coordinate system of the traffic control equipment based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera and the installation configuration parameters of the target camera.
For example, the calibration parameters of the camera may include camera internal parameters and camera external parameters.
The camera intrinsic parameters are parameters related to the characteristics of the camera itself, such as the camera's focal length and pixel size.
The camera extrinsic parameters are parameters of the camera in the world coordinate system, such as the camera's position and rotation. For a multi-lens camera, the extrinsic parameters may include the positional relationship between the lenses, such as rotation and translation; for multiple cameras, they may include the positional relationships between the cameras, likewise rotation and translation.
In the embodiment of the application, for any face key point, the passing control device can determine the pixel coordinates of the face key point in the image.
Illustratively, the pixel coordinates of the key point pair include pixel coordinates of each key point included in the key point pair in the belonging image.
For example, for a binocular camera, one key point pair includes two key points, one key point belongs to an image captured by a left lens, and one key point belongs to an image captured by a right lens, and the pixel coordinates of the key point pair respectively include the pixel coordinates of the key point in the image captured by the left lens, and the pixel coordinates of the key point in the image captured by the right lens.
The passing control device can determine the coordinates of the face in the coordinate system of the passing control device based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera and the installation configuration parameters of the target camera.
For example, the installation configuration parameters of the camera may include an installation position and an installation angle (e.g., pitch angle, tilt angle) of the camera in the traffic control device.
Step S130, determining whether the face position belongs to a channel region corresponding to a target channel based on the coordinates of the face in the coordinate system of the traffic control device and the region range information of the target channel in the coordinate system of the traffic control device.
In the embodiment of the application, when the passing control device determines the coordinates of the face under the coordinate system of the passing control device, the face can be positioned based on the coordinates of the face under the coordinate system of the passing control device and the area range information of the target channel under the coordinate system of the passing control device, so as to determine whether the face position belongs to the channel area corresponding to the target channel.
Illustratively, the target passage is the passage area associated with the passage control device, i.e., the channel managed by that device; the passage control device controls the gate barrier to open or close based on whether the person in the channel area meets the passage condition.
For example, the area range information of the target passageway may be division information of a passageway area associated with the traffic control device, such as the width of the passageway area associated with the traffic control device (i.e., the target passageway), and the position of the boundary of the target passageway in the coordinate system of the traffic control device (which may be characterized by a straight line equation corresponding to the boundary of the target passageway).
For example, for any gate in a multi-gate system, the gate may determine the coverage area of the gate tunnel in the gate coordinate system based on the location of the left and right boundaries of the gate tunnel region in the gate coordinate system (which may be characterized by a straight line equation).
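The boundary-line representation just described admits a simple membership test. The sketch below is illustrative (hypothetical names): each boundary is given as the coefficients (a, b, c) of the line a*x + b*y + c = 0 in the gate coordinate system, and a point lies between the two boundaries when its signed distances to them have opposite signs.

```python
import math

def signed_distance(line, point):
    """Signed distance from a point to the line a*x + b*y + c = 0."""
    a, b, c = line
    x, y = point
    return (a * x + b * y + c) / math.hypot(a, b)

def inside_channel(left_line, right_line, point):
    """True when the point lies strictly between the left and right
    boundary lines of the gate channel (opposite signed distances)."""
    return signed_distance(left_line, point) * signed_distance(right_line, point) < 0
```

For a channel bounded by x = -0.3 and x = 0.3, i.e. lines (1, 0, 0.3) and (1, 0, -0.3), a face at x = 0 is inside the channel and one at x = 0.5 is not.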
It can be seen that in the method flow shown in fig. 1, by using the camera calibration parameters and the camera installation configuration parameters, when a face is detected in an image acquired by the target camera, face key points are extracted and matched to obtain key point pairs; the coordinates of the face in the traffic control device coordinate system are determined based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera, and the installation configuration parameters of the target camera; and it is then determined whether the face position belongs to the channel region corresponding to the target channel in that coordinate system. Accurate positioning of the region in which the face is located is thus achieved without deploying a dedicated ranging hardware module.
As a possible embodiment, as shown in fig. 2, in step S120, determining coordinates of the human face in the coordinate system of the traffic control device based on the pixel coordinates of the key point pair, the calibration parameters of the target camera, and the installation configuration parameters of the target camera may be implemented by the following steps:
step S121, for any key point pair, determining three-dimensional coordinates of the face characteristic point corresponding to the key point pair in a camera coordinate system based on the pixel coordinates of the key point pair and the calibration parameters of the target camera;
step S122, determining the coordinates of the face characteristic point in a coordinate system of the traffic control equipment based on the three-dimensional coordinates of the face characteristic point in the coordinate system of the camera and the installation configuration parameters of the target camera;
and S123, determining the position of the face under the coordinates of the traffic control equipment based on the coordinates of each face characteristic point under the traffic control coordinate system.
For example, when the key point pairs are determined in the manner described in step S110, for any key point pair, the three-dimensional coordinates of the face feature point corresponding to the key point pair in the camera coordinate system may be determined based on the pixel coordinates of the key point pair and the calibration parameters of the target camera.
Illustratively, the face feature points include actual face features such as pupils, corners of eyes, a nose, corners of a mouth, and the like, and the face key points refer to corresponding imaging features of the face feature points in the image.
In one example, the traffic control device may determine three-dimensional coordinates of the corresponding human face feature point of the key point pair in the camera coordinate system by triangulation based on the pixel coordinates of the key point pair and the calibration parameters of the target camera.
For example, the traffic control device may determine, based on the pixel coordinates of the key point pair and the camera parameters of the target camera, the coordinates of the face feature point corresponding to the key point in the undistorted normalized plane coordinate system, and determine, based on the coordinates of the face feature point in the undistorted normalized plane coordinate system and the camera parameters of the target camera, the three-dimensional coordinates of the face feature point in the camera coordinate system by triangulation.
For example, when there are three or more lenses or cameras, for any face feature point, the three-dimensional coordinates of that feature point in the camera coordinate system may be determined by triangulation from its face key points in the images of any two lenses or cameras. Multiple sets of three-dimensional coordinates of the same face feature point are thereby obtained, and the final three-dimensional coordinates of the face feature point in the camera coordinate system are determined based on these sets, thereby improving the positioning accuracy.
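The triangulation referred to above can be sketched with the standard linear (DLT) method. This is a minimal sketch under stated assumptions, not the patent's exact computation: the inputs are assumed to already be coordinates on the undistorted normalized image planes (intrinsics and distortion applied), and the extrinsic calibration is assumed to give the rotation `R` and translation `t` of the second lens relative to the first.

```python
import numpy as np

def triangulate(x1, x2, R, t):
    """Linear (DLT) triangulation of one face feature point from two views.

    x1, x2: (x, y) coordinates of the paired key points on the undistorted
    normalized image planes of lens 1 and lens 2.
    R, t: rotation (3x3) and translation (3,) of lens 2 relative to lens 1.
    Returns the 3D point in the lens-1 camera coordinate system.
    """
    # Projection matrices in normalized coordinates: lens 1 is the
    # reference frame [I | 0], lens 2 is [R | t].
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, np.asarray(t, float).reshape(3, 1)])

    # Each view contributes two homogeneous linear constraints on X:
    # x * P[2] - P[0] = 0 and y * P[2] - P[1] = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value; dehomogenize to obtain the 3D point.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

As a sanity check: with a 0.1 m horizontal baseline (R = I, t = (-0.1, 0, 0)), a feature observed at (0, 0) in view 1 and (-0.05, 0) in view 2 triangulates to (0, 0, 2), i.e. 2 m in front of lens 1.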
When the traffic control device determines the three-dimensional coordinates of the face feature point in the camera coordinate system, the coordinates of the face feature point in the traffic control device coordinate system may be determined based on the three-dimensional coordinates of the face feature point in the camera coordinate system and the installation configuration parameters of the target camera.
For example, the mapping relationship between the camera coordinate system of the target camera and the traffic control device coordinate system may be determined in advance based on the installation configuration parameters of the target camera; once the three-dimensional coordinates of the face feature point in the camera coordinate system are determined, the coordinates of the face feature point in the traffic control device coordinate system can be obtained from those coordinates and the predetermined mapping relationship.
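The predetermined mapping can be sketched as a rigid transform built from the installation configuration parameters. This is illustrative only, assuming a pitch-only mounting angle; a full deployment would use the complete rotation (pitch, yaw, roll) and the calibrated mounting position from the installation configuration.

```python
import numpy as np

def camera_to_device(p_cam, mount_position, pitch_deg):
    """Map a point from the camera coordinate system to the traffic
    control device coordinate system.

    Hypothetical simplification: the camera sits at `mount_position`
    (expressed in device coordinates) and is pitched by `pitch_deg`
    about the device x-axis; yaw and roll are taken to be zero.
    """
    a = np.deg2rad(pitch_deg)
    # Rotation of camera coordinates into device coordinates (pitch only).
    R = np.array([
        [1.0, 0.0, 0.0],
        [0.0, np.cos(a), -np.sin(a)],
        [0.0, np.sin(a), np.cos(a)],
    ])
    return R @ np.asarray(p_cam, float) + np.asarray(mount_position, float)
```

With zero pitch the transform reduces to a translation by the mounting position, which makes the mapping easy to verify against a tape measure during installation.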
As a possible embodiment, as shown in fig. 3, taking the traffic control device to be a gate, step S130 of determining whether the face position belongs to the channel region corresponding to the target channel, based on the coordinates of the face in the traffic control device coordinate system and the region range information of the target channel in that coordinate system, may be implemented by the following steps:
step S131, determining the distance from the position of the human face to the center line of the gate channel region corresponding to the gate based on the coordinate of the human face in the gate coordinate system and the region range information of the target channel in the gate coordinate system.
And S132, when the distance from the face position to the central line of the gate channel area corresponding to the gate is smaller than a preset distance threshold value, determining that the face position belongs to the gate channel area corresponding to the gate.
And S133, when the distance from the face position to the central line of the gate passage area corresponding to the gate is greater than or equal to a preset distance threshold value, determining that the face position does not belong to the gate passage area corresponding to the gate.
For example, taking the traffic control device as a gate: in order to more accurately locate the gate channel in which the face is located, once the coordinates of the face in the gate coordinate system have been determined, the distance from the face position to the centerline of the gate channel region corresponding to the gate may be determined and compared with a preset distance threshold.
And when the distance from the face position to the central line of the gate channel region corresponding to the gate is smaller than a preset distance threshold value, determining that the face position belongs to the gate channel region corresponding to the gate.
And when the distance from the face position to the central line of the gate passage area corresponding to the gate is greater than or equal to a preset distance threshold value, determining that the face position does not belong to the gate passage area corresponding to the gate.
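The threshold test of steps S131 to S133 can be sketched as follows (illustrative, with hypothetical names): assuming the gate coordinate system's x-axis runs across the channel, the distance to the centerline reduces to |x - c|. Using half the channel width as the preset distance threshold is one plausible choice, not a value specified by the patent.

```python
def belongs_to_channel(face_x, centerline_x, channel_width):
    """Steps S131-S133 sketch: the face belongs to the gate channel
    region when its distance to the channel centerline is strictly
    below the preset threshold (here: half the channel width)."""
    distance = abs(face_x - centerline_x)
    threshold = channel_width / 2.0
    return distance < threshold
```

Note that the comparison is strict: a distance equal to the threshold is treated as outside the channel, matching the "greater than or equal to" branch of step S133.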
In an example, after determining that the face position belongs to the gate passage area corresponding to the gate in step S132, the method may further include:
when the target person related to the face meets the passing condition, the gate of the gate machine is controlled to be opened, so that the target person passes.
For example, the traffic control may be implemented in the form of a whitelist and/or a blacklist.
For example, face information (i.e., a white list) of a person who is allowed to pass can be preconfigured, when the face position is determined to belong to a gate channel region corresponding to the gate, the current face information can be compared with the face information in the white list, if matched face information exists, a target person related to the face is determined to meet a passing condition, and a gate of the gate is controlled to be opened so that the target person passes; otherwise, the gate of the gate machine is refused to be opened so as to forbid the target personnel to pass.
For another example, face information (namely, a blacklist) of a person who is forbidden to pass can be configured in advance, when the face position is determined to belong to a gate channel region corresponding to the gate, the current face information can be compared with the face information in the blacklist, if matched face information does not exist, a target person related to the face is determined to meet a passing condition, and a gate of the gate is controlled to be opened so that the target person passes; otherwise, the gate of the gate machine is refused to be opened so as to forbid the target person to pass.
For another example, the white list and the black list may be configured respectively, when it is determined that the face position belongs to the gate channel region corresponding to the gate, the current face information may be compared with the face information in the black and white list, if there is matching face information in the white list, it is determined that the target person associated with the face meets the passing condition, and the gate of the gate is controlled to be opened, so that the target person passes; if the blacklist has the matched face information, the gate of the gate machine is refused to be opened so as to forbid the target person to pass; if no matched face information exists in the black and white list, the target person can be prompted to execute specified operation, such as card swiping or gate opening request initiated to a person with control authority, and when a specified card swiping instruction is detected, or a gate opening instruction initiated by the person with control authority is received, the gate of the gate machine is controlled to be opened, so that the target person passes through.
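The combined whitelist/blacklist decision described above can be sketched as follows (a minimal illustration: in practice the identifier would come from a face-feature match against each list rather than a simple set lookup, and all names are hypothetical):

```python
def passing_decision(face_id, whitelist, blacklist):
    """Combined whitelist/blacklist check.

    Returns "open" when the face matches the whitelist, "refuse" when it
    matches the blacklist, and "prompt" when it matches neither (the
    person is then asked to swipe a card or request a manual opening).
    The whitelist is checked first, mirroring the order in the text.
    """
    if face_id in whitelist:
        return "open"
    if face_id in blacklist:
        return "refuse"
    return "prompt"
```

The "prompt" branch corresponds to the fallback described above, in which a card-swipe or an authorized gate-opening instruction can still open the gate.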
In order to enable those skilled in the art to better understand the technical solutions provided in the embodiments of the present application, the following describes the technical solutions provided in the embodiments of the present application with reference to application scenarios.
Referring to fig. 4, a schematic front-end deployment diagram of a multi-gate system provided in an embodiment of the present application is shown. As shown in fig. 4, the multi-gate system may include a plurality of gates disposed in parallel (gates A, B and C are shown as an example), each gate being provided with a corresponding RGB-IR (color-infrared) camera (i.e., the RGB-IR camera is an example of the target camera). Each gate operates independently, that is, there is no linkage between the gates, and each gate controls its own gate to open or close independently.
Based on the multi-gate system shown in fig. 4, for any gate in the multi-gate system, the implementation flow of the positioning method provided by the embodiment of the application is as follows:
as shown in fig. 5A, in order to locate the gate channel where the face is, the following operations need to be performed:
1. calibrating the internal and external parameters of the RGB-IR camera;
2. acquiring the camera-gate installation configuration parameters (including the installation position and installation angle of the camera, the gate width, the spacing between gates, and the like);
3. performing face detection on the RGB image and the IR image of the RGB-IR camera respectively through a deep learning face detection algorithm, and, when a face is detected, extracting face key point feature information for matching;
4. calculating the gate channel where the face is located based on the key point pair information, the internal and external parameters of the RGB-IR camera, and the camera-gate installation configuration parameters.
As shown in fig. 5B, for the above steps 3 and 4, the specific implementation flow is as follows:
extracting high-precision human face key point matching pairs (human face key point pairs in images acquired by a left lens and a right lens) by using a deep learning algorithm;
Based on the key point information and the internal and external parameters of the RGB-IR camera, the pixel coordinates (two-dimensional) of the key points are converted into undistorted normalized plane coordinates (two-dimensional). That is, distortion correction is performed on the RGB image and the IR image of the RGB-IR camera respectively, and the pixel coordinates of the key points in the distortion-corrected RGB image and in the distortion-corrected IR image are determined. Then, based on these corrected pixel coordinates, the coordinates (three-dimensional) of the face feature point corresponding to each key point pair in the camera coordinate system (a coordinate system established with one of the optical centers of the RGB-IR camera as the origin) are determined by triangulation; that is, the distance from the feature point to the camera optical center (the origin of the coordinate system) is determined.
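The pixel-to-normalized-plane conversion in this step can be sketched with a simple pinhole model (a sketch only, assuming distortion has already been corrected; fx, fy, cx, cy are the usual intrinsic parameters and the names are illustrative):

```python
def pixel_to_normalized(u, v, fx, fy, cx, cy):
    """Convert a pixel coordinate (u, v) to undistorted normalized
    image-plane coordinates under a pinhole model. fx and fy are the
    focal lengths in pixels and (cx, cy) is the principal point; lens
    distortion is assumed to have been corrected beforehand.
    """
    return (u - cx) / fx, (v - cy) / fy
```

A point at the principal point maps to (0, 0) on the normalized plane; offsets are scaled by the focal length in pixels.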
For example, referring to FIG. 6, suppose O_L and O_R are the optical centers of the left and right lenses, respectively, P is a feature point of a certain face, and p_l and p_r are the imaging positions of the feature point P in the images acquired by the left and right lenses, respectively.
Since the focal lengths of the left and right lenses (i.e., the distances from the optical centers to their imaging planes) and the pixel sizes (i.e., the actual size corresponding to one pixel) are known (determined by camera internal parameter calibration), the angle α between the straight line O_L P and the image plane of the left lens, and the angle β between the straight line O_R P and the image plane of the right lens, can be obtained.
Illustratively, tan α = f1/L1 and tan β = f2/L2, where f1 and f2 are the focal lengths of the left and right lenses, respectively; L1 is the distance from p_l to the center of the left imaging plane (which may be determined based on the pixel coordinates of p_l, the pixel coordinates of the center of the left imaging plane, and the pixel size); and L2 is the distance from p_r to the center of the right imaging plane (which may be determined in the same way).
Since the length of the line segment O_L O_R, the angle between the segment O_L O_R and the line connecting O_L to the center of the left imaging plane, and the angle between the segment O_L O_R and the line connecting O_R to the center of the right imaging plane are all known (determined by camera external parameter calibration), the angle between the line O_L P and the segment O_L O_R and the angle between the line O_R P and the segment O_L O_R can be obtained. The triangle O_L P O_R is then fully determined, so the lengths of the segments O_L P and O_R P can be obtained, i.e., the coordinates of the feature point P in the camera coordinate system can be obtained.
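The triangle geometry above can be sketched in a simplified two-dimensional form (an assumption-laden sketch: the two rays and the baseline are taken to lie in one plane, with O_L at the origin and O_R on the x-axis, and the angles are measured between each ray and the baseline):

```python
import math

def triangulate_2d(baseline, theta_l, theta_r):
    """Intersect the rays O_L->P and O_R->P in the plane.

    O_L is at the origin and O_R at (baseline, 0). theta_l and theta_r
    are the angles (radians) between each ray and the baseline, both
    measured on the side of P. Returns (x, z), the position of P.
    """
    tl, tr = math.tan(theta_l), math.tan(theta_r)
    # Ray from O_L: z = x * tan(theta_l); ray from O_R: z = (baseline - x) * tan(theta_r).
    x = baseline * tr / (tl + tr)
    z = x * tl
    return x, z
```

With equal 45° angles on both sides, P lies midway between the optical centers at a depth of half the baseline, which is a convenient sanity check.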
Based on the coordinates of each face feature point in the camera coordinate system, the coordinates of the face in the camera coordinate system are determined through filtering (i.e., correcting or removing feature point coordinates with obvious deviation).
For example, the coordinates of the face in the camera coordinate system may be represented by the filtered coordinates of the plurality of feature points, or the filtered coordinates of the plurality of feature points may be normalized or averaged to obtain the coordinates of a single feature point, which then represents the face.
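The filtering-and-fusion step can be sketched as follows (the outlier tolerance max_dev is an assumed parameter, and median-based rejection is only one plausible choice of filter, not the one mandated by the text):

```python
import statistics

def fuse_face_position(points, max_dev=0.15):
    """Fuse per-feature-point 3D coordinates into a single face position.

    points: list of (x, y, z) feature-point coordinates in the camera
    coordinate system. Points farther than max_dev from the
    coordinate-wise median are discarded as obvious deviations; the
    remaining points are averaged.
    """
    med = tuple(statistics.median(c) for c in zip(*points))

    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    # Fall back to all points if the tolerance rejects everything.
    kept = [p for p in points if dist(p, med) <= max_dev] or list(points)
    n = len(kept)
    return tuple(sum(c) / n for c in zip(*kept))
```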
When the coordinates of the face in the camera coordinate system are determined, the coordinates of the face in the gate coordinate system can be determined based on these coordinates and the camera-gate installation configuration parameters, and the distance from the face to the centerline of the gate channel region corresponding to the gate can then be determined. When the distance is smaller than the preset distance threshold, it is determined that the face position belongs to the gate channel region corresponding to the gate, and when the target person associated with the face meets the passing condition, the gate of the gate machine is controlled to open so that the target person may pass; when the distance is greater than or equal to the preset distance threshold, it is determined that the face position does not belong to the gate channel region corresponding to the gate, and opening the gate is refused.
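A minimal sketch of the camera-to-gate conversion and the centerline test (2D only; the yaw angle and (x, z) offset stand in for the installation configuration parameters, the lane centerline is taken as x = 0 in the gate coordinate system, and the sign conventions are assumptions):

```python
import math

def face_in_lane(face_cam, yaw_deg, cam_offset, dist_threshold):
    """Decide whether a face lies in this gate's channel region.

    face_cam: (x, y, z) face position in the camera coordinate system.
    yaw_deg: assumed camera yaw about the vertical axis; cam_offset:
    assumed (x, z) position of the camera in the gate coordinate system
    (x across the lane, z along it). dist_threshold is the preset
    distance threshold from the lane centerline (x = 0).
    Returns (lateral_distance, in_lane).
    """
    x, _, z = face_cam
    yaw = math.radians(yaw_deg)
    # Rotate about the vertical axis, then translate into gate coordinates.
    gx = math.cos(yaw) * x + math.sin(yaw) * z + cam_offset[0]
    lateral = abs(gx)
    return lateral, lateral < dist_threshold
```

The strict less-than comparison mirrors the rule above: a face exactly at the threshold distance is treated as outside the channel region.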
Therefore, the positioning method provided by the embodiment of the present application can be directly reused on existing RGB-IR devices without adding extra ranging hardware modules (such as structured light or TOF) and without strict calibration (such as epipolar alignment); only ordinary dual-camera internal and external parameter calibration is needed, so the cost is low and the method is easy to popularize.
In addition, compared with a scheme that realizes multi-channel positioning based on the position and size of a face frame from a single camera, the positioning method provided by the embodiment of the present application is more accurate, and is more robust in cases such as side faces and long distances.
The methods provided herein are described above. The following describes the apparatus provided in the present application:
referring to fig. 7, a schematic structural diagram of a traffic control device according to an embodiment of the present application is shown in fig. 7, where the traffic control device may include:
a detection unit 710, configured to perform face detection on an image acquired by a target camera; wherein the target camera is mounted on the traffic control device, the target camera is a single camera comprising at least two lenses, or the target camera is a plurality of cameras;
an extracting unit 720, configured to, when a face is detected in the image acquired by the target camera, extract face key point feature information and perform pairing to obtain a key point pair;
a passing control unit 730, configured to determine coordinates of a human face in a coordinate system of the passing control device based on the pixel coordinates of the key point pair, the calibration parameters of the target camera, and the installation configuration parameters of the target camera; the calibration parameters of the camera comprise camera internal parameters and camera external parameters;
the passing control unit 730 is further configured to determine whether the face position belongs to a channel region corresponding to the target channel based on the coordinates of the face in the passing control device coordinate system and the region range information of the target channel in the passing control device coordinate system; wherein the target passage is a passage area associated with the passage control device.
In a possible embodiment, the passing control unit 730 determines coordinates of the human face in the passing control device coordinate system based on the pixel coordinates of the key point pair, the calibration parameters of the target camera and the installation configuration parameters of the target camera, including:
for any key point pair, determining the three-dimensional coordinates of the human face characteristic points corresponding to the key point pair in a camera coordinate system based on the pixel coordinates of the key point pair and the calibration parameters of the target camera;
determining the coordinates of the face characteristic point in the coordinate system of the traffic control equipment based on the three-dimensional coordinates of the face characteristic point in the coordinate system of the camera and the installation configuration parameters of the target camera;
and determining the position of the face under the coordinate system of the traffic control equipment based on the coordinates of the face characteristic points under the coordinate system of the traffic control equipment.
In a possible embodiment, the passing control unit 730 determines three-dimensional coordinates of the face feature point corresponding to the key point pair in the camera coordinate system based on the pixel coordinates of the key point pair and the calibration parameters of the target camera, including:
determining the coordinates of the facial feature points corresponding to the key points under the undistorted normalized plane coordinate system based on the pixel coordinates of the key point pairs and the camera internal parameters of the target camera;
and determining the coordinates of the face characteristic point in the camera coordinate system by utilizing a triangulation mode based on the coordinates of the face characteristic point in the undistorted normalized plane coordinate system and the camera external parameters of the target camera.
In a possible embodiment, the traffic control device is a gate,
the passing control unit 730 determines whether the face position belongs to a channel region corresponding to the target channel based on the coordinates of the face in the passing control device coordinate system and the region range information of the target channel in the passing control device coordinate system, including:
determining the distance from the position of the human face to the center line of the gate channel region corresponding to the gate based on the coordinates of the human face in the gate coordinate system and the region range information of the target channel in the gate coordinate system;
when the distance from the face position to the central line of the gate channel area corresponding to the gate is smaller than a preset distance threshold value, determining that the face position belongs to the gate channel area corresponding to the gate;
and when the distance from the face position to the central line of the gate passage area corresponding to the gate is greater than or equal to a preset distance threshold value, determining that the face position does not belong to the gate passage area corresponding to the gate.
In a possible embodiment, after the passing control unit 730 determines that the face position belongs to the gate passage area corresponding to the gate, the method further includes:
and when the target person related to the face meets the passing condition, controlling the gate of the gate machine to be opened so as to enable the target person to pass.
In one possible embodiment, the installation configuration parameters include an installation position and an installation angle of the subject camera in the traffic control device.
Please refer to fig. 8, which is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure. The electronic device may be a gate device, or alternatively, a door access device, which may include a processor 801, a communication interface 802, a memory 803, and a communication bus 804. The processor 801, the communication interface 802, and the memory 803 communicate with each other via a communication bus 804. Among them, the memory 803 stores a computer program; the processor 801 may execute the above-described traffic control method by executing a program stored on the memory 803.
The memory 803 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the memory 803 may be: RAM (random access memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The present embodiment also provides a machine-readable storage medium, such as the memory 803 in fig. 8, storing a computer program, which can be executed by the processor 801 in the electronic device shown in fig. 8 to implement the traffic control method described above.
Embodiments of the present application also provide a computer program, which is stored in a machine-readable storage medium, such as the memory 803 in fig. 8, and when executed by the processor, causes the processor 801 to execute the traffic control method described above.
The embodiment of the application also provides a multi-gate system, which comprises a plurality of gates, wherein each gate adopts the traffic control method described above to perform traffic control.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (14)

1. A traffic control method, comprising:
carrying out face detection on an image acquired by a target camera; wherein the target camera is mounted on the traffic control device, the target camera is a single camera comprising at least two lenses, or the target camera is at least two cameras;
when a face is detected in an image acquired by the target camera, extracting face key point feature information and pairing to obtain key point pairs;
determining coordinates of the face under the coordinate system of the traffic control equipment based on the pixel coordinates of the key point pair, the calibration parameters of the target camera and the installation configuration parameters of the target camera; the calibration parameters of the camera comprise camera internal parameters and camera external parameters;
determining whether the position of the face belongs to a channel region corresponding to a target channel based on the coordinates of the face in the coordinate system of the traffic control equipment and the region range information of the target channel in the coordinate system of the traffic control equipment; wherein the target passage is a passage area associated with the passage control device.
2. The method of claim 1, wherein the determining coordinates of the human face in a coordinate system of a traffic control device based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera and the installation configuration parameters of the target camera comprises:
for any key point pair, determining the three-dimensional coordinates of the human face characteristic points corresponding to the key point pair in a camera coordinate system based on the pixel coordinates of the key point pair and the calibration parameters of the target camera;
determining the coordinates of the face characteristic point under a coordinate system of the traffic control equipment based on the three-dimensional coordinates of the face characteristic point under a camera coordinate system and the installation configuration parameters of the target camera;
and determining the position of the face under the coordinate system of the traffic control equipment based on the coordinates of the face characteristic points under the coordinate system of the traffic control equipment.
3. The method of claim 2, wherein determining three-dimensional coordinates of the face feature point corresponding to the key point pair in a camera coordinate system based on the pixel coordinates of the key point pair and calibration parameters of the target camera comprises:
determining the coordinates of the facial feature points corresponding to the key points under the undistorted normalized plane coordinate system based on the pixel coordinates of the key point pairs and the camera internal parameters of the target camera;
and determining the three-dimensional coordinates of the face characteristic point in the camera coordinate system by utilizing a triangulation mode based on the coordinates of the face characteristic point in the undistorted normalized plane coordinate system and the camera external parameters of the target camera.
4. The method according to any one of claims 1 to 3, wherein the traffic control device is a gate,
the determining whether the face position belongs to a channel region corresponding to the target channel based on the coordinates of the face in the coordinate system of the traffic control device and the region range information of the target channel in the coordinate system of the traffic control device includes:
determining the distance from the position of the human face to the center line of the gate channel region corresponding to the gate based on the coordinates of the human face in the gate coordinate system and the region range information of the target channel in the gate coordinate system;
when the distance from the face position to the central line of the gate channel area corresponding to the gate is smaller than a preset distance threshold value, determining that the face position belongs to the gate channel area corresponding to the gate;
and when the distance from the face position to the central line of the gate passage area corresponding to the gate is greater than or equal to a preset distance threshold value, determining that the face position does not belong to the gate passage area corresponding to the gate.
5. The method according to claim 4, wherein after determining that the face position belongs to a gate passage area corresponding to the gate, the method further comprises:
and when the target person related to the face meets the passing condition, controlling the gate of the gate machine to be opened so as to enable the target person to pass.
6. The method of any of claims 1-3, wherein the installation configuration parameters include an installation location and an installation angle of the subject camera in the traffic control device.
7. A traffic control device, comprising:
the detection unit is used for carrying out face detection on the image acquired by the target camera; wherein the target camera is mounted on the traffic control device, the target camera is a single camera comprising at least two lenses, or the target camera is at least two cameras;
the extracting unit is used for extracting the feature information of key points of the human face and pairing the key points to obtain key point pairs when the human face is detected in the image acquired by the target camera;
the passing control unit is used for determining the coordinates of the human face under the coordinate system of the passing control equipment based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera and the installation configuration parameters of the target camera; the calibration parameters of the camera comprise camera internal parameters and camera external parameters;
the passing control unit is further configured to determine whether the face position belongs to a channel region corresponding to a target channel based on the coordinates of the face in the passing control device coordinate system and the region range information of the target channel in the passing control device coordinate system; wherein the target passage is a passage area associated with the passage control device.
8. The apparatus of claim 7, wherein the traffic control unit determines coordinates of the human face in a traffic control device coordinate system based on the pixel coordinates of the key point pairs, the calibration parameters of the target camera and the installation configuration parameters of the target camera, and comprises:
for any key point pair, determining the three-dimensional coordinates of the human face characteristic points corresponding to the key point pair in a camera coordinate system based on the pixel coordinates of the key point pair and the calibration parameters of the target camera;
determining the coordinates of the face characteristic point in the coordinate system of the traffic control equipment based on the three-dimensional coordinates of the face characteristic point in the coordinate system of the camera and the installation configuration parameters of the target camera;
and determining the position of the face under the coordinate system of the traffic control equipment based on the coordinates of the face characteristic points under the coordinate system of the traffic control equipment.
9. The apparatus of claim 8, wherein the traffic control unit determines three-dimensional coordinates of the face feature point corresponding to the key point pair in a camera coordinate system based on the pixel coordinates of the key point pair and the calibration parameters of the target camera, and comprises:
determining the coordinates of the facial feature points corresponding to the key points under the undistorted normalized plane coordinate system based on the pixel coordinates of the key point pairs and the camera internal parameters of the target camera;
and determining the three-dimensional coordinates of the face characteristic point in the camera coordinate system by utilizing a triangulation mode based on the coordinates of the face characteristic point in the undistorted normalized plane coordinate system and the camera external parameters of the target camera.
10. The apparatus according to any one of claims 7 to 9, wherein the traffic control device is a gate,
the passing control unit determines whether the face position belongs to a passage area corresponding to a target passage based on the coordinates of the face in the passing control equipment coordinate system and the area range information of the target passage in the passing control equipment coordinate system, and the determination comprises the following steps:
determining the distance from the position of the human face to the center line of the gate channel region corresponding to the gate based on the coordinates of the human face in the gate coordinate system and the region range information of the target channel in the gate coordinate system;
when the distance from the face position to the central line of the gate channel area corresponding to the gate is smaller than a preset distance threshold value, determining that the face position belongs to the gate channel area corresponding to the gate;
and when the distance from the face position to the central line of the gate passage area corresponding to the gate is greater than or equal to a preset distance threshold value, determining that the face position does not belong to the gate passage area corresponding to the gate.
11. The apparatus according to claim 10, wherein after the passing control unit determines that the face position belongs to a gate passage area corresponding to the gate, the apparatus further comprises:
when the target person related to the face meets the passing condition, the gate of the gate machine is controlled to be opened, so that the target person passes.
12. The apparatus of any of claims 7-9, wherein the installation configuration parameters include an installation location and an installation angle of the subject camera in the traffic control device.
13. A gate device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1 to 6 when executing a program stored in the memory.
14. A multiple gate system comprising a plurality of gates, each gate being traffic controlled using the method of any of claims 1-6.
CN202010556457.6A 2020-06-17 2020-06-17 Traffic control method and device, gate equipment and multi-gate system Active CN111784885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010556457.6A CN111784885B (en) 2020-06-17 2020-06-17 Traffic control method and device, gate equipment and multi-gate system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010556457.6A CN111784885B (en) 2020-06-17 2020-06-17 Traffic control method and device, gate equipment and multi-gate system

Publications (2)

Publication Number Publication Date
CN111784885A true CN111784885A (en) 2020-10-16
CN111784885B CN111784885B (en) 2023-06-27

Family

ID=72756676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010556457.6A Active CN111784885B (en) 2020-06-17 2020-06-17 Traffic control method and device, gate equipment and multi-gate system

Country Status (1)

Country Link
CN (1) CN111784885B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380965A (en) * 2020-11-11 2021-02-19 浙江大华技术股份有限公司 Method for face recognition and multi-view camera
CN112530059A (en) * 2020-11-24 2021-03-19 厦门熵基科技有限公司 Channel gate inner draw-bar box judgment method, device, equipment and storage medium
CN112735001A (en) * 2020-12-10 2021-04-30 南京熊猫电子股份有限公司 Mobile terminal gating system and method based on machine interaction technology
CN113610051A (en) * 2021-08-26 2021-11-05 合众新能源汽车有限公司 Face ranging method, device and computer readable medium based on face registration
CN113626797A (en) * 2021-08-09 2021-11-09 杭州海康威视数字技术股份有限公司 Method for reducing false triggering of multi-channel gate and gate system
CN113673473A (en) * 2021-08-31 2021-11-19 浙江大华技术股份有限公司 Gate control method and device, electronic equipment and storage medium
CN117392585A (en) * 2023-10-24 2024-01-12 广州广电运通智能科技有限公司 Gate traffic detection method and device, electronic equipment and storage medium
CN117994890A (en) * 2024-03-29 2024-05-07 杭州海康威视数字技术股份有限公司 Access control system, method and access control equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108012083A (en) * 2017-12-14 2018-05-08 深圳云天励飞技术有限公司 Face acquisition method, device and computer-readable recording medium
CN108564041A (en) * 2018-04-17 2018-09-21 广州云从信息科技有限公司 A kind of Face datection and restorative procedure based on RGBD cameras
CN109657580A (en) * 2018-12-07 2019-04-19 南京高美吉交通科技有限公司 A kind of urban track traffic gate passing control method
CN109671190A (en) * 2018-11-27 2019-04-23 杭州天翼智慧城市科技有限公司 A kind of multi-pass barrier gate device management method and system based on recognition of face
CN110379050A (en) * 2019-06-06 2019-10-25 上海学印教育科技有限公司 A kind of gate control method, apparatus and system
CN110390745A (en) * 2019-06-03 2019-10-29 浙江大华技术股份有限公司 Gate control method, system, readable storage medium storing program for executing and equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108012083A (en) * 2017-12-14 2018-05-08 深圳云天励飞技术有限公司 Face acquisition method, device and computer-readable recording medium
CN108564041A (en) * 2018-04-17 2018-09-21 广州云从信息科技有限公司 A face detection and restoration method based on RGBD cameras
CN109671190A (en) * 2018-11-27 2019-04-23 杭州天翼智慧城市科技有限公司 A multi-channel barrier gate device management method and system based on face recognition
CN109657580A (en) * 2018-12-07 2019-04-19 南京高美吉交通科技有限公司 An urban rail transit gate passage control method
CN110390745A (en) * 2019-06-03 2019-10-29 浙江大华技术股份有限公司 Gate control method, system, readable storage medium, and device
CN110379050A (en) * 2019-06-06 2019-10-25 上海学印教育科技有限公司 A gate control method, apparatus and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Yanxiang: "Introduction to New Media Science Popularization" (《新媒体科普概论》), 31 May 2020 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380965A (en) * 2020-11-11 2021-02-19 浙江大华技术股份有限公司 Method for face recognition and multi-view camera
CN112380965B (en) * 2020-11-11 2024-04-09 浙江大华技术股份有限公司 Face recognition method and multi-camera
CN112530059A (en) * 2020-11-24 2021-03-19 厦门熵基科技有限公司 Channel gate inner draw-bar box judgment method, device, equipment and storage medium
CN112530059B (en) * 2020-11-24 2022-07-05 厦门熵基科技有限公司 Channel gate inner draw-bar box judgment method, device, equipment and storage medium
CN112735001A (en) * 2020-12-10 2021-04-30 南京熊猫电子股份有限公司 Mobile terminal gating system and method based on machine interaction technology
CN113626797A (en) * 2021-08-09 2021-11-09 杭州海康威视数字技术股份有限公司 Method for reducing false triggering of multi-channel gate and gate system
CN113610051A (en) * 2021-08-26 2021-11-05 合众新能源汽车有限公司 Face ranging method, device and computer readable medium based on face registration
CN113610051B (en) * 2021-08-26 2023-11-17 合众新能源汽车股份有限公司 Face ranging method, equipment and computer readable medium based on face registration
CN113673473A (en) * 2021-08-31 2021-11-19 浙江大华技术股份有限公司 Gate control method and device, electronic equipment and storage medium
CN117392585A (en) * 2023-10-24 2024-01-12 广州广电运通智能科技有限公司 Gate traffic detection method and device, electronic equipment and storage medium
CN117994890A (en) * 2024-03-29 2024-05-07 杭州海康威视数字技术股份有限公司 Access control system, method and access control equipment

Also Published As

Publication number Publication date
CN111784885B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN111784885B (en) Traffic control method and device, gate equipment and multi-gate system
CN111780673B (en) Distance measurement method, device and equipment
CN107980138B (en) False alarm obstacle detection method and device
US20180240265A1 (en) Systems and Methods for Depth-Assisted Perspective Distortion Correction
CN101527046B (en) Motion detection method, device and system
CN107316326B (en) Edge-based disparity map calculation method and device applied to binocular stereo vision
CN113196007B (en) Camera system applied to vehicle
CN110956114A (en) Face living body detection method, device, detection system and storage medium
WO2017138245A1 (en) Image processing device, object recognition device, device control system, and image processing method and program
CN110503760B (en) Access control method and access control system
CN109426277B (en) Method and device for planning movement track
KR102103944B1 (en) Distance and position estimation method of autonomous vehicle using mono camera
WO2020154990A1 (en) Target object motion state detection method and device, and storage medium
CN111626086A (en) Living body detection method, living body detection device, living body detection system, electronic device, and storage medium
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
CN111382607B (en) Living body detection method, living body detection device and face authentication system
CN105335934A (en) Disparity map calculating method and apparatus
CN116977446A (en) Multi-camera small target identification and joint positioning method and system
KR101501531B1 (en) Stereo vision-based pedestrian detection system and method
KR101414158B1 (en) Apparatus and method for identifying a face
CN111383255A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110176037A (en) A fast target-distance estimation method for outdoor road driving assistance
CN109784315B (en) Tracking detection method, device and system for 3D obstacle and computer storage medium
KR101470939B1 (en) A system and method for detecting faces using CCTV
CN112232121A (en) Living body detection method, apparatus, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant