WO2019157989A1 - Biological feature 3d data acquisition method and biological feature 3d data recognition method - Google Patents

Info

Publication number
WO2019157989A1
WO2019157989A1 · PCT/CN2019/074455 · CN2019074455W
Authority
WO
WIPO (PCT)
Prior art keywords
biometric
information
feature
data
visible light
Prior art date
Application number
PCT/CN2019/074455
Other languages
French (fr)
Chinese (zh)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201810152242.0A external-priority patent/CN108446597B/en
Priority claimed from CN201810211276.2A external-priority patent/CN108416312B/en
Application filed by 左忠斌 filed Critical 左忠斌
Publication of WO2019157989A1 publication Critical patent/WO2019157989A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features

Definitions

  • the invention relates to the field of biometric identification technology, in particular to a biometric 3D data acquisition method and a recognition method.
  • a biological characteristic is an inherent physiological or behavioral characteristic of a living being, such as a fingerprint, a palm print, an iris, or a human face.
  • Biometric features have a certain uniqueness and stability: a given biometric differs considerably between any two organisms, and biometric features generally do not change greatly over time, which makes them suitable for use as authentication information in authentication or identification systems.
  • the current biometric data is 2D data of the spatial plane.
  • the data applications of the head and face remain simple image applications, that is, the head and face data are processed, recognized, and otherwise applied only from a certain angle.
  • hand features are likewise mainly identified by 2D methods using one or several hand characteristics, and some criminals, working from collected 2D pictures of the hand, have imitated 2D hand features and deceived part of the identification systems, posing a great risk to personal information security.
  • the present invention has been made in order to provide a biometric 3D data recognition method based on visible light photographing that overcomes the above problems or at least partially solves them.
  • the invention provides a biometric 3D data acquisition method, comprising:
  • Start device: after turning on the power switch, the power management module starts to supply power to each module of the system, and the camera matrix, central control module, shadowless lighting system, and display module are started simultaneously;
  • B. Human hand placement: place the human hand on the transparent glass cover and adjust its position so that all of the hand's information falls within the collection field; because a shadowless lighting system is used, the hand information collected from every angle is free of shadows. The device includes a virtual hand position that indicates where the human hand should be placed, ensuring that the entire hand falls within the range of camera matrix information collection;
  • the camera matrix is started and begins collecting information of the hand; the collected information is transmitted to the central control module in the form of pictures for analysis and processing; the camera matrix composed of multiple visible light cameras collects the biometric information to obtain multiple biometric images;
  • the signal collected by the camera matrix is transmitted to the central control module for signal processing, and feature point cloud data of the biometric are generated based on the feature points extracted from the plurality of biometric images, including: performing feature point matching according to the characteristics of the respective feature points extracted from the plurality of biometric images and establishing a matched feature point data set; calculating, by the bundle adjustment method and according to the optical information of the plurality of visible light cameras, the relative position in space of each camera with respect to the biometric, and calculating the spatial depth information of the feature points in the plurality of biometric images according to the relative positions; and generating the feature point cloud data of the biometric according to the matched feature point data set and the spatial depth information of the feature points;
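  • A minimal sketch (not the patent's implementation) of how the matched feature point data set could be built: a nearest-neighbour search over 128-dimensional descriptors with Lowe's ratio test. The function name and the 0.75 threshold are illustrative assumptions:

```python
import numpy as np

def match_feature_points(desc_a, desc_b, ratio=0.75):
    """Match two sets of SIFT-style descriptors with Lowe's ratio test.

    desc_a, desc_b: (N, 128) and (M, 128) float arrays.
    Returns (index_a, index_b) pairs forming the matched feature point data set.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # keep the match only if it is clearly better than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```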
  • the fingertip fingerprint features also constitute a unique biometric; therefore, after collecting the feature points of the hand's fingers, the non-fingertip information first needs to be filtered out by an algorithm. The overall idea of the algorithm is as follows:
  • Constructing a 3D model of the biometric according to the feature point cloud data, to implement acquisition of the biometric 3D data, comprises: setting a reference size for the 3D model to be constructed; and determining, according to the reference size and the spatial position information in the feature point cloud data, the spatial size of each feature point in the feature point cloud data, thereby constructing the 3D model of the biometric;
  • the time at which the biometric information is collected by the multiple visible light cameras is recorded, and a 3D model of the biometric with a time dimension is constructed according to the feature point cloud data and the time data, realizing the collection of biometric four-dimensional data.
  • the features of the respective feature points in the plurality of biometric images are described using the scale-invariant feature transform (SIFT) feature descriptor.
  • the spatial depth information of the feature points in the plurality of biometric images includes: spatial location information and color information.
  • the 3D model of the biometric includes at least one of the following 3D data:
  • before the biometric information is acquired using a camera matrix composed of multiple visible light cameras, the method further comprises arranging the multiple visible light cameras by:
  • a plurality of visible light cameras are disposed on the curved load bearing structure.
  • the support structure is a cabinet; the arc-shaped load-bearing structure is disposed in the cabinet, and the method further includes:
  • hand information is collected using the camera matrix composed of the plurality of visible light cameras disposed on the curved load-bearing structure.
  • it also includes:
  • the camera parameters of each camera are set through the display interface.
  • a multi-view stereo depth calculation method is used, which specifically includes:
  • the biometric information is collected by using a camera matrix composed of multiple visible light cameras to obtain a plurality of biometric images;
  • the image information of the plurality of biometric images is allocated to the blocks of the GPU for calculation and, combined with the centralized scheduling and allocation function of the CPU, the feature points of the plurality of biometric images are calculated.
  • the GPU is a dual GPU, and each GPU has multiple blocks.
  • the invention also provides a biometric 3D data identification method, comprising the following steps:
  • biometric 3D data (T1, T2...Tn) of the target organism are collected; the identity information (I1, I2...In) of the target organism is used to look up the biometric 3D data (D1, D2...Dn) stored in the database; and the biometric 3D data (T1, T2...Tn) of the target organism are respectively compared with the corresponding biometric 3D data (D1, D2...Dn) stored in the database to identify the identity of the target organism;
  • the comparison method comprises the following specific steps: performing feature point fitting based on a spatial direct matching method, selecting three or more feature points as matching key points in the corresponding rigid regions of the two point clouds, and directly matching the corresponding feature points through coordinate transformation; and, given initial coarse alignment conditions for the two point clouds, seeking the rigid transformation between them that minimizes the alignment error;
  • the least squares method is used to calculate the similarity.
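  • the least-squares rigid alignment described above can be sketched with the standard SVD (Kabsch) solution; this is one common realization of such point cloud fitting, not necessarily the patented one:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) minimizing
    ||R @ src_i + t - dst_i|| over corresponding (N, 3) point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def alignment_error(src, dst):
    """Root-mean-square alignment error after the best rigid fit."""
    R, t = rigid_align(src, dst)
    return np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
```

A small residual error then serves as the least-squares similarity measure between the two point clouds.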
  • step S01 further includes:
  • a 3D model of the biometric is constructed according to the feature point cloud data to implement biometric 3D data acquisition.
  • the step of generating feature point cloud data of the biometric based on the extracted feature points in the plurality of biometric images further includes:
  • the feature point cloud data of the biometric is generated according to the matched feature point data set and the spatial depth information of the feature point.
  • the feature of each feature point in the plurality of biometric images is described by using a scale invariant feature transform SIFT feature descriptor;
  • the relative position of each visible light camera with respect to the biometric features is calculated by the bundle adjustment method.
  • the spatial depth information of the feature points in the plurality of biometric images includes: spatial location information and color information.
  • the step of constructing a 3D model of the biometric according to the feature point cloud data further includes:
  • the 3D model of the biometric includes at least one of the following 3D data:
  • a plurality of visible light cameras are used to form a camera matrix to collect biometric information of the living body, and the camera matrix is arranged by:
  • a plurality of visible light cameras are disposed on the curved load bearing structure.
  • the living body is a human body
  • the identity information includes one or more of a name, a gender, an age, and a document number.
  • the document number includes one or more of an ID number, a passport number, a driver's license number, a social security number, or a military officer number.
  • the biometric information is head information, facial information, and/or iris information
  • the method further includes:
  • the head information, facial information, and/or iris information of the human body is collected using a camera matrix composed of a plurality of visible light cameras disposed on the curved load bearing structure.
  • a display is disposed on the curved carrying structure
  • the photographing parameters of each visible light camera are set through the display interface.
  • the embodiment of the invention provides a biometric 3D data recognition method and system based on visible light photographing.
  • the biometric information is collected using a camera matrix composed of multiple visible light cameras to obtain a plurality of biometric images; the plurality of biometric images are processed to extract the respective feature points in them; feature point cloud data of the biometric are then generated based on the extracted feature points; and a 3D model of the biometric is constructed from the feature point cloud data, enabling acquisition of biometric 3D data. It can be seen that the embodiment of the present invention uses multi-visible-light-camera control technology to collect biometric information, which can significantly improve the collection efficiency of biometric information.
  • the embodiment of the present invention utilizes the feature information of the biometric collected in space; completely restoring the spatial characteristics of the biometric provides unlimited possibilities for the subsequent application of biometric data.
  • during identification, the target's identity information is used to locate the stored 3D data, so the target person's data need not be compared one by one with the massive data in the database; this improves comparison efficiency and greatly increases identification speed. The point cloud fitting method based on direct spatial matching is used to fit the feature points, realizing fast fitting comparison of the biometric feature points and thereby identity recognition.
  • FIG. 1 is a flow chart showing a biometric 3D data recognition method based on visible light photographing according to an embodiment of the invention
  • FIG. 2 is a flow chart showing a biometric 3D data acquisition method based on visible light photographing according to an embodiment of the invention
  • FIG. 3 shows a schematic diagram of a 3D data recognition system for head information, face information, and/or iris information, in accordance with an embodiment of the present invention
  • FIG. 4 is a schematic diagram showing the internal modules and external connections of the load-bearing structure in the 3D data identification system shown in FIG. 3;
  • FIG. 5 is a schematic diagram showing the connection of a serial port integration module, a camera matrix, and a central processing module in the 3D data identification system shown in FIG. 3;
  • FIG. 6 shows a schematic diagram of a 3D data identification system device in accordance with another embodiment of the present invention.
  • FIG. 7 is a block diagram showing the structure of a 3D data collection device according to an embodiment of the present invention.
  • FIG. 8 is a block diagram showing the structure of a 3D data collection device according to another embodiment of the present invention.
  • FIG. 1 is a flow chart showing a biometric 3D data recognition method based on visible light photographing according to an embodiment of the present invention: biometric 3D data (T1, T2...Tn) of the target organism are collected; the identity information (I1, I2...In) of the target organism is used to look up the biometric 3D data (D1, D2...Dn) stored in the database; and the biometric 3D data (T1, T2...Tn) of the target organism are respectively compared with the corresponding biometric 3D data (D1, D2...Dn) stored in the database to identify the identity of the target organism.
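  • the lookup-then-compare flow (identity information used to fetch the stored data, then a single 1:1 comparison) can be sketched as follows; the in-memory store and field names are hypothetical:

```python
# Hypothetical enrolled store: document number -> stored biometric 3D data Dn.
database = {
    "ID-1001": {"name": "Alice", "point_cloud": "D1"},
    "ID-1002": {"name": "Bob", "point_cloud": "D2"},
}

def identify(identity_number, captured_3d_data, compare):
    """Fetch the one record matching the identity information, then run a
    single 1:1 comparison instead of scanning the whole database (1:N)."""
    record = database.get(identity_number)
    if record is None:
        return False  # no enrolled data for this identity
    return compare(captured_3d_data, record["point_cloud"])
```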
  • the collecting of the biometric information in step S01 may further include the following steps S102 to S108.
  • Step S102: collect biometric information using multiple visible light cameras to obtain a plurality of biometric images.
  • optionally, the multiple visible light cameras form a camera matrix to collect information of the living body;
  • Step S104: process the plurality of biometric images to extract the respective feature points in the plurality of biometric images.
  • Step S106: generate feature point cloud data of the biometric based on the respective feature points extracted from the plurality of biometric images.
  • Step S108: construct a 3D model of the biometric according to the feature point cloud data to implement collection of the biometric 3D data of the living body.
  • the collection of biometric information is performed by using multiple visible light camera control technologies, and the collection efficiency of the biometric information can be significantly improved.
  • the embodiment of the present invention utilizes the collected spatial feature information of the biometric to completely restore the various features of the biometric in space, providing unlimited possibilities for the subsequent application of biometric data.
  • alternatively, a single camera can be used for biometric information collection; in this case the camera is rotated one full turn along a predetermined track, realizing multi-angle shooting of the biometric information and obtaining the multiple biometric images.
  • the feature point cloud data of the biometric are generated in step S106 based on the respective feature points extracted from the plurality of biometric images, which may include the following steps S1061 to S1063.
  • Step S1061 Perform matching of feature points according to characteristics of respective feature points in the extracted plurality of biometric images, and establish a matching feature point data set.
  • Step S1062 calculating spatial relative positions of the cameras relative to the biometrics according to the optical information of the plurality of visible light cameras, and calculating spatial depth information of the feature points in the plurality of biometric images according to the relative positions.
  • Step S1063 Generate feature point cloud data of the biometric feature according to the matched feature point data set and the spatial depth information of the feature point.
  • the features of the respective feature points in the plurality of biometric images may be described by using a SIFT (Scale-Invariant Feature Transform) feature descriptor.
  • the SIFT feature descriptor has a 128-dimensional feature description vector, which describes any feature point in direction and scale across those 128 components, significantly improving the accuracy of feature description; the feature descriptor is also spatially independent.
  • the spatial relative position of each camera relative to the biometric feature is calculated according to the optical information of the plurality of visible light cameras.
  • the embodiment of the present invention provides an optional solution in which the relative position in space of each camera with respect to the biological features is determined from the optical information of the visible light cameras by the bundle adjustment method.
  • bundle adjustment is the process of extracting the coordinates of 3D points, the relative position of each camera, and its optical information from multi-view information.
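  • once the beam (bundle) adjustment step has provided each camera's pose, the spatial depth of a matched feature point can be recovered by linear triangulation; the DLT sketch below is a standard technique assumed for illustration, not taken from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates
    of the same feature point in the two images."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean 3D coordinates
```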
  • the spatial depth information of the feature points in the plurality of biometric images mentioned in step S1062 may include spatial position information and color information, that is: the X-axis, Y-axis, and Z-axis coordinates of the feature point at its spatial position, and the values of the R, G, B, and alpha channels of the feature point's color information, and so on.
  • the generated feature point cloud data includes spatial location information and color information of the feature points, and the format of the feature point cloud data can be as follows:
  • Xn represents the X-axis coordinate of the feature point at the spatial position
  • Yn represents the Y-axis coordinate of the feature point at the spatial position
  • Zn represents the Z-axis coordinate of the feature point at the spatial position
  • Rn represents the value of the R-channel of the color information of the feature point.
  • Gn represents the value of the G channel of the color information of the feature point
  • Bn represents the value of the B channel of the color information of the feature point
  • An represents the value of the alpha channel of the color information of the feature point.
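  • one convenient way to hold records in the (Xn, Yn, Zn, Rn, Gn, Bn, An) format listed above is a structured array; the field names and types below are illustrative assumptions:

```python
import numpy as np

# one record per feature point: spatial position plus RGBA colour
point_dtype = np.dtype([
    ("x", "f4"), ("y", "f4"), ("z", "f4"),  # spatial position
    ("r", "u1"), ("g", "u1"), ("b", "u1"),  # colour channels
    ("a", "u1"),                            # alpha channel
])

cloud = np.zeros(3, dtype=point_dtype)
cloud[0] = (0.1, 0.2, 0.3, 255, 128, 0, 255)
```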
  • the planar 2D biometric images plus the time dimension thus constitute a 3D biometric, completely restoring the various features of the biometric in space and providing unlimited possibilities for the subsequent application of biometric data.
  • the 3D model of the biometric is constructed according to the feature point cloud data in step S108 as follows: the reference size of the 3D model to be constructed is set; then, according to the reference size and the spatial position information in the feature point cloud data, the spatial size of each feature point in the feature point cloud data is determined, thereby constructing the 3D model of the biometric.
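  • the reference-size step can be sketched as a uniform rescaling of the feature point cloud so that its largest spatial extent equals the known reference dimension; the function and parameter names are assumptions:

```python
import numpy as np

def scale_to_reference(points, reference_size):
    """Uniformly scale an (N, 3) point cloud so its largest extent equals
    reference_size (e.g. a known dimension in millimetres)."""
    extent = points.max(axis=0) - points.min(axis=0)
    factor = reference_size / extent.max()
    return points * factor
```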
  • the 3D data may include spatial shape feature data describing the 3D model; the embodiment of the present invention is not limited in this respect.
  • the time at which the biometric information is collected by the multiple visible light cameras may also be recorded, so that a 3D model of the biometric with a time dimension is constructed according to the feature point cloud data and the time data, realizing the collection of biometric four-dimensional data. The four-dimensional data may be multiple 3D data sets at the same or different time intervals, different angles, different orientations, different forms of expression, and the like.
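  • a minimal sketch of such four-dimensional capture: each acquired point cloud is tagged with its acquisition time, giving a 3D model series with a time dimension. The Frame4D structure and names are hypothetical:

```python
import time
from dataclasses import dataclass

@dataclass
class Frame4D:
    """One time slice of 4D biometric data: a point cloud plus its timestamp."""
    timestamp: float
    points: list  # (x, y, z, r, g, b, a) tuples captured at this instant

def capture_sequence(grab_cloud, n_frames):
    """Record n_frames point clouds, each tagged with acquisition time;
    grab_cloud stands in for the camera-matrix acquisition call."""
    return [Frame4D(time.time(), grab_cloud()) for _ in range(n_frames)]
```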
  • a plurality of visible light cameras may be disposed; the method of arranging them may include the following steps S202 to S204.
  • Step S202 constructing a support structure, and setting an arc-shaped load-bearing structure on the support structure;
  • Step S204 placing a plurality of visible light cameras on the curved load bearing structure.
  • the embodiment of the present invention uses multiple visible light camera control technologies to collect biometric information, which can significantly improve the collection efficiency of biometric information. Also, a plurality of cameras are arranged to form a camera matrix on the curved load bearing structure.
  • the implementation of step S102 also differs for different biometric features, as will be described in detail below.
  • a base connected to the support structure may be built, with a seat for fixing the photographed position of the human body set on the base; when the person is seated on the seat, the head, face, and/or iris information is acquired using the camera matrix composed of multiple visible light cameras arranged on the curved load-bearing structure.
  • the display can also be placed on the curved load bearing structure; after the 3D model of the head face is constructed, the head face 3D data is visually displayed on the display.
  • camera parameters such as sensitivity, shutter speed, and zoom can be set through the display interface; the embodiment of the present invention is not limited to these, and parameters such as magnification and aperture may also be set.
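  • such per-camera shooting parameters might be modelled as a simple record pushed to every camera in the matrix; the fields and default values below are illustrative assumptions, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Shooting parameters exposed on the display interface (assumed names)."""
    iso: int = 400            # sensitivity
    shutter: float = 1 / 250  # shutter speed in seconds
    zoom: float = 1.0

def configure_matrix(n_cameras, params):
    """Return one parameter record per camera in the matrix."""
    return {f"camera_{i}": params for i in range(n_cameras)}
```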
  • the device may include:
  • the base 31 serves as the main bottom support structure of the entire device;
  • the seat 32 fixes the position of the human body and adjusts its height;
  • the support structure 33 connects the bottom of the device with the other body mechanisms;
  • the display 34 provides the operation interface for the device system;
  • the carrying structure 35 is the fixing structure for the cameras, central processing unit, and lights;
  • the camera matrix 36 collects facial information of the human head;
  • the strip fill light 37 provides supplemental lighting in addition to ambient light.
  • the base 31 is connected to the seat 32 through a connecting structure
  • the base 31 is connected to the support structure 33 through a mechanism connection structure
  • the support structure 33 is connected to the carrying structure 35 by a mechanical connection structure;
  • the display 34 is mechanically fixed to the carrying structure 35;
  • the camera matrix 36 is fixed on the carrying structure 35 by structural fixing;
  • the strip fill light 37 is fixed to the carrying structure 35 by means of structural fixing.
  • the internal module of the load bearing structure 35 can be composed of the following parts:
  • a power management module that is responsible for providing the various power supplies required for the entire system
  • the light management module can adjust the brightness of the light through the central processing module
  • the serial port integration module is responsible for two-way communication between the central processing module and the camera matrix
  • Central processing module responsible for system information processing, display, lighting, seat control;
  • Seat lift management module responsible for seat height adjustment
  • the display driver management module is responsible for the display driver of the display.
  • the power management module provides power to the camera matrix, the serial port integration module, the light management module, the central processing module, the display drive management module, and the seat lift management module;
  • the serial port integration module connects the camera matrix and the central processing module to realize two-way communication between them, as shown in FIG. 5;
  • the camera is connected to the serial port integration module in a single serial manner.
  • Serial port integration module is connected to the central processing module via USB interface
  • the central processing module realizes the visualization operation of the camera matrix through the customized development software interface
  • the camera interface parameters can be set on the operation interface.
  • the operation interface can realize the initialization operation of turning on the camera.
  • Operation interface can realize the command of camera image acquisition
  • the operation interface can realize the setting of camera image storage path
  • Operation interface can realize real-time camera browsing and camera switching
  • the light management module is connected to the power management module, the central processing module, and the external band fill light;
  • the seat lift management module is connected to the power management module, the central processing module and the external seat, and the central processing module realizes the up and down adjustment of the seat height through the visual interface;
  • the display driver management module is connected to the power management module, the central processing module, and an external display;
  • the central processing module is connected to the power management module, the light management module, the seat lift management module, the serial port integration module, and the display drive management module.
  • the matrix cameras are started and begin to collect information on the human head and face.
  • information collection is completed within 0.8 seconds, and the collected signals are finally transmitted in digital image format (.jpg) to the central processing module for processing. The core of the central processing module consists of the following parts:
  • C.1 CPU (Central Processing Unit): responsible for the transmission scheduling of the entire digital signal, task allocation, memory management, and some individual calculation processing;
  • C.2 GPU (Graphics Processing Unit): a special-model GPU with excellent image processing capability and efficient computing capability is selected;
  • C.3 DRAM (Dynamic Random Access Memory).
  • the signal collected by the matrix camera is transmitted to the central processing module for signal processing.
  • D.1 the information processing flow is as follows:
  • image filtering can be completed quickly with the support of certain algorithms.
  • all information handled by this device is in image format; combined with a GPU having excellent image processing capability, the content of the .jpg images can be evenly distributed to the blocks of the GPU. Since the device uses dual GPUs and each GPU has 56 blocks, the 18 .jpg images captured during acquisition are evenly distributed to 112 blocks for calculation; combined with the centralized scheduling and allocation functions of the CPU, the feature points of each photo can be calculated quickly. Compared with running on a CPU alone or on a CPU with other common GPU models, the overall operation time is 1/10 of the latter or less.
  • image feature points are extracted using the hierarchical structure of the pyramid and a spatially scale-invariant algorithm. These two algorithms, combined with the special structure of the GPU selected for the device, maximize the computing performance of the system and achieve fast extraction of the feature points in the image information.
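  • the even distribution of captured images across parallel compute units can be illustrated with a CPU analogue, using a thread pool in place of GPU blocks; the per-image extraction function is a stub standing in for the pyramid/SIFT kernel:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(image):
    # stand-in for the per-image feature-extraction kernel
    return f"features-of-{image}"

def process_images(images, workers=4):
    """Distribute the captured images evenly across workers, mirroring the
    device's allocation of 18 .jpg images to GPU blocks under CPU scheduling."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_features, images))  # preserves input order
```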
  • this process uses the SIFT feature descriptor.
  • the SIFT feature descriptor has a 128-dimensional feature description vector, which describes any feature point in direction and scale across those 128 components and significantly improves the accuracy of feature description.
  • the descriptor has spatial independence.
  • the special image processing GPU used in this device has excellent capabilities for calculating and processing individual vectors; SIFT feature vectors, with their 128 descriptor components, are therefore especially suited to processing on such a GPU. Taking advantage of this special computing power, the matching time of feature points is reduced by 70% compared with an ordinary CPU or a CPU with other common GPUs.
  • the system uses the bundle adjustment algorithm to calculate the position of each camera relative to the head and face; from the spatial coordinates of these relative positions, the GPU can quickly calculate the depth information of the head and face feature points.
  • once the depth information of the head and face feature points in space has been calculated, the vector computing capability of the GPU allows the spatial position and color information of the head and face feature point cloud to be matched quickly, forming the point cloud information required to establish a standard model.
  • the initial reference size for the entire model is set using a calibration standard of known feature point cloud size.
  • the calibration object has a spatially determined size, and since the head and face feature point cloud has spatially uniform dimensions, the size between any feature points of the head and face can be calculated from the spatial position coordinates of the point cloud once the calibration fixes the scale.
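  • once the calibration fixes the scale, the physical size between any two feature points reduces to a scaled Euclidean distance over their spatial coordinates; a trivial sketch (the millimetres-per-unit factor is an assumed parameter):

```python
import math

def feature_distance_mm(p, q, mm_per_unit):
    """Physical distance between feature points p and q, each (x, y, z) in
    model units, after calibration has fixed mm_per_unit."""
    return math.dist(p, q) * mm_per_unit
```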
  • the 3D data comprises files in the following formats:
  • .mtl - describes the surface material and lighting characteristics of the 3D model
  • Head face 3D data is displayed on the display by a visual method.
  • A. The human hand visible-light-camera 3D data acquisition device can include:
  • the cabinet 61 serves as the main body support structure of the entire device;
  • the camera matrix 62 collects human hand features, which may specifically be finger features and/or palm features;
  • the transparent glass cover plate 63 is the placement surface for the human hand;
  • the central control module 64 is the system information processing, analysis, and display module;
  • the hand virtual position 65 indicates the placement position of the human hand;
  • the shadowless lighting system 66 provides the lighting environment for hand 3D modeling.
  • the cabinet 61 is connected to the camera matrix 62 by mechanical fixing;
  • the cabinet 61 is connected to the transparent glass cover 63 by means of mechanical structure fixing;
  • the central control module 64 is connected to the cabinet 61 by mechanical fixing;
  • the shadowless lighting system 66 is connected to the cabinet 61 by mechanical fixing;
  • the hand virtual position 65 ensures that the human hand as a whole falls within the range of information acquisition of the camera matrix 62.
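  • the virtual-position guarantee, that the whole hand falls inside the camera matrix's collection range, can be sketched as a bounding-box containment test; coordinates and names are assumptions:

```python
def within_capture_range(hand_bbox, capture_bbox):
    """True if the hand bounding box (xmin, ymin, xmax, ymax) lies entirely
    inside the camera matrix's information-collection range."""
    hx0, hy0, hx1, hy1 = hand_bbox
    cx0, cy0, cx1, cy1 = capture_bbox
    return hx0 >= cx0 and hy0 >= cy0 and hx1 <= cx1 and hy1 <= cy1
```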
• The entire central control module consists of the following components:
• Power management module: provides power to the entire device;
• Serial port integration module: responsible for command and data transfer between the camera matrix and the central processing module;
• Lighting management module: responsible for managing the external shadowless lighting system;
• Display driver module: responsible for managing the display module.
  • B.1 power management module provides power to the camera matrix, serial port integration module, central processing module, lighting management module, display driver module, display module;
  • the B.2 serial port integration module realizes two-way communication between the camera matrix and the central processing module
  • the lighting management module provides power to the shadowless lighting system and is responsible for adjusting the parameters of the lighting system
  • B.4 central processing module is connected to the serial port integration module, the light management module, the display driver module and the power management module;
  • the display driver module is connected to the central processing module and the display module.
• Start device: After turning on the power switch, the power management module is started to supply power to each module of the system; the camera matrix, central control module, shadowless lighting system and display module are started at the same time.
• D. Information collection: After the parameters are set, the camera matrix is started to collect information on the hand, and the collected information is transmitted to the central control module for analysis and processing in the form of pictures.
• E. Information processing: The signals collected by the camera matrix are transmitted to the central control module for signal processing.
• The information processing procedure is as follows:
• The fingertip fingerprint carries a uniquely identifying biometric feature. Therefore, after the feature points of the fingers are collected, the non-fingertip information first needs to be filtered out algorithmically. The overall idea of the algorithm is as follows:
• D.1.1 Establish a library file of the patterns of the fingertip and second finger joint, and a feature library of the finger joint patterns;
• Hand 3D data display: The 3D data of the hand is displayed visually on the display.
  • D.2, D.3, and D.4 can be referred to the foregoing introduction, and will not be described again here.
• The biometric 3D data collected in step S01 is stored, using the identity information (I1, I2...In) of each organism as an identification mark, to form a database containing a plurality of biometric 3D data (D1, D2...Dn). For example, 3D data D1 is stored with the identity information I1 of one organism as its file name, 3D data D2 of another organism is stored with that organism's identity information as its file name, and so on, forming a database containing n items of biometric 3D data.
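The identity-keyed storage scheme above amounts to a simple keyed lookup structure. A minimal sketch is shown below; the class name and the use of an in-memory dictionary (rather than identity-named files on disk, as the text describes) are illustrative assumptions.

```python
import numpy as np

class Biometric3DDatabase:
    """Store biometric 3D data keyed by identity information,
    mirroring the (I1...In) -> (D1...Dn) scheme described above."""

    def __init__(self):
        self._records = {}

    def store(self, identity, data):
        # identity: an identification mark such as a name or document
        # number, reduced here to a single string key for illustration.
        self._records[identity] = data

    def lookup(self, identity):
        # Returns the stored 3D data, or None if the identity is unknown.
        return self._records.get(identity)

db = Biometric3DDatabase()
db.store("I1", np.zeros((100, 6)))   # D1: 100 points, XYZ + RGB per row
db.store("I2", np.ones((80, 6)))     # D2 for another organism
```

Looking up by identity first, then comparing only against the retrieved model, is what lets the later recognition step avoid a full scan of all n entries.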
• The identity information I includes, but is not limited to, one or more of a person's name, gender, age, and document number; the document number may include one or more of the numbers a person commonly uses in daily life, for example an ID number, passport number, driver's license number, social security number, or military officer number.
• The biometric 3D data (T1, T2...Tn) of the target organism (i.e., the organism to be identified) is compared, using the Tianmu point cloud comparison method, with the biometric 3D data (D1, D2...Dn) stored in the database, to identify the identity of the target organism.
• The Tianmu point cloud comparison identification method includes the following steps:
  • the Tianmu point cloud comparison identification method further comprises the following specific steps:
• Feature point fitting is performed by a method based on direct matching in the spatial domain: three or more feature points are selected as matching key points in the corresponding rigid regions of the two point clouds, and the feature points are directly matched by coordinate transformation;
  • the least squares method is used to calculate the similarity.
  • the point cloud is a basic element constituting a 3D model, which includes spatial coordinate information (XYZ) and color information (RGB).
  • Point cloud attributes include spatial resolution, point precision, surface normal vectors, and more. Its characteristics are not affected by external conditions and will not change for translation and rotation.
• Reverse-engineering software such as Imageware, Geomagic, CATIA, CopyCAD and Rapidform enables the editing and processing of point clouds.
• In the Tianmu point cloud comparison method, the method of direct matching in the spatial domain includes the iterative closest point (ICP) method. The ICP method is usually divided into two steps: the first step is feature point fitting, and the second step is the best fit of the overall surface. The purpose of fitting the feature points first is to align the two point clouds for the best fit in the shortest time. However, the method is not limited to this; for example, it can be:
  • three or more feature points are selected as the matching key points in the corresponding rigid regions of the two point clouds, and the feature points are directly matched by coordinate transformation.
• ICP is used for registration of curves or surface segments and is a very effective tool in 3D data reconstruction. Given a rough initial alignment of two 3D models, ICP iteratively seeks the rigid transformation between the two that minimizes the alignment error, achieving registration of their spatial geometric relationship.
• The ICP registration technique iteratively finds closest-point correspondences, establishes the transformation matrix, and applies the transformation to one of the models, repeating until the convergence condition is reached and the iteration stops.
  • the coding is as follows:
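A minimal point-to-point ICP loop of the kind described above can be sketched as follows. This is an illustrative sketch only, not the original listing: the nearest-neighbour matching strategy, the SVD-based (Kabsch) rigid-transform solution, and all names are assumptions.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    solved in closed form via SVD (the Kabsch solution)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Iterate: match each src point to its nearest dst point, solve
    the rigid transform, apply it, and repeat; return the aligned
    cloud and the final RMS registration error."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    rms = np.sqrt(((cur - matched) ** 2).sum(axis=1).mean())
    return cur, rms

# Demo: recover a small rigid offset between two copies of a cloud.
g = np.arange(4.0)
dst = np.array([[x, y, z] for x in g for y in g for z in g])  # 4x4x4 lattice
theta = 0.01
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
src = dst @ R.T + np.array([0.01, -0.02, 0.005])
aligned, rms = icp(src, dst)
```

The registration error `rms` is exactly the root-mean-square difference the later similarity step uses as its metric.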
  • the face model is mainly divided into a rigid model part and a plastic model part, and the plastic deformation affects the accuracy of the alignment, thereby affecting the similarity.
  • One solution is to select feature points only in the rigid region.
  • the feature points are attributes extracted from an object and remain stable under certain conditions.
• A commonly used method is iterative closest point (ICP) fitting of the feature points for alignment.
  • the human hand knuckles are rigid areas, the palm part is a plastic area, and the feature points are selected optimally in the finger area.
  • the iris is a rigid model.
  • the similarity of the input model is calculated by aligning two 3D biometric model point clouds, wherein the registration error is used as a difference metric.
• Step 2: After the feature points are best fitted, the point cloud data is aligned by a best fit of the overall surface.
• The third step is the similarity calculation, using least squares.
• The least squares method (also known as the method of least squares) is a mathematical optimization technique. It finds the best functional fit to the data by minimizing the sum of the squares of the errors.
• The least squares method can easily estimate unknown parameters such that the sum of the squares of the errors between the estimated data and the actual data is minimized.
  • the least squares method can also be used for curve fitting.
• Other optimization problems can also be expressed in least squares form by minimizing energy or maximizing entropy. The method is often used to solve curve fitting problems and complete surface fitting; iterative algorithms can speed up convergence and quickly find the optimal solution.
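As a concrete instance of the least squares principle described above, the sketch below fits a line y = a*x + b by minimizing the sum of squared errors; the closed-form solution comes from `numpy.linalg.lstsq`. The data values are made up for illustration.

```python
import numpy as np

# Sample data lying exactly on y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Design matrix for the model y = a*x + b.
A = np.column_stack([x, np.ones_like(x)])

# Solve for (a, b) minimizing ||A @ [a, b] - y||^2.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coeffs
# a ≈ 2.0, b ≈ 1.0; with noisy data the same call returns the
# parameters with the minimal sum of squared residuals.
```

The similarity step uses the same machinery at larger scale: the residual being minimized is the set of distances between corresponding points of the two aligned models.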
• The deviation is determined by calculating the distance between the point cloud and the triangles; the method therefore requires a plane equation for each triangular patch, the deviation being the distance from the point to that plane.
• When the 3D data model is an IGES or STEP model, the free-form surfaces are expressed as NURBS surfaces, and the point-to-surface distance must be computed by a numerical optimization method.
  • the deviation is expressed by iteratively calculating the minimum distance from each point in the point cloud to the NURBS surface, or the NURBS surface is discretized at a specified scale, and the point deviation is approximated by the point-to-point distance, or converted into an STL format for deviation calculation.
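The point-to-point approximation mentioned above (discretize the surface, then take the distance to the nearest sample) can be sketched directly. The grid resolution and test points below are illustrative assumptions; a real NURBS surface would be sampled with a CAD kernel rather than a plane.

```python
import numpy as np

def deviation(points, surface_samples):
    """Approximate each measured point's deviation from a surface by
    its distance to the nearest sample of the discretized surface."""
    d = np.linalg.norm(points[:, None, :] - surface_samples[None, :, :], axis=2)
    return d.min(axis=1)

# Discretize the plane z = 0 over [0, 1] x [0, 1] at a 0.01 pitch.
g = np.linspace(0.0, 1.0, 101)
gx, gy = np.meshgrid(g, g)
plane = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])

# Two measured points, 0.2 above and 0.05 below the surface.
devs = deviation(np.array([[0.5, 0.5, 0.2],
                           [0.25, 0.25, -0.05]]), plane)
# devs ≈ [0.2, 0.05]; the finer the discretization, the closer the
# approximation is to the true point-to-surface distance.
```

The same routine also covers the STL-conversion path: an STL mesh's vertices serve as the surface samples.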
  • Different coordinate alignment and deviation calculation methods have different detection results. The size of the alignment error will directly affect the accuracy of the detection and the credibility of the evaluation report.
• Best-fit alignment averages the deviation over the whole model, terminating the iterative alignment process when the overall deviation reaches its minimum; 3D analysis is then performed on the registration result, and a result object is generated in the form of the root mean square of the error between the two figures.
• In the output, the larger the root mean square, the greater the difference between the two models, and vice versa. Based on the degree of coincidence, it is judged whether the compared model is the target.
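The RMS-based decision described above reduces to a threshold test on the registration error. The sketch below is illustrative: the threshold value is a made-up placeholder, and in practice it would be tuned on enrolled data.

```python
import numpy as np

def rms_error(model_a, model_b):
    """Root mean square of the per-point error between two aligned
    point clouds; smaller means more similar."""
    return np.sqrt(((model_a - model_b) ** 2).sum(axis=1).mean())

def is_same_subject(model_a, model_b, threshold=0.5):
    # threshold is an illustrative value, not from the original text.
    return rms_error(model_a, model_b) < threshold

a = np.zeros((10, 3))
b = a + 0.01   # a nearly identical model: small RMS, accepted
c = a + 2.0    # a clearly different model: large RMS, rejected
```

Both clouds are assumed to be already aligned by the best-fit step; the RMS is meaningful as a similarity metric only after registration.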
  • the invention also provides a biometric 3D data recognition system based on visible light photographing, which comprises the following devices:
  • the biometric information collecting device is configured to collect a plurality of biometric images of the living body, and construct a 3D model of the biometrics according to the plurality of biometric images to realize biometric 3D data collection of the living body;
• The biometric 3D data storage device is configured to store the collected biometric 3D data, using the identity information (I1, I2...In) of the living body as an identification mark, to form a database of a plurality of biometric 3D data (D1, D2...Dn);
• An identification device for the target organism, configured to collect biometric 3D data (T1, T2...Tn) of the target organism, use the identity information (I1, I2...In) of the target organism to find the biometric 3D data (D1, D2...Dn) stored in the database, and compare the biometric 3D data (T1, T2...Tn) of the target organism with the corresponding biometric 3D data (D1, D2...Dn) stored in the database, to identify the identity of the target organism.
  • the biometric information collecting device comprises:
  • An image acquisition unit is configured to collect biometric information by using a camera matrix composed of a plurality of cameras to obtain a plurality of biometric images;
  • a feature point extracting unit configured to process a plurality of biometric images to extract respective feature points in the plurality of biometric images
  • a point cloud generating unit configured to generate feature point cloud data of the biometric based on the respective feature points in the extracted plurality of biometric images
  • the 3D model building unit is configured to construct a 3D model of the biometric feature according to the feature point cloud data to implement the collection of the biometric 3D data.
• The biometric features in the embodiments of the present invention are not limited to the above-mentioned head, face, iris and/or hand, and may include other biological features, such as the foot, without limitation.
  • biometric 3D data recognition method and system based on visible light photographing provided by the embodiments of the present invention are further described below by using specific embodiments.
• The biometric 3D data acquisition and recognition system of the present invention may share one system between acquisition and recognition, or may adopt two separate sets of systems.
• A 3D data identification system for head information, face information, and/or iris information is shown in FIG. 3, and the system may include:
• Base 31: serves as the main bottom support structure for the entire device;
• Seat 32: fixes the position of the human body and adjusts its height;
• Support structure 33: connects the base of the device to the other body mechanisms;
• Display 34: the operation interface of the device system;
• Carrying structure 35: the fixed mounting structure for the cameras, central processing unit and lights;
• Camera matrix 36: 3D data acquisition of human head information, facial information and/or iris information;
• Strip fill light 37: provides supplementary ambient lighting.
  • the base 31 is connected to the seat 32 through a connecting structure
  • the base 31 is connected to the support structure 33 through a mechanism connection structure
• the support structure 33 is connected to the carrying structure 35 by a mechanical connection structure;
  • the display 34 is mechanically fixed to the carrying structure 35;
  • the camera matrix 36 is fixed on the carrying structure 35 by structural fixing;
• the strip fill light 37 is fixed to the carrying structure 35 by means of structural fixing.
  • the internal module of the load bearing structure 35 can be composed of the following parts:
  • a power management module that is responsible for providing the various power supplies required for the entire system
  • the light management module can adjust the brightness of the light through the central processing module
  • the serial port integration module is responsible for two-way communication between the central processing module and the camera matrix
  • Central processing module responsible for system information processing, display, lighting, seat control;
• The central processing module further includes an identification module for identifying the target organism: the biometric 3D data (T1, T2...Tn) of the target organism is first paired with the biometric 3D data (D1, D2...Dn) stored in the corresponding database, and the Tianmu point cloud comparison method is then used to compare them and identify the identity of the target organism;
  • Seat lift management module responsible for seat height adjustment
  • the display driver management module is responsible for the display driver of the display.
  • the power management module provides power to the camera matrix, the serial port integration module, the light management module, the central processing module, the display drive management module, and the seat lift management module;
  • the serial port integration module connects the camera matrix and the central processing module to realize two-way communication between them, as shown in FIG. 5;
  • the camera is connected to the serial port integration module in a single serial manner.
  • Serial port integration module is connected to the central processing module via USB interface
  • the central processing module realizes the visualization operation of the camera matrix through the customized development software interface
  • the camera interface parameters can be set on the operation interface.
  • the operation interface can realize the initialization operation of turning on the camera.
  • Operation interface can realize the command of camera image acquisition
  • the operation interface can realize the setting of camera image storage path
  • Operation interface can realize real-time camera browsing and camera switching
  • the light management module is connected to the power management module, the central processing module, and the external band fill light;
  • the seat lift management module is connected to the power management module, the central processing module and the external seat, and the central processing module realizes the up and down adjustment of the seat height through the visual interface;
  • the display driver management module is connected to the power management module, the central processing module, and an external display;
  • the central processing module is connected to the power management module, the light management module, the seat lift management module, the serial port integration module, and the display drive management module.
• The device is used as follows:
• The matrix camera is started and begins to collect information on the human head and face.
• Information collection is completed within 0.8 seconds, and the collected signals are finally transmitted in digital image (.jpg) format to the central processing module for processing. The core of the central processing module consists of the following parts:
  • CPU Central Processing Unit: responsible for the transmission scheduling of the entire digital signal, task allocation, memory management, and some single calculation processing;
  • C.2 GPU Graphics Processing Unit: Selects a special model GPU with excellent image processing capabilities and efficient computing capabilities.
  • C.3 DRAM Dynamic Random Access Memory
  • the signal collected by the matrix camera is transmitted to the central processing module for signal processing.
  • D.1 information processing process is as follows
  • image filtering can be completed quickly with the support of certain algorithms.
• All information in this device is in image format. Combined with a GPU with excellent image processing capability, the content of the jpg images can be evenly distributed across the GPU's blocks. Since the device uses dual GPUs with 56 blocks each, the 18 jpg images captured during acquisition are evenly distributed to 112 blocks for calculation; combined with the centralized scheduling and allocation functions of the CPU, the feature points of each photo can be calculated quickly. Compared with operation on a CPU alone, or a CPU with other common GPU models, the overall computation time is 1/10 of the latter or less.
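The even distribution of work over the 112 blocks can be sketched as a round-robin partition. The numbers (18 images, dual GPUs of 56 blocks) come from the text; splitting each image into 56 tiles so every block receives work is an illustrative assumption about the scheduling, not the original scheme.

```python
def distribute(n_images, tiles_per_image, n_blocks):
    """Split each image into tiles and deal the (image, tile) work
    items round-robin over the available compute blocks, so the load
    is spread as evenly as possible."""
    work = [(img, t) for img in range(n_images) for t in range(tiles_per_image)]
    assignment = {b: [] for b in range(n_blocks)}
    for i, item in enumerate(work):
        assignment[i % n_blocks].append(item)
    return assignment

# 18 images x 56 tiles = 1008 work items over 2 GPUs x 56 = 112 blocks.
blocks = distribute(18, 56, 112)
# Every block ends up with exactly 9 work items.
```

On real hardware this partitioning would be expressed through the GPU runtime's grid/block launch parameters; the dictionary here only illustrates the balancing idea.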
• Image feature points are extracted using the hierarchical pyramid structure and a spatial scale-invariance algorithm. These two algorithms, combined with the architecture of the GPU selected for the device, maximize the computing performance of the system and achieve fast extraction of the feature points in the image information.
• This process uses the SIFT feature descriptor.
• The SIFT descriptor is a 128-dimensional feature vector that describes any feature point in terms of direction and scale, significantly improving the accuracy of the feature description.
  • the descriptor has spatial independence.
• The dedicated image-processing GPU used in this device has excellent vector computation capability, making it well suited to processing 128-dimensional SIFT feature vectors. By exploiting this computing power, the feature point matching time is reduced by 70% compared with an ordinary CPU, or a CPU with other common GPU models.
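The matching of 128-dimensional descriptors described above is, at its core, a nearest-neighbour search with an acceptance criterion. The sketch below uses synthetic descriptors and Lowe's ratio test (an assumption; the text does not specify the acceptance rule), with plain numpy standing in for the GPU-parallel distance computation.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching of 128-D descriptors with a ratio
    test: accept a match only when the best distance is clearly
    smaller than the second-best distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Synthetic demo: descriptors in image A are noisy copies of the
# first ten descriptors in image B.
rng = np.random.default_rng(1)
desc_b = rng.random((40, 128))
desc_a = desc_b[:10] + rng.normal(0.0, 0.01, (10, 128))
pairs = match_descriptors(desc_a, desc_b)
# Each noisy descriptor matches its original: (0, 0), (1, 1), ...
```

On a GPU the inner distance computation is what parallelizes: each block can score one query descriptor against the whole candidate set independently.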
• The system uses the bundle adjustment algorithm to calculate the position of each camera relative to the head and face. From the spatial coordinates of these relative positions, the GPU can quickly calculate the depth information of the head and face feature points.
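Once the camera positions are known, the depth of a matched feature point follows by triangulation. The linear (DLT) two-view sketch below is illustrative: the camera placement is synthetic, and a full pipeline would triangulate from all cameras that see the point, then refine via bundle adjustment.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point seen in two cameras.

    P1, P2: 3x4 projection matrices (from the recovered camera poses).
    x1, x2: the matched 2-D feature coordinates in each image.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]             # back to inhomogeneous coordinates

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two cameras: one at the origin, one shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 4.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X = triangulate(P1, P2, x1, x2)     # recovers X_true, depth included
```

The recovered Z component of `X` is exactly the "depth information" the text refers to; with many cameras the extra rows of `A` make the solve overdetermined and more robust.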
• Once the depth information of the head and face feature points in space is calculated, the vector computing capability of the GPU allows the spatial position and color information of the head and face feature points to be quickly matched, forming point cloud information that meets the requirements for building a standard model.
• An initial reference size for the entire model is set according to the scale of the feature point cloud.
• A dedicated calibration object has a known spatial size, and since the head and face feature point cloud has a spatially uniform scale, the distance between any two feature points of the head and face can be calculated from the spatial position coordinates of the point cloud once the scale is fixed by the calibration object.
• The 3D data is stored using the following file types:
• .mtl - describes the surface material and lighting characteristics of the 3D model
  • Head face 3D data is displayed on the display by a visual method.
• The identification module in the central processing module finds the biometric 3D data (D1, D2...Dn) stored in the database according to the identity information (I1, I2...In) of the target organism, compares the biometric 3D data (T1, T2...Tn) of the target organism with the corresponding stored biometric 3D data (D1, D2...Dn) to identify the identity of the target organism, and outputs the recognition result on the display.
  • the embodiment of the present invention further provides a visible light camera based biometric 3D data collecting device.
• The seat 32 may be omitted. As shown in FIG. 6, the device includes support base 61, identification device 62, control and display device 63, camera matrix 64, arc-shaped load-bearing mechanism 65, and arc-shaped fill light 66; during collection and identification, the human body stands in the U-shaped area enclosed by the device.
  • FIG. 7 is a block diagram showing the structure of a biometric 3D data acquisition device based on visible light photographing according to an embodiment of the invention.
  • the apparatus may include an image acquisition unit 910, a feature point extraction unit 920, a point cloud generation unit 930, and a 3D model construction unit 940.
  • the image acquisition unit 910 is configured to collect biometric information by using a camera matrix composed of multiple visible light cameras to obtain a plurality of biometric images.
  • the feature point extraction unit 920 is coupled to the image acquisition unit 910 for processing a plurality of biometric images to extract respective feature points in the plurality of biometric images;
  • the point cloud generating unit 930 is coupled to the feature point extracting unit 920, and configured to generate feature point cloud data of the biometric feature based on the respective feature points in the extracted plurality of biometric images;
  • the 3D model building unit 940 is coupled to the point cloud generating unit 930 for constructing a 3D model of the biometric according to the feature point cloud data to implement biometric 3D data collection.
  • the point cloud generating unit 930 is further configured to:
  • the feature point cloud data of the biometric is generated according to the matched feature point data set and the spatial depth information of the feature point.
  • the features of the respective feature points in the plurality of biometric images are described using scale invariant feature transform SIFT feature descriptors.
  • the point cloud generating unit 930 is further configured to:
• The relative position of each camera with respect to the biological features in space is calculated by the bundle adjustment method.
  • the spatial depth information of the feature points in the plurality of biometric images comprises: spatial location information and color information.
  • the foregoing 3D model building unit 940 is further configured to:
  • the spatial size of each feature point in the feature point cloud data is determined, thereby constructing a 3D model of the biometric.
  • the 3D model of the biometric includes at least one of the following 3D data:
  • the apparatus shown in FIG. 7 above may further include:
• The camera matrix layout unit 1010 is coupled to the image acquisition unit 910 and arranges the multiple visible light cameras before the image acquisition unit 910 acquires the biometric information using the camera matrix composed of those cameras:
  • a plurality of visible light cameras are arranged on the curved load bearing structure.
  • a plurality of cameras are arranged to form a camera matrix on the curved load bearing structure.
  • the image capturing unit 910 is further configured to:
  • the head face information is acquired using a camera matrix composed of a plurality of visible light cameras disposed on the curved load bearing structure.
  • the apparatus shown in FIG. 7 above may further include:
  • the first display unit 1020 is coupled to the 3D model building unit 940 for setting a display on the curved carrying structure; after constructing the 3D model of the head face, visually displaying the head face 3D data on the display.
  • the image collecting unit 910 is further configured to:
  • the camera parameters of each camera are set through the display interface.
  • the embodiment of the invention provides a biometric 3D data recognition method and system based on visible light photographing.
• The biometric information is collected using a camera matrix composed of multiple visible light cameras to obtain a plurality of biometric images; the plurality of biometric images are processed to extract their respective feature points; feature point cloud data of the biometrics is then generated based on the extracted feature points; and finally a 3D model of the biometrics is constructed from the feature point cloud data to enable acquisition of biometric 3D data.
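The four stages above (acquire, extract, generate point cloud, build model) mirror units 910 through 940. The sketch below shows only the data flow; every stage body is a purely illustrative stand-in (random images, brightest-pixel "features", zero depth), not the patent's actual algorithms.

```python
import numpy as np

def acquire_images(n_cameras=18):
    """Stand-in for the camera matrix (unit 910): one image per camera."""
    rng = np.random.default_rng(2)
    return [rng.random((8, 8)) for _ in range(n_cameras)]

def extract_feature_points(image):
    """Stand-in for feature extraction (unit 920): the five brightest
    pixels play the role of detected feature points."""
    ys, xs = np.unravel_index(np.argsort(image, axis=None)[-5:], image.shape)
    return np.column_stack([xs, ys])

def build_point_cloud(feature_sets):
    """Stand-in for matching + depth recovery (unit 930): lift the
    2-D points to 3-D with a placeholder depth of zero."""
    pts = np.vstack(feature_sets).astype(float)
    return np.column_stack([pts, np.zeros(len(pts))])

def build_3d_model(cloud):
    """Stand-in for the 3D model construction (unit 940)."""
    return {"points": cloud, "n_points": len(cloud)}

images = acquire_images()
features = [extract_feature_points(im) for im in images]
model = build_3d_model(build_point_cloud(features))
```

The value of the sketch is the interface between stages: each unit consumes exactly what the previous one produces, which is how the corresponding hardware units 910-940 are coupled in FIG. 7.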
  • the embodiment of the present invention uses multiple visible light camera control technologies to collect biometric information, which can significantly improve the collection efficiency of biometric information.
• The embodiment of the present invention utilizes the biometric feature information collected in space to completely restore the spatial characteristics of the biometrics, providing unlimited possibilities for subsequent applications of biometric data.
  • the embodiment of the present invention can realize the processing of the feature information and the generation of the point cloud quickly and efficiently based on the parallel computing of the central processing unit and the graphics processor. Moreover, using the scale-invariant feature transform SIFT feature descriptor combined with the parallel computing power of the special graphics processor, the matching of feature points and the generation of spatial feature point clouds can be quickly realized.
  • the unique size calibration method can accurately and quickly extract the spatial size of any feature points of biometrics, and generate 3D models of biometrics to achieve 3D data acquisition.
  • modules in the devices of the embodiments can be adaptively changed and placed in one or more devices different from the embodiment.
  • the modules or units or components of the embodiments may be combined into one module or unit or component, and further they may be divided into a plurality of sub-modules or sub-units or sub-components.
• Any combination may be made of the features disclosed in this specification (including the accompanying claims, abstract and drawings), and of all processes or units of any method or device so disclosed.
  • Each feature disclosed in this specification (including the accompanying claims, the abstract and the drawings) may be replaced by alternative features that provide the same, equivalent or similar purpose.
  • the various component embodiments of the present invention may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
• In practice, a microprocessor or digital signal processor may be used to implement some or all of the functions of some or all of the components of a visible-light-camera-based biometric 3D data acquisition device in accordance with embodiments of the present invention.
  • the invention can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • Such a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

Abstract

The present invention provides a biological feature 3D data acquisition method and a biological feature 3D data recognition method. According to the acquisition method, multiple biological feature images of an organism are acquired by means of a camera, and a biological feature 3D model is built from those images, so as to implement biological feature 3D data acquisition for the organism; the identity information of the organism is used as an identifier to form a database comprising multiple pieces of biological feature 3D data; and the biological feature 3D data stored in the database is found using the identity information of a target organism, and point cloud comparison is performed to identify the target organism. Also provided is a biological feature 3D data recognition system based on visible light photography. The present invention improves the efficiency of biological feature information acquisition and recognition; the biological features in space are completely recovered using the acquired biological feature information, providing great possibilities for applications such as identification.

Description

Biometric 3D data acquisition method and biometric 3D data recognition method

Technical field
The invention relates to the field of biometric identification technology, in particular to a biometric 3D data acquisition method and a recognition method.
Background
A biological characteristic is an inherent physiological or behavioral characteristic of a living being, such as a fingerprint, a palm print, an iris, or a human face. Biometrics have a certain uniqueness and stability: the difference between a given biological characteristic of any two organisms is relatively large, and biological characteristics generally do not change greatly over time, which makes biometrics suitable for applications such as authentication information in an authentication or identification system.
Current biometric data are 2D data in a spatial plane. Taking the biometrics of the head and face as an example, applications of head and face data are limited to simple picture applications; that is, the head and face data can only be processed, recognized and otherwise applied from one particular angle. Taking the biological characteristics of the hand as another example, 2D methods are mainly used to identify the features of one or several parts of the hand; some criminals imitate 2D hand features from captured 2D pictures and deceive some identification systems, creating a great security risk for personal information.
Therefore, there is an urgent need for 3D data recognition of biometrics, to improve security and to support subsequent applications.
Summary of the invention
In view of the above problems, the present invention has been made in order to provide a visible-light-photographing-based biometric 3D data recognition method that overcomes the above problems or at least partially solves them.
本发明提出了一种生物特征3D数据采集方法,包括:The invention provides a biometric 3D data acquisition method, comprising:
A. Starting the device: after the power switch is turned on, the power management module is started to supply power to each module of the system, and the camera matrix, the central control module, the shadowless lighting system, and the display module are started simultaneously;
B. Placing the human hand: the hand is placed on a transparent glass cover plate, and its position is adjusted so that all of the hand's information falls within the information acquisition area; because a shadowless lighting system is used, the hand information acquired from every angle is free of shadows. The device includes a virtual hand position that indicates where the hand should be placed, ensuring that the whole hand falls within the acquisition range of the camera matrix;
C. Setting parameters: the photographing parameters of the camera matrix can be set through the display interface;
D. Acquiring information: once the parameters are set, the camera matrix is started to acquire information about the hand, and the acquired information is transmitted, in the form of images, to the central control module for analysis and processing; the biometric information is acquired using a camera matrix composed of multiple visible light cameras, yielding multiple biometric images;
processing the multiple biometric images to extract the respective feature points of the multiple biometric images;
E. Processing information: the signals acquired by the camera matrix are transmitted to the central control module for signal processing, and feature point cloud data of the biometric feature are generated based on the extracted feature points of the multiple biometric images, including: matching feature points according to the characteristics of the extracted feature points of the multiple biometric images, and establishing a matched feature point data set; calculating, from the optical information of the multiple visible light cameras and using bundle adjustment, the spatial position of each camera relative to the biometric feature, and calculating the spatial depth information of the feature points in the multiple biometric images from those relative positions; and generating the feature point cloud data of the biometric feature from the matched feature point data set and the spatial depth information of the feature points;
the processing includes:
E.1 Filtering the acquired images
Since the main feature acquisition points of the human hand are concentrated at the fingertips, and fingertip fingerprints are uniquely identifying biometric features, after the finger feature points are acquired the non-fingertip information must first be filtered out algorithmically. The overall approach of the algorithm is as follows:
E.1.1 building a library file of the joint creases of the fingertip and the second finger segment, and a feature library of finger joint creases;
E.1.2 importing the feature library and performing feature recognition on the information acquired from the fingers;
E.1.3 after feature recognition, computing over the feature regions to determine the extent of the fingertip feature region;
E.1.4 segmenting the image into the feature region and the non-finger feature region;
E.1.5 removing the information of the non-finger feature region from the original image;
E.1.6 applying further filtering to the information of the new feature region;
E.2 extracting feature points from the acquired images;
E.3 matching the acquired images and calculating spatial depth information;
E.4 generating the feature point cloud data;
constructing a 3D model of the biometric feature from the feature point cloud data, so as to accomplish the acquisition of biometric 3D data, including: setting a reference size for the 3D model to be constructed; and determining, from the reference size and the spatial position information of the feature point cloud data, the spatial dimensions of each feature point in the feature point cloud data, thereby constructing the 3D model of the biometric feature;
recording the time data at which the multiple visible light cameras acquire the biometric information, so as to construct, from the feature point cloud data and the time data, a 3D model of the biometric feature having a time dimension, thereby accomplishing the acquisition of four-dimensional biometric data.
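Steps E.1.4 and E.1.5 above segment the image and remove the non-fingertip region. A minimal numpy sketch of that masking step, assuming the fingertip feature region has already been located as a bounding box (the detection itself, steps E.1.1 to E.1.3, is not shown, and the function name is an illustrative placeholder):

```python
import numpy as np

def keep_feature_region(image, box):
    """Zero out everything outside the fingertip feature region.

    image: H x W x 3 array; box: (row0, row1, col0, col1) of the
    feature region assumed to have been computed in step E.1.3.
    """
    r0, r1, c0, c1 = box
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[r0:r1, c0:c1] = True
    # Keep pixels inside the feature region, remove the rest (E.1.5).
    return np.where(mask[..., None], image, 0)

img = np.ones((8, 8, 3), dtype=np.uint8) * 200
filtered = keep_feature_region(img, (2, 6, 2, 6))
print(filtered[0, 0, 0], filtered[3, 3, 0])  # outside removed, inside kept
```

The resulting image would then feed the further filtering of step E.1.6.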
Optionally, the characteristics of the respective feature points of the multiple biometric images are described using Scale-Invariant Feature Transform (SIFT) feature descriptors.
Optionally, the spatial depth information of the feature points in the multiple biometric images includes spatial position information and color information.
Optionally, the 3D model of the biometric feature includes at least one of the following kinds of 3D data:
spatial shape feature data describing the 3D model;
surface texture feature data describing the 3D model;
surface material and lighting feature data describing the 3D model.
Optionally, before the biometric information is acquired using the camera matrix composed of multiple visible light cameras, the method further includes laying out the multiple visible light cameras in the following manner:
building a support structure, and mounting an arc-shaped load-bearing structure on the support structure;
arranging the multiple visible light cameras on the arc-shaped load-bearing structure.
Optionally, the support structure is a cabinet, the arc-shaped load-bearing structure is disposed inside the cabinet, and the method further includes:
providing a transparent glass cover plate on the side of the cabinet facing the lenses of the multiple visible light cameras;
when a living being's hand is placed on the transparent glass cover plate, acquiring the hand information using the camera matrix composed of the multiple visible light cameras arranged on the arc-shaped load-bearing structure.
Optionally, the method further includes:
before the hand information is acquired using the camera matrix composed of multiple visible light cameras, setting the photographing parameters of each camera through the display interface.
Optionally, a multi-view vision depth calculation method is used, which specifically includes:
acquiring the biometric information using a camera matrix composed of multiple visible light cameras to obtain multiple biometric images;
transmitting the multiple biometric images to a processing unit having a graphics processing unit (GPU) and a central processing unit (CPU);
distributing the image information of the multiple biometric images to blocks of the GPU for computation and, in combination with the centralized scheduling and allocation functions of the CPU, calculating the respective feature points of the multiple biometric images.
Optionally, there are two GPUs, each GPU having multiple blocks.
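The divide-and-schedule pattern above (the CPU dispatching image data to parallel compute blocks) can be sketched on the CPU with a thread pool; this is a stand-in for illustration only, not the GPU block mechanism the patent describes, and the per-tile computation is a placeholder for feature extraction:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def tile_features(tile):
    # Placeholder per-tile computation standing in for feature extraction.
    return float(tile.mean())

def scheduled_features(image, tiles=4):
    # "CPU scheduling": split the image into tiles and dispatch each
    # tile to a worker, mirroring the block-wise distribution of work.
    rows = np.array_split(image, tiles, axis=0)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(tile_features, rows))

img = np.arange(64, dtype=np.float64).reshape(8, 8)
feats = scheduled_features(img)
print(len(feats))  # one feature value per tile
```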
The present invention further provides a biometric 3D data recognition method, comprising the following steps:
S01. acquiring biometric information:
acquiring multiple biometric images of a living being with visible light cameras, and constructing a 3D model of the biometric feature from the multiple biometric images, so as to accomplish biometric 3D data acquisition for the living being;
S02. storing the biometric 3D data:
storing the acquired biometric 3D data with the identity information (I1, I2 ... In) of the living beings as identification tags, forming a database comprising multiple pieces of biometric 3D data (D1, D2 ... Dn);
S03. identifying the target living being:
acquiring the biometric 3D data (T1, T2 ... Tn) of a target living being, using the identity information (I1, I2 ... In) of the target living being to locate the biometric 3D data (D1, D2 ... Dn) stored in the database, and comparing the biometric 3D data (T1, T2 ... Tn) of the target living being with the corresponding biometric 3D data (D1, D2 ... Dn) stored in the database, so as to identify the target living being;
the comparison method comprises the following specific steps: performing feature point fitting using a method based on direct spatial matching, selecting three or more feature points in corresponding rigid regions of the two point clouds as fitting key points, and directly matching the corresponding feature points through a coordinate transformation; given a rough initial alignment of the two point clouds, seeking the rigid transformation between them that minimizes the alignment error;
after the corresponding feature points are matched, aligning the point cloud data after a best fit of the overall surfaces;
calculating the similarity using the least squares method.
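The comparison steps above (a rigid transformation over three or more key points minimizing the alignment error, then a least-squares similarity) can be sketched with the Kabsch/SVD solution for the optimal rotation. This is one standard solver for the rigid fit, chosen here for illustration; the patent does not name a particular solver:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q: N x 3 arrays of corresponding key points (N >= 3).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def alignment_rmse(P, Q, R, t):
    # Least-squares similarity score: RMS residual after alignment.
    res = (R @ P.T).T + t - Q
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))

P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
Q = (Rz @ P.T).T + np.array([0.5, -0.2, 1.0])
R, t = rigid_fit(P, Q)
print(round(alignment_rmse(P, Q, R, t), 6))  # ~0 for a perfectly rigid pair
```

A small residual indicates the two point clouds likely belong to the same biometric feature; a large residual indicates a mismatch.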
Optionally, step S01 further includes:
acquiring multiple biometric images of the living being with multiple visible light cameras;
processing the multiple biometric images to extract the respective feature points of the multiple biometric images;
generating feature point cloud data of the biometric feature based on the extracted feature points of the multiple biometric images;
constructing a 3D model of the biometric feature from the feature point cloud data, so as to accomplish biometric 3D data acquisition.
Optionally, the step of generating feature point cloud data of the biometric feature based on the extracted feature points of the multiple biometric images further includes:
matching feature points according to the characteristics of the extracted feature points of the multiple biometric images, and establishing a matched feature point data set;
calculating, from the optical information of the visible light cameras, the spatial position of each visible light camera relative to the biometric feature, and calculating the spatial depth information of the feature points in the multiple biometric images from those relative positions;
generating the feature point cloud data of the biometric feature from the matched feature point data set and the spatial depth information of the feature points.
Optionally, the characteristics of the respective feature points of the multiple biometric images are described using Scale-Invariant Feature Transform (SIFT) feature descriptors;
the spatial position of each visible light camera relative to the biometric feature is calculated from the optical information of the multiple visible light cameras using bundle adjustment.
Optionally, the spatial depth information of the feature points in the multiple biometric images includes spatial position information and color information.
Optionally, the step of constructing a 3D model of the biometric feature from the feature point cloud data further includes:
setting a reference size for the 3D model to be constructed;
determining, from the reference size and the spatial position information of the feature point cloud data, the spatial dimensions of each feature point in the feature point cloud data, thereby constructing the 3D model of the biometric feature.
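The reference-size step above can be sketched as a uniform scaling of the point cloud so that a chosen extent matches the reference size. The choice of axis and the treatment of the reference size as an overall extent are illustrative assumptions, not details the patent specifies:

```python
import numpy as np

def scale_to_reference(points, reference_size, axis=2):
    # Scale factor chosen so the cloud's extent along `axis` equals
    # the reference size of the 3D model to be constructed.
    extent = points[:, axis].max() - points[:, axis].min()
    return points * (reference_size / extent)

cloud = np.array([[0., 0., 0.], [1., 2., 4.]])
scaled = scale_to_reference(cloud, reference_size=180.0)  # e.g. mm
print(scaled[1])  # extent along z now equals the reference size
```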
Optionally, the 3D model of the biometric feature includes at least one of the following kinds of 3D data:
spatial shape feature data describing the 3D model;
surface texture feature data describing the 3D model;
surface material and lighting feature data describing the 3D model.
Optionally, the biometric information of the living being is acquired using a camera matrix composed of multiple visible light cameras, the camera matrix being laid out in the following manner:
building a support structure, and mounting an arc-shaped load-bearing structure on the support structure;
arranging the multiple visible light cameras on the arc-shaped load-bearing structure.
Optionally, the living being is a human body, and the identity information includes one or more of a name, a gender, an age, and a document number.
Optionally, the document number includes one or more of an ID card number, a passport number, a driver's license number, a social security number, and a military officer's certificate number.
Optionally, when the biometric information is head information, facial information, and/or iris information, the method further includes:
building a base connected to the support structure, and providing on the base a seat for the human body's photographing position;
when the human body is seated on the seat, acquiring the head information, facial information, and/or iris information of the human body using the camera matrix composed of the multiple visible light cameras arranged on the arc-shaped load-bearing structure.
Optionally, a display is provided on the arc-shaped load-bearing structure;
after the 3D model of the head, face, and/or iris is constructed, the 3D data are displayed visually on the display;
before the head information, facial information, and/or iris information is acquired using the camera matrix composed of multiple visible light cameras, the photographing parameters of each visible light camera are set through the display interface.
Embodiments of the present invention provide a visible-light-photography-based biometric 3D data recognition method and system. In the method, biometric information is acquired using a camera matrix composed of multiple visible light cameras, yielding multiple biometric images; the multiple biometric images are then processed to extract their respective feature points; subsequently, feature point cloud data of the biometric feature are generated from the extracted feature points of the multiple biometric images; finally, a 3D model of the biometric feature is constructed from the feature point cloud data, so that biometric 3D data acquisition is accomplished. It can be seen that the embodiments of the present invention use multi-camera visible light control technology for biometric information acquisition, which can significantly improve the acquisition efficiency of biometric information; moreover, the embodiments use the acquired spatial feature information of the biometric feature to fully reconstruct its spatial characteristics, opening up unlimited possibilities for subsequent applications of biometric data. Because the 3D data are looked up by the identity information of the recognition target, the target person's data need not be compared one by one against the massive amount of data in the database, which improves comparison efficiency and greatly increases the speed of identity recognition. The Tianmu point cloud comparison and recognition method based on direct spatial matching is used for feature point fitting, achieving fast fitting and comparison of biometric feature points and thereby fast identity authentication and recognition.
Brief Description of the Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are provided only for the purpose of illustrating the preferred embodiments and are not to be construed as limiting the invention. Throughout the drawings, the same reference numerals denote the same components. In the drawings:
Fig. 1 shows a flowchart of a visible-light-photography-based biometric 3D data recognition method according to an embodiment of the present invention;
Fig. 2 shows a flowchart of a visible-light-photography-based biometric 3D data acquisition method according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of a 3D data recognition system for head information, facial information, and/or iris information according to an embodiment of the present invention;
Fig. 4 shows a schematic diagram of the internal modules and external connections of the load-bearing structure in the 3D data recognition system shown in Fig. 3;
Fig. 5 shows a schematic diagram of the connections among the serial port integration module, the camera matrix, and the central processing module in the 3D data recognition system shown in Fig. 3;
Fig. 6 shows a schematic diagram of a 3D data recognition system device according to another embodiment of the present invention;
Fig. 7 shows a schematic structural diagram of a 3D data acquisition apparatus according to an embodiment of the present invention; and
Fig. 8 shows a schematic structural diagram of a 3D data acquisition apparatus according to another embodiment of the present invention.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
To solve the above technical problem, an embodiment of the present invention provides a visible-light-photography-based biometric 3D data recognition method. Fig. 1 shows a flowchart of a visible-light-photography-based biometric 3D data recognition method according to an embodiment of the present invention:
S01. acquiring biometric information:
acquiring multiple biometric images of a living being with visible light cameras, and constructing a 3D model of the biometric feature from the multiple biometric images, so as to accomplish biometric 3D data acquisition for the living being;
S02. storing the biometric 3D data:
storing the acquired biometric 3D data with the identity information (I1, I2 ... In) of the living beings as identification tags, forming a database comprising multiple pieces of biometric 3D data (D1, D2 ... Dn);
S03. identifying the target living being:
acquiring the biometric 3D data (T1, T2 ... Tn) of a target living being, using the identity information (I1, I2 ... In) of the target living being to locate the biometric 3D data (D1, D2 ... Dn) stored in the database, and comparing the biometric 3D data (T1, T2 ... Tn) of the target living being with the corresponding biometric 3D data (D1, D2 ... Dn) stored in the database, so as to identify the target living being.
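Because step S03 locates the stored record by identity information rather than scanning the whole database, the lookup reduces to a keyed retrieval followed by a single comparison. A minimal sketch using a dictionary keyed by identity; the similarity function here is a placeholder, since the patent's actual comparison is the point cloud fitting described above:

```python
def build_database(records):
    # records: iterable of (identity_info, biometric_3d_data) pairs,
    # i.e., the (I1...In, D1...Dn) pairs of step S02.
    return {identity: data for identity, data in records}

def identify(db, identity, target_data, similar):
    # Retrieve by identity and compare only that one record,
    # instead of comparing against every entry in the database.
    stored = db.get(identity)
    if stored is None:
        return False
    return similar(stored, target_data)

db = build_database([("I1", [1.0, 2.0]), ("I2", [3.0, 4.0])])
same = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) < 1e-6
print(identify(db, "I1", [1.0, 2.0], same))  # True: one comparison, not n
```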
Preferably, as shown in Fig. 2, acquiring the biometric information in step S01 may further include the following steps S102 to S108.
Step S102: acquiring biometric information using multiple visible light cameras to obtain multiple biometric images; preferably, the multiple visible light cameras form a camera matrix to photograph the living being;
Step S104: processing the multiple biometric images to extract the respective feature points of the multiple biometric images;
Step S106: generating feature point cloud data of the biometric feature based on the extracted feature points of the multiple biometric images;
Step S108: constructing a 3D model of the biometric feature from the feature point cloud data, so as to accomplish the acquisition of the living being's biometric 3D data.
This embodiment uses multi-camera visible light control technology for biometric information acquisition, which can significantly improve the acquisition efficiency of biometric information; moreover, this embodiment of the present invention uses the acquired spatial feature information of the biometric feature to fully reconstruct its spatial characteristics, opening up unlimited possibilities for subsequent applications of biometric data.
In another embodiment of the present invention, a single camera may be used for biometric information acquisition. In that case, the camera can travel one full circle along a predetermined track while photographing, thereby photographing the biometric information from multiple angles and obtaining multiple biometric images.
In an optional embodiment of the present invention, generating the feature point cloud data of the biometric feature based on the extracted feature points of the multiple biometric images in step S106 above may specifically include the following steps S1061 to S1063.
Step S1061: matching feature points according to the characteristics of the extracted feature points of the multiple biometric images, and establishing a matched feature point data set.
Step S1062: calculating, from the optical information of the multiple visible light cameras, the spatial position of each camera relative to the biometric feature, and calculating the spatial depth information of the feature points in the multiple biometric images from those relative positions.
Step S1063: generating the feature point cloud data of the biometric feature from the matched feature point data set and the spatial depth information of the feature points.
In step S1061 above, the characteristics of the respective feature points of the multiple biometric images may be described using SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT feature descriptor is a 128-dimensional description vector that can characterize 128 aspects of any feature point across orientations and scales, significantly improving the precision of the feature description; the descriptors are also spatially independent.
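Matching 128-dimensional SIFT descriptors between images, as in step S1061, is commonly done by nearest-neighbor search with Lowe's ratio test. The ratio test is a standard technique added here for illustration, not something the patent specifies. A numpy sketch with random descriptors standing in for real SIFT output:

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    """Match 128-D descriptors d1 (N x 128) against d2 (M x 128)."""
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:  # Lowe's ratio test
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
d2 = rng.normal(size=(5, 128))                           # "image 2" descriptors
d1 = d2[[2, 4]] + rng.normal(scale=0.01, size=(2, 128))  # noisy copies in "image 1"
print(match_descriptors(d1, d2))  # [(0, 2), (1, 4)]
```

The matched pairs form the matched feature point data set that step S1062 consumes.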
In step S1062, the spatial position of each camera relative to the biometric feature is calculated from the optical information of the multiple visible light cameras. An embodiment of the present invention provides an optional scheme in which this spatial position is calculated from the optical information of the multiple visible light cameras using bundle adjustment.
By the definition of bundle adjustment, given a point in 3D space that is seen by multiple cameras located at different positions, bundle adjustment is the process of extracting, from this multi-view information, the coordinates of the 3D point as well as the relative positions and optical information of the cameras.
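Bundle adjustment jointly refines camera poses and 3D points; its core building block, recovering a 3D point from its projections in several cameras with known matrices, can be sketched with linear (DLT) triangulation. The two projection matrices below are assumed for illustration only:

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation of one 3D point.

    proj_mats: list of 3x4 camera projection matrices.
    pixels: list of (u, v) observations of the same point.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)   # null-space vector = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two illustrative cameras: identity pose and a 1-unit x-translation.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

Xtrue = np.array([0.2, -0.1, 4.0])
X = triangulate([P1, P2], [project(P1, Xtrue), project(P2, Xtrue)])
print(np.round(X, 6))  # recovers the original 3D point
```

In a full bundle adjustment, this linear estimate would initialize a nonlinear least-squares refinement of both points and camera parameters.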
Further, the spatial depth information of the feature points in the multiple biometric images mentioned in step S1062 may include spatial position information and color information, that is: the X-axis coordinate of a feature point's spatial position, the Y-axis coordinate of the spatial position, the Z-axis coordinate of the spatial position, the R channel value of the feature point's color information, the G channel value, the B channel value, the Alpha channel value, and so on. The generated feature point cloud data thus contain both the spatial position information and the color information of the feature points, and the format of the feature point cloud data may be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
...
Xn Yn Zn Rn Gn Bn An
where Xn, Yn, and Zn denote the X-, Y-, and Z-axis coordinates of a feature point's spatial position, and Rn, Gn, Bn, and An denote the R, G, B, and Alpha channel values of the feature point's color information.
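The per-line record format above (X Y Z R G B A) can be parsed straightforwardly into position and color arrays; a minimal sketch with illustrative sample data:

```python
import numpy as np

def parse_point_cloud(text):
    """Parse 'X Y Z R G B A' lines into position and color arrays."""
    rows = [list(map(float, ln.split())) for ln in text.strip().splitlines()]
    data = np.array(rows)
    return data[:, :3], data[:, 3:7]  # xyz positions, RGBA colors

sample = """\
0.0 1.5 2.0 255 128 0 255
1.0 0.5 3.0 10 20 30 255
"""
xyz, rgba = parse_point_cloud(sample)
print(xyz.shape, rgba.shape)  # (2, 3) (2, 4)
```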
In the embodiment of the present invention, planar 2D biometric features plus the time dimension constitute 3D biometric features, fully reconstructing the spatial characteristics of the biometric feature and opening up unlimited possibilities for subsequent applications of biometric data.
In an optional embodiment of the present invention, constructing the 3D model of the biometric feature from the feature point cloud data in step S108 above may specifically be: setting a reference size for the 3D model to be constructed; and then determining, from the reference size and the spatial position information of the feature point cloud data, the spatial dimensions of each feature point in the feature point cloud data, thereby constructing the 3D model of the biometric feature.
The constructed 3D model of the biometric feature may include 3D data such as spatial shape feature data describing the 3D model, surface texture feature data describing the 3D model, and surface material and lighting feature data describing the 3D model; the embodiments of the present invention place no limitation on this.
In an optional embodiment of the present invention, the time data at which the multiple visible light cameras acquire the biometric information may also be recorded, so that a 3D model of the biometric feature having a time dimension is constructed from the feature point cloud data and the time data, thereby accomplishing the acquisition of four-dimensional biometric data. The four-dimensional data here may be a collection of multiple 3D data sets captured at equal or unequal time intervals, from different angles, in different orientations, with different expressions, and so on.
In an optional embodiment of the present invention, before the biometric information is collected in step S102 above by the camera matrix composed of multiple visible light cameras, the multiple visible light cameras may first be laid out. The method of laying out the cameras may include the following steps S202 to S204.
Step S202: build a support structure and arrange an arc-shaped load-bearing structure on the support structure; and
Step S204: arrange the multiple visible light cameras on the arc-shaped load-bearing structure.
It can be seen that the embodiments of the present invention use multi-camera visible light control technology to collect biometric information, which can significantly improve collection efficiency. Moreover, the multiple cameras arranged on the arc-shaped load-bearing structure form a camera matrix.
Further, when different biometric features are to be collected, the specific collection manner of step S102 also differs, as described in detail below.
Case 1: if the biometric information is head information, facial information and/or iris information of a human body, a base connected to the support structure may be built, and a seat for fixing the photographing position of the human body is arranged on the base; when a person sits on the seat, the head, face and/or iris information is collected by the camera matrix composed of multiple visible light cameras arranged on the arc-shaped load-bearing structure.
In an optional embodiment, a display may also be arranged on the arc-shaped load-bearing structure; after the 3D model of the head and face is constructed, the head and face 3D data is visually displayed on the display.
In an optional embodiment, before the head and face information is collected by the camera matrix, the photographing parameters of each camera, such as sensitivity, shutter speed, zoom factor and aperture, may be set through the display interface; the embodiments of the present invention are not limited thereto.
Specific embodiment 1
(1-1) The design of the head and face visible light camera 3D data acquisition device is as follows.
A. The head and face visible light camera 3D data acquisition device is shown in FIG. 3 and may include:
base 31, the main bottom support structure of the entire device;
seat 32, which fixes the photographing position of the human body and adjusts its height;
support structure 33, which connects the bottom of the device with the other main mechanisms;
display 34, the operation interface of the device system;
load-bearing structure 35, the fixing structure for the cameras, central processing unit and lights;
camera matrix 36, which collects head and face information of the human body;
strip fill light 37, used to supplement the ambient lighting.
B. Description of the connection relationships of the device
The base 31 is connected to the seat 32 through a connecting structure;
the base 31 is connected to the support structure 33 through a mechanical connecting structure;
the support structure 33 is connected to the load-bearing structure 35 through a mechanical connecting structure;
the display 34 is mechanically fixed to the load-bearing structure 35;
the camera matrix 36 is structurally fixed to the load-bearing structure 35;
the strip fill light 37 is structurally fixed to the load-bearing structure 35.
(1-2) The internal modules of the load-bearing structure 35 are composed as follows.
A. As shown in FIG. 4, the internal modules of the load-bearing structure 35 may consist of the following parts:
a power management module, responsible for providing the various power supplies required by the entire system;
a light management module, through which the central processing module can adjust the brightness of the lights;
a serial port integration module, responsible for two-way communication between the central processing module and the camera matrix;
a central processing module, responsible for system information processing and for controlling the display, lights and seat;
a seat lift management module, responsible for seat height adjustment;
a display driver management module, responsible for driving the display.
B. The internal modules of the load-bearing structure 35 and their external connections are as follows:
1) the power management module supplies power to the camera matrix, the serial port integration module, the light management module, the central processing module, the display driver management module and the seat lift management module;
2) the serial port integration module connects the camera matrix and the central processing module to realize two-way communication between them, as shown in FIG. 5;
2.1) each camera is individually connected to the serial port integration module via a serial port;
2.2) the serial port integration module is connected to the central processing module via a USB interface;
2.3) the central processing module realizes visual operation of the camera matrix through a custom-developed software interface;
2.4) the photographing parameters of the cameras can be set on the operation interface:
sensitivity ISO (range 50–6400);
shutter speed (1/4000 to 1/2 second);
zoom factor (1 to 3.8x);
aperture (large/small);
2.5) the operation interface can perform the power-on initialization of the cameras;
2.6) the operation interface can issue image acquisition commands to the cameras;
2.7) the operation interface can set the storage path of the camera images;
2.8) the operation interface can browse the real-time camera images and switch between cameras;
3) the light management module is connected to the power management module, the central processing module and the external strip fill light;
4) the seat lift management module is connected to the power management module, the central processing module and the external seat; the central processing module adjusts the seat height up and down through the visual interface;
5) the display driver management module is connected to the power management module, the central processing module and the external display;
6) the central processing module is connected to the power management module, the light management module, the seat lift management module, the serial port integration module and the display driver management module.
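As a minimal sketch of the parameter setting exposed by the operation interface (item 2.4 above), the ranges quoted in the description — ISO 50–6400, shutter 1/4000 to 1/2 s, zoom 1–3.8x, aperture large/small — can be validated before being sent to the cameras. The class and field names are hypothetical, not part of the device's actual software.

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Hypothetical container for the photographing parameters of one camera."""
    iso: int = 400
    shutter_s: float = 1 / 250   # exposure time in seconds
    zoom: float = 1.0
    aperture: str = "large"      # "large" or "small"

    def validate(self) -> None:
        # Ranges taken from the operation-interface description above.
        if not 50 <= self.iso <= 6400:
            raise ValueError("ISO must be in [50, 6400]")
        if not 1 / 4000 <= self.shutter_s <= 1 / 2:
            raise ValueError("shutter speed must be in [1/4000, 1/2] s")
        if not 1.0 <= self.zoom <= 3.8:
            raise ValueError("zoom must be in [1.0, 3.8]")
        if self.aperture not in ("large", "small"):
            raise ValueError("aperture must be 'large' or 'small'")

params = CameraParams(iso=200, shutter_s=1 / 500, zoom=2.0)
params.validate()  # raises ValueError if any value is out of range
```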
(1-3) The device is used as follows.
A. Starting the device: after the power switch is turned on, the central processing unit, the camera matrix and the strip fill light are each started.
B. Parameter setting: the photographing parameters of the camera matrix can be set through the display interface.
C. Information collection: after the parameters are set, the camera matrix is started to collect information on the human head and face. Collection is completed within 0.8 seconds, and the collected signals are finally transmitted to the central processing module in digital image (.jpg) format for processing. The core of the central processing module consists of the following parts:
C.1 CPU (Central Processing Unit): responsible for transmission scheduling of the entire digital signal, task allocation, memory management, and some individual computation;
C.2 GPU (Graphics Processing Unit): a special model of GPU is selected, with excellent image processing capability and efficient computing power;
C.3 DRAM (Dynamic Random Access Memory): serving as the temporary storage center for the entire digital signal processing, it needs to match the computing power of the CPU and GPU to obtain the best processing and computing performance.
D. Information processing: the signals collected by the camera matrix are transmitted to the central processing module for signal processing.
D.1 The information processing procedure is as follows.
D.1.1 Filtering of the acquired images
Using the characteristics of the GPU, combined with the matrix-operator nature of image filtering, the filtering can be completed quickly with the support of suitable algorithms.
D.1.2 Feature point extraction from the acquired images
A CPU paired with a GPU matched to the overall performance is used. Because all the information handled by this device is in image format, and the selected GPU has excellent image processing capability, the content of the .jpg files can be distributed evenly across the GPU's blocks. Since the device uses dual GPUs and each GPU has 56 blocks, the 18 .jpg images captured during acquisition are distributed evenly over 112 blocks for computation. Combined with the centralized scheduling and allocation of the CPU, the feature points of each photo can be computed quickly; compared with a CPU alone or a CPU paired with an ordinary GPU, the overall computation time is 1/10 of the latter or less.
D.1.3 Matching of the acquired images and computation of spatial depth information
Image feature points are extracted using a pyramid hierarchy together with a scale-invariance algorithm. Both algorithms exploit the special architecture of the GPU selected for this device, maximizing the computing performance of the system and enabling fast extraction of feature points from the image information.
The feature descriptor used in this process is the SIFT descriptor. A SIFT descriptor has a 128-component feature vector and can describe 128 aspects of any feature point in direction and scale, significantly improving the accuracy of the feature description; the descriptor is also spatially independent.
The special image processing GPU used in this device has excellent capability for computing and processing individual vectors. For SIFT feature vectors with 128 components, such a GPU is especially well suited: its special computing power can be fully exploited, and compared with an ordinary CPU, or a CPU paired with an ordinary GPU, the feature point matching time is reduced by 70%.
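The matching of 128-component descriptors described above can be sketched as nearest-neighbour search with a ratio test; this is a generic illustration in plain NumPy, not the device's GPU implementation, and the ratio threshold of 0.75 is an assumption.

```python
import numpy as np

def match_descriptors(d1: np.ndarray, d2: np.ndarray, ratio: float = 0.75):
    """Match 128-d SIFT-style descriptors d1 (m,128) against d2 (n,128).

    For each descriptor in d1, find its two nearest neighbours in d2 and
    accept the match only if the best distance is clearly smaller than
    the second best (Lowe's ratio test). Returns (i, j) index pairs."""
    matches = []
    for i, desc in enumerate(d1):
        dists = np.linalg.norm(d2 - desc, axis=1)   # distance to every candidate
        j, k = np.argsort(dists)[:2]                # best and second-best
        if dists[j] < ratio * dists[k]:             # unambiguous match only
            matches.append((i, int(j)))
    return matches

# Toy example: the single descriptor of d1 is a noisy copy of d2[1].
rng = np.random.default_rng(0)
d2 = rng.normal(size=(5, 128))
d1 = d2[[1]] + 0.01 * rng.normal(size=(1, 128))
print(match_descriptors(d1, d2))  # [(0, 1)]
```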
After the feature points are matched, the system uses a bundle adjustment algorithm to compute the spatial position of the cameras relative to the head and face; from the spatial coordinates of this relative position, the GPU can quickly compute the depth information of the head and face feature points.
D.1.4 Generation of the feature point cloud data
The depth information of the head and face feature points in space is computed according to D.1.3. Owing to the vector computing capability of the GPU, the spatial position and color information of the head and face feature point cloud can be matched quickly, forming the point cloud information required for building a standard model.
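In miniature, the point cloud produced in D.1.4 pairs each feature point's spatial position (XYZ, from the computed depth) with the colour sampled from the source image (RGB); all coordinate and colour values below are hypothetical.

```python
import numpy as np

# Each row of the resulting array is one feature point: x y z r g b.
xyz = np.array([[0.0, 0.0, 1.20],
                [0.1, 0.0, 1.30],
                [0.0, 0.1, 1.25]])                  # spatial positions (metres)
rgb = np.array([[200, 180, 170],
                [198, 182, 169],
                [205, 179, 172]], dtype=float) / 255.0   # colours from the images
point_cloud = np.hstack([xyz, rgb])                 # shape (N, 6)
assert point_cloud.shape == (3, 6)
```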
E. Feature size calibration: using the feature point cloud size standard, an initial reference size is set for the size of the entire model.
A special calibration target with a spatially determined size is used during information collection. Since the head and face feature point cloud is spatially scale-consistent, once this calibrated size is determined, the distance between any two feature points of the head and face can be computed from the spatial position coordinates of the point cloud.
F. Subsequent data processing: based on the size calibrated in E, the 3D data of the head and face can be obtained by further processing the point cloud data.
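The scale calibration of step E can be sketched as follows: once the physical distance between two points of the calibration target is known, the whole point cloud is rescaled so that the reconstructed distance matches it. The function, indices and 64 mm reference length are illustrative assumptions.

```python
import numpy as np

def calibrate_scale(points: np.ndarray, i: int, j: int, known_mm: float) -> np.ndarray:
    """Rescale a point cloud so that the distance between points i and j
    equals a physically measured reference distance (step E). Because the
    cloud is scale-consistent, one known span fixes every other distance."""
    measured = np.linalg.norm(points[i] - points[j])
    return points * (known_mm / measured)

cloud = np.array([[0.0, 0.0, 0.0],
                  [2.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])                     # reconstructed, unit-free
scaled = calibrate_scale(cloud, 0, 1, known_mm=64.0)    # reference span is 64 mm
print(np.linalg.norm(scaled[0] - scaled[1]))            # 64.0
```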
The 3D data comprises the following files:
.obj — describes the spatial shape features of the 3D model;
.jpg — describes the surface texture features of the 3D model;
.mtl — describes the surface material and lighting features of the 3D model.
G. The head and face 3D data is visually displayed on the display.
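The .obj file listed above is a plain-text format; a minimal writer for the spatial shape part can be sketched as below. Only `v` (vertex) and `f` (face, 1-based indices) statements are emitted; the texture (.jpg) and material (.mtl) files would be referenced via `mtllib`/`usemtl` statements, omitted here.

```python
def write_obj(path: str, vertices, faces) -> None:
    """Write a minimal Wavefront .obj file describing a triangulated surface.

    vertices: iterable of (x, y, z); faces: iterable of (a, b, c) with
    0-based vertex indices, converted to the 1-based indices .obj uses."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# One triangle as a toy model.
write_obj("face.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```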
Specific embodiment 2
(2-1) The design of the human hand visible light camera 3D data acquisition device is as follows.
A. The human hand visible light camera 3D data acquisition device may include:
cabinet 61, the main body support structure of the entire device;
camera matrix 62, which collects human hand features, specifically finger features and/or palm features;
transparent glass cover plate 63, the placement surface for the human hand;
central control module 64, the information processing, analysis and display module of the system;
virtual hand position 65, indicating where the human hand is to be placed;
shadowless lighting system 66, which provides the lighting environment for 3D modeling of the hand.
B. Description of the connection relationships of the device
The cabinet 61 is mechanically fixed to the camera matrix 62;
the cabinet 61 is connected to the transparent glass cover plate 63 through a fixed mechanical structure;
the central control module 64 is mechanically fixed to the cabinet 61;
the shadowless lighting system 66 is mechanically fixed to the cabinet 61;
the virtual hand position 65 ensures that the whole human hand falls within the information collection range of the camera matrix 62.
(2-2) The central control module 64 and its external connections.
A. The entire central control module consists of the following parts:
a power management module, which supplies power to the entire device;
a serial port integration module, responsible for command and data transfer between the camera matrix and the central processing module;
a light management module, responsible for managing the external shadowless lighting system;
a display driver module, responsible for managing the display module;
a central processing module, which analyzes, computes and processes the data collected by the entire system;
a display module, the operation interface of the system.
B. The connection relationships of the entire central control module are as follows:
B.1 the power management module supplies power to the camera matrix, the serial port integration module, the central processing module, the light management module, the display driver module and the display module;
B.2 the serial port integration module realizes two-way communication between the camera matrix and the central processing module;
B.3 the light management module supplies power to the shadowless lighting system and is responsible for adjusting the parameters of the lighting system;
B.4 the central processing module is connected to the serial port integration module, the light management module, the display driver module and the power management module;
B.5 the display driver module connects the central processing module and the display module.
(2-3) The device is used as follows.
A. Starting the device: after the power switch is turned on, the power management module is started to supply power to each module of the system, and the camera matrix, central control module, shadowless lighting system and display module are started at the same time.
B. Placing the human hand: the hand is placed on the transparent glass cover plate, and its position is adjusted so that all hand information falls within the collection range. Since the device uses a shadowless lighting system, the hand information collected from all angles is free of shadows, which can significantly improve the efficiency and accuracy of feature point collection.
C. Parameter setting: the photographing parameters of the camera matrix can be set through the display interface.
D. Information collection: after the parameters are set, the camera matrix is started to collect information on the hand; the collected information is transmitted in picture format to the central control module for analysis and processing.
E. Information processing: the signals collected by the camera matrix are transmitted to the central control module for signal processing. The information processing procedure is as follows.
D.1 Filtering of the acquired images
Since the main feature collection points of the human hand are concentrated at the fingertips, and the fingerprint at the fingertip is a uniquely identifying biometric feature, after the feature points of the fingers are collected, the non-fingertip information must first be filtered out algorithmically. The overall approach of the algorithm is as follows:
D.1.1 build a library of fingertip and second-knuckle joint-crease files and a feature library of finger joint creases;
D.1.2 import the feature library to perform feature recognition on the information collected from the fingers;
D.1.3 after feature recognition, compute the area of the feature region to determine the extent of the fingertip feature region;
D.1.4 segment the image into the feature region and the non-finger feature region;
D.1.5 remove the information of the non-finger feature region from the original image;
D.1.6 apply further filtering to the information of the new feature region.
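Steps D.1.4–D.1.5 above amount to masking: once the fingertip feature region has been recognised, everything outside it is removed from the image before further filtering. The sketch below illustrates only this masking step; the `mask` stands in for the output of the feature-library recognition of D.1.1–D.1.3, which is not reproduced here.

```python
import numpy as np

def keep_fingertip_region(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out every pixel outside the recognised fingertip region
    (steps D.1.4 segmentation and D.1.5 removal, combined)."""
    out = np.zeros_like(image)
    out[mask] = image[mask]          # keep only fingertip pixels
    return out

img = np.arange(16, dtype=float).reshape(4, 4)   # toy grayscale image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                            # hypothetical fingertip region
filtered = keep_fingertip_region(img, mask)
print(filtered.sum())                            # 5+6+9+10 = 30.0
```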
D.2 Feature point extraction from the acquired images
D.3 Matching of the acquired images and computation of spatial depth information
D.4 Generation of the feature point cloud data
F. Subsequent data processing: by further processing the point cloud data, the texture structure of the hand can be obtained.
G. Hand 3D data display: the hand 3D data is visually displayed on the display.
Here, D.2, D.3 and D.4 are as described above and are not repeated.
Preferably, in step S02, the biometric 3D data collected in step S01 is stored, with the identity information (I1, I2 … In) of the organisms used as identification marks, forming a database comprising multiple pieces of biometric 3D data (D1, D2 … Dn). For example, 3D data D1 is stored with the identity information I1 of that organism as its file name, 3D data D2 of another organism is stored with that organism's identity information I2 as its file name, and so on, forming a database comprising the 3D data of n organisms.
When the collected object, i.e. the organism, is a human body, the identity information I includes but is not limited to one or more of the person's name, gender, age and document number; the document number may include one or more of those commonly used in daily life, such as an ID card number, passport number, driver's license number, social security number or military officer certificate number.
Preferably, when identifying the target organism in step S03, the Yare Eyes point cloud match recognition method is used to compare the biometric 3D data (T1, T2 … Tn) of the target organism (i.e. the organism whose identity is to be recognized) with the biometric 3D data (D1, D2 … Dn) stored in the database, so as to identify the target organism. First, the identity information of the target organism, such as a person's ID card number, is input; the 3D data (D1, D2 … Dn) already stored in the database under that ID card number as the file name can thus be found quickly, without comparing the target person's data one by one against the massive data in the database, which improves comparison efficiency and greatly increases recognition speed. The currently collected 3D data (T1, T2 … Tn) of the person is then compared with the 3D data retrieved from the database, and finally it is determined whether the person's identity matches, thereby realizing identity authentication. Specifically, the Yare Eyes point cloud match recognition method includes the following steps:
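The identity-keyed storage and lookup of steps S02/S03 can be sketched as below: each record is stored under its identity information as the file name, so a probe is retrieved and compared against one record instead of scanning the whole database. The directory layout and JSON encoding are illustrative assumptions, not the patented format.

```python
import json
from pathlib import Path

def store_record(db_dir: Path, identity: str, data_3d: dict) -> None:
    """Step S02: store biometric 3D data with the identity info as file name."""
    db_dir.mkdir(parents=True, exist_ok=True)
    (db_dir / f"{identity}.json").write_text(json.dumps(data_3d))

def load_record(db_dir: Path, identity: str) -> dict:
    """Step S03 lookup: fetch the single record keyed by the input identity."""
    return json.loads((db_dir / f"{identity}.json").read_text())

db = Path("biometric_db")
store_record(db, "I1", {"points": [[0, 0, 1.2]]})   # D1 stored under identity I1
print(load_record(db, "I1")["points"])               # [[0, 0, 1.2]]
```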
S301. feature point fitting;
S302. overall best fit of the surface;
S303. similarity calculation.
Preferably, the Yare Eyes point cloud match recognition method further includes the following specific steps:
feature point fitting is performed by a method based on direct spatial-domain matching: in the corresponding rigid regions of the two point clouds, three or more feature points are selected as fitting key points, and the corresponding feature points are matched directly through coordinate transformation;
after the feature points are matched, the point cloud data is aligned by an overall best fit of the surfaces;
the similarity is calculated by the least squares method.
The recognition process and working principle of the Yare Eyes point cloud match recognition method are as follows. First, a point cloud is the basic element of a 3D model; it contains spatial coordinate information (XYZ) and color information (RGB). The attributes of a point cloud include spatial resolution, point position accuracy, surface normal vectors, and so on. Its features are not affected by external conditions and do not change under translation or rotation. Reverse-engineering software such as Imageware, Geomagic, CATIA, CopyCAD and Rapidform can edit and process point clouds. The spatial-domain direct matching method specific to the Yare Eyes point cloud match recognition method includes the iterative closest point (ICP) algorithm. The ICP method is usually divided into two steps: first, feature point fitting; second, overall best fit of the surface. The purpose of first fitting and aligning the feature points is to find and align the two point clouds to be compared in the shortest time. However, the method is not limited to this. For example, it may be:
In the first step, in the corresponding rigid regions of the two point clouds, three or more feature points are selected as fitting key points, and the corresponding feature points are matched directly through coordinate transformation.
ICP is used for the registration of curves or surface segments and is a very effective tool in 3D data reconstruction. Given a rough initial alignment of two 3D models, ICP iteratively seeks the rigid transformation between the two that minimizes the alignment error, thereby registering their spatial geometric relationship.
Given two point sets P1 = {p_i | i = 1, 2, …, m} and P2 = {q_j | j = 1, 2, …, n}, whose elements represent coordinate points on the surfaces of the two models, the ICP registration technique iteratively finds the closest corresponding points, builds a transformation matrix, and applies the transformation to one of the sets until a convergence condition is reached and the iteration stops. It is coded as follows:
1.1 ICP algorithm
Input: P1, P2.
Output: the transformed P2.
P2(0) = P2, l = 0;
Do
  For each point p_i in P2(l)
    find the closest point y_i in P1;
  End For
  compute the correspondence set Y(l) = {y_i} and the registration error E;
  If E is greater than a certain threshold
    compute the transformation matrix T(l) between P2(l) and Y(l);
    P2(l+1) = T(l)·P2(l), l = l + 1;
  Else
    stop;
  End If
While ||P2(l+1) − P2(l)|| > threshold;
where the registration error E is the mean squared distance between the corresponding points, E = (1/n) Σ_i ||y_i − p_i(l)||².
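A minimal runnable sketch of the ICP loop described above, using nearest-neighbour correspondences and a closed-form least-squares rigid transform (the Kabsch/SVD solution) in place of the unspecified transformation-matrix step; brute-force nearest-neighbour search and the fixed iteration count are simplifications for illustration.

```python
import numpy as np

def best_rigid_transform(P, Y):
    """Least-squares rotation R and translation t mapping P onto Y (Kabsch)."""
    cp, cy = P.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Y - cy))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cy - R @ cp

def icp(P1, P2, iters=20):
    """Minimal ICP: repeatedly pair each point of P2 with its nearest
    neighbour in P1 (brute force), then apply the best rigid transform."""
    P = P2.copy()
    for _ in range(iters):
        d = np.linalg.norm(P[:, None] - P1[None], axis=2)
        Y = P1[d.argmin(axis=1)]                  # closest points in P1
        R, t = best_rigid_transform(P, Y)
        P = P @ R.T + t
    return P

# Toy check: P2 is a slightly rotated/translated copy of P1; ICP undoes it.
P1 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
th = 0.1
Rz = np.array([[np.cos(th), -np.sin(th), 0],
               [np.sin(th),  np.cos(th), 0],
               [0, 0, 1]])
P2 = P1 @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(P1, P2)
print(np.abs(aligned - P1).max() < 1e-6)   # True
```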
1.2 Matching based on local feature points:
Taking facial information recognition as an example, the face model is mainly divided into a rigid part and a plastic part; plastic deformation affects the accuracy of the alignment and thus the similarity. The plastically deformable part will differ locally between the first and second data acquisitions. One solution is to select feature points only in the rigid region. A feature point is an attribute extracted from an object that remains stable under certain conditions; the commonly used iterative closest point (ICP) method is then applied to these feature points for fitting and alignment.
First, regions of the face that are little affected by expression are extracted, such as the nose tip, the outer corners of the eye sockets, the forehead region, the cheekbone region and the ear region. On the human hand, the knuckles form the rigid region and the palm the plastic region, so selecting feature points in the finger region is optimal. The iris is a rigid model.
Requirements on the feature points:
1) completeness: they should contain as much object information as possible so as to distinguish the object from other categories of objects; 2) compactness: the amount of data required to express them should be as small as possible; 3) the features should preferably remain unchanged under rotation, translation and mirror transformations of the model.
In 3D biometric recognition, two 3D biometric model point clouds are aligned and the similarity of the input model is calculated, with the registration error serving as the difference metric.
Second step: after the feature points are best fitted, the point cloud data is aligned by an overall best fit of the surfaces.
Third step: similarity calculation by the least squares method.
The least squares method is a mathematical optimization technique. It finds the best function match for the data by minimizing the sum of squared errors. With the least squares method, unknown data can be obtained easily while minimizing the sum of the squared errors between the obtained data and the actual data. The least squares method can also be used for curve fitting, and some other optimization problems can likewise be expressed in least-squares form by minimizing energy or maximizing entropy. It is commonly used to solve curve fitting problems and, by extension, the complete fitting of surfaces. An iterative algorithm can speed up data convergence and quickly find the optimal solution.
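As a one-dimensional illustration of the least squares principle used here, fitting a line y = a·x + b by minimizing the sum of squared errors reduces to solving a small linear system; NumPy's built-in least-squares solver does this directly. The data values are made up for the example.

```python
import numpy as np

# Noise-free data lying exactly on y = 2x + 1, so the fit recovers a=2, b=1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
A = np.vstack([x, np.ones_like(x)]).T       # design matrix: columns [x, 1]
(a, b), residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(float(a), 6), round(float(b), 6))   # 2.0 1.0
```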
如果3D数据模型是以STL文件格式输入的,则通过计算点云与三角片的距离来确定其偏差。因此,该方法需要对每个三角面片建立平面方程,其偏差为点到平面的距离。而对于3D数据模型为IGES或STEP模型,由于自由曲面表达形式为NURBS面,所以点到面的距离计算需要用到数值优化的方法进行计算。通过迭代计算点云中各点至NURBS曲面的最小距离来表达偏差,或将NURBS曲面进行指定尺度离散,用点与点的距离近似表达点偏差,或将其转换为STL格式进行偏差计算。不同的坐标对齐及偏差计算方法,获得的检测结果也不同。对齐误差的大小将直接影响检测精度及评估报告的可信度。If the 3D data model is input in the STL file format, the deviation is determined by calculating the distance between the point cloud and the triangle. Therefore, the method requires a plane equation for each triangular patch whose deviation is the distance from the point to the plane. For the 3D data model is IGES or STEP model, since the free-form surface expression is NURBS surface, the point-to-surface distance calculation needs to be calculated by numerical optimization method. The deviation is expressed by iteratively calculating the minimum distance from each point in the point cloud to the NURBS surface, or the NURBS surface is discretized at a specified scale, and the point deviation is approximated by the point-to-point distance, or converted into an STL format for deviation calculation. Different coordinate alignment and deviation calculation methods have different detection results. The size of the alignment error will directly affect the accuracy of the detection and the credibility of the evaluation report.
Best-fit alignment averages the detected deviation over the whole model, terminating the iterative alignment when the overall deviation reaches its minimum. A 3D analysis is then performed on the registration result, and the result object is output as the root mean square (RMS) of the error between the two models: the larger the RMS, the greater the difference between the two models at that location, and vice versa. Whether the target matches the reference object is judged from the proportion of coincidence in the comparison.
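The comparison step above can be sketched as follows: express the registration error as the RMS of per-point deviations, and judge a match by the proportion of points whose deviation stays within a tolerance. The tolerance and ratio threshold below are illustrative assumptions, not values given in the patent.

```python
import math

def rms(deviations):
    # Root mean square of the per-point registration errors.
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

def coincidence_ratio(deviations, tolerance):
    # Fraction of points whose deviation is within the tolerance.
    return sum(1 for d in deviations if abs(d) <= tolerance) / len(deviations)

def is_match(deviations, tolerance=0.5, min_ratio=0.9):
    # Hypothetical decision rule: match if enough points coincide.
    return coincidence_ratio(deviations, tolerance) >= min_ratio

if __name__ == "__main__":
    devs = [0.1, -0.2, 0.05, 0.3, -0.1]
    print(rms(devs))
    print(is_match(devs))  # True: every deviation is within the 0.5 tolerance
```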
The present invention also provides a biometric 3D data recognition system based on visible light photography, which comprises the following devices:
a biometric information acquisition device, configured to acquire a plurality of biometric images of a living body and to construct a 3D model of the biometric feature from the plurality of biometric images, thereby achieving biometric 3D data acquisition of the living body;
a biometric 3D data storage device, configured to store the acquired biometric 3D data using the identity information (I1, I2 … In) of the living bodies as identification marks, forming a database comprising a plurality of biometric 3D data records (D1, D2 … Dn);
an identity recognition device for a target living body, configured to acquire the biometric 3D data (T1, T2 … Tn) of the target living body, to use the identity information (I1, I2 … In) of the target living body to locate the biometric 3D data (D1, D2 … Dn) stored in the database, and to compare the biometric 3D data (T1, T2 … Tn) of the target living body with the corresponding biometric 3D data (D1, D2 … Dn) stored in the database, so as to identify the target living body.
Preferably, the biometric information acquisition device comprises:
an image acquisition unit, configured to acquire biometric information using a camera matrix composed of a plurality of cameras, obtaining a plurality of biometric images;
a feature point extraction unit, configured to process the plurality of biometric images and extract the respective feature points of each image;
a point cloud generation unit, configured to generate feature point cloud data of the biometric feature based on the feature points extracted from the plurality of biometric images;
a 3D model construction unit, configured to construct a 3D model of the biometric feature from the feature point cloud data, thereby achieving biometric 3D data acquisition.
It should be noted that the biometric features in the embodiments of the present invention are not limited to the head, face and/or iris and hand described above; they may also include other biometric features, such as the feet, and the embodiments of the present invention impose no limitation in this regard.
The biometric 3D data recognition method and system based on visible light photography provided by the embodiments of the present invention are further described below through specific embodiments.
The biometric 3D data acquisition and recognition system of the present invention may share a single system with the acquisition system, or two separate systems may be used.
In one embodiment of the present invention, the 3D data recognition system for head information, facial information and/or iris information is shown in FIG. 3. The system may include:
a base 31, serving as the main bottom support structure of the entire device;
a seat 32, fixing the position of the photographed human body and adjusting its height;
a support structure 33, connecting the bottom of the device with the other main mechanisms;
a display 34, providing the operating interface of the device system;
a carrying structure 35, the fixing structure for the cameras, the central processor and the lights;
a camera matrix 36, for 3D data acquisition of human head information, facial information and/or iris information;
a strip fill light 37, for supplementary ambient lighting.
Connection relationships of the device:
the base 31 is connected to the seat 32 through a connecting structure;
the base 31 is connected to the support structure 33 through a mechanical connecting structure;
the support structure 33 is connected to the carrying structure 35 through a mechanical connecting structure;
the display 34 is mechanically fixed on the carrying structure 35;
the camera matrix 36 is structurally fixed on the carrying structure 35;
the strip fill light 37 is structurally fixed on the carrying structure 35.
(1-2) The internal modules of the carrying structure 35 are composed as follows.
As shown in FIG. 4, the internal modules of the carrying structure 35 may consist of the following parts:
a power management module, responsible for providing the various power supplies required by the entire system;
a light management module, through which the central processing module can adjust the brightness of the lights;
a serial port integration module, responsible for two-way communication between the central processing module and the camera matrix;
a central processing module, responsible for system information processing and for controlling the display, lights and seat;
the central processing module further includes a recognition module for identifying the target living body: it first compares the biometric 3D data (T1, T2 … Tn) of the target living body with the corresponding biometric 3D data (D1, D2 … Dn) stored in the database, and then uses the Tianmu point cloud comparison recognition method to identify the target living body;
a seat lift management module, responsible for seat height adjustment;
a display driver management module, responsible for driving the display.
The internal modules of the carrying structure 35 and their external connections are as follows:
1) the power management module supplies power to the camera matrix, the serial port integration module, the light management module, the central processing module, the display driver management module and the seat lift management module;
2) the serial port integration module connects the camera matrix and the central processing module, enabling two-way communication between them, as shown in FIG. 5;
2.1) each camera is individually connected to the serial port integration module via a serial port;
2.2) the serial port integration module is connected to the central processing module via a USB interface;
2.3) the central processing module provides visual operation of the camera matrix through a custom-developed software interface;
2.4) the camera shooting parameters can be set on the operating interface:
sensitivity ISO (range 50–6400);
shutter speed (1/4000 s to 1/2 s);
zoom factor (1× to 3.8×);
aperture (large/small);
2.5) the operating interface can perform the power-on initialization of the cameras;
2.6) the operating interface can issue camera image acquisition commands;
2.7) the operating interface can set the camera image storage path;
2.8) the operating interface can browse the real-time camera images and switch between cameras;
3) the light management module is connected to the power management module, the central processing module and the external strip fill light;
4) the seat lift management module is connected to the power management module, the central processing module and the external seat; the central processing module adjusts the seat height up and down through the visual interface;
5) the display driver management module is connected to the power management module, the central processing module and the external display;
6) the central processing module is connected to the power management module, the light management module, the seat lift management module, the serial port integration module and the display driver management module.
The device is used as follows.
A. Start the device: after the power switch is turned on, the central processor, the camera matrix and the strip fill light are started respectively.
B. Parameter setting: the various shooting parameters of the camera matrix can be set through the display interface.
C. Information acquisition: after the parameters are set, the camera matrix is started to acquire information of the human head and face. The acquisition is completed within 0.8 seconds, and the acquired signals are finally transmitted to the central processing module in digital image (.jpg) format for processing. The core of the central processing module consists of the following parts:
C.1 CPU (Central Processing Unit): responsible for scheduling the transfer of all digital signals, task allocation, memory management and some individual computations;
C.2 GPU (Graphics Processing Unit): a special model of GPU is selected, with excellent image processing capability and efficient computing power;
C.3 DRAM (Dynamic Random Access Memory): as the temporary storage center for the entire digital signal processing chain, it must match the computing power of the CPU and GPU to obtain the best processing and computing performance.
D. Information processing: the signals acquired by the camera matrix are transmitted to the central processing module for signal processing.
D.1 The information processing proceeds as follows.
D.1.1 Filtering of the acquired images
By exploiting the characteristics of the GPU together with the matrix-operator nature of image filtering, image filtering can be completed quickly with the support of suitable algorithms.
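The matrix-operator nature of image filtering (step D.1.1) can be illustrated with a 3×3 mean filter applied to a grayscale image stored as a list of rows. The patent runs such operators on the GPU; this pure-Python sketch only shows the operator itself.

```python
def mean_filter_3x3(img):
    """Return a copy of img with each interior pixel replaced by its 3x3 mean."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(window) / 9.0
    return out

if __name__ == "__main__":
    img = [[0, 0, 0],
           [0, 9, 0],
           [0, 0, 0]]
    print(mean_filter_3x3(img)[1][1])  # 1.0: the noisy spike is averaged away
```

Because every output pixel depends only on a small fixed window, such filters parallelize naturally, which is why a GPU accelerates them so effectively.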
D.1.2 Feature point extraction from the acquired images
A CPU is used together with a GPU matched to the overall performance. Because the various kinds of information handled by this device are all in image format, and the GPU has excellent image processing capability, the content of the .jpg images can be evenly distributed across the GPU blocks. Since the device uses dual GPUs and each GPU has 56 blocks, the 18 .jpg images captured during acquisition are evenly distributed over 112 blocks for computation. Combined with the centralized scheduling and allocation of the CPU, the feature points of each photograph can be computed quickly; compared with a CPU alone, or a CPU paired with an ordinary GPU, the overall computation time is 1/10 of the latter or less.
D.1.3 Matching of the acquired images and computation of spatial depth information
Image feature points are extracted using a pyramid hierarchy together with a spatial scale-invariance algorithm. Both of these algorithms are tailored to the special architecture of the GPU selected for this device, maximizing the computing performance of the system and achieving fast extraction of feature points from the image information.
The feature descriptor used in this process is the SIFT feature descriptor. A SIFT descriptor has a 128-dimensional feature vector that can describe the characteristics of any feature point across orientations and scales, significantly improving the accuracy of the feature description; at the same time, the descriptor is spatially independent.
The special image processing GPU used in this device has excellent capability for computing and processing individual vectors. For 128-dimensional SIFT feature vectors, processing under such a GPU is particularly well suited and can fully exploit its special computing power; compared with an ordinary CPU, or a CPU paired with an ordinary GPU, the feature point matching time is reduced by 70%.
After the feature points are matched, the system uses a bundle adjustment algorithm to compute the spatial position of each camera relative to the head and face. From the spatial coordinates of these relative positions, the GPU can quickly compute the depth information of the head and face feature points.
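Full bundle adjustment is beyond a short example, but the final step — turning known relative camera positions into depth — can be sketched for the simplest case of two rectified cameras, where depth = focal length × baseline / disparity. The numbers below are illustrative assumptions, not parameters from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature point seen by two horizontally offset cameras."""
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    # Similar triangles: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # A feature at pixel x=640 in the left image and x=600 in the right image
    # has a disparity of 40 px; with f=1000 px and a 0.1 m baseline:
    print(depth_from_disparity(1000.0, 0.1, 40.0))  # 2.5 (metres)
```

With more than two cameras, bundle adjustment refines all camera poses and 3D points jointly by minimizing reprojection error, but the geometric relationship between relative pose and depth is the same.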
D.1.4 Generation of feature point cloud data
From the depth information of the head and face feature points computed in D.1.3, and thanks to the vector computing capability of the GPU, the spatial positions and color information of the head and face feature point cloud can be matched quickly, forming the point cloud information required to build a standard model.
E. Feature size calibration: an initial reference size is set for the entire model through a feature point cloud size standard.
A special calibration object of known spatial size is used during information acquisition. Since the head and face feature point cloud is dimensionally consistent in space, once the size is fixed by this special calibration, the distance between any two feature points of the head and face can be computed from the spatial position coordinates of the point cloud.
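The size calibration in step E can be sketched as follows: a calibration object of known physical length fixes a single scale factor, which then converts any point-to-point distance in the point cloud to a physical distance. This is a generic sketch; the patent does not specify the calibration target used.

```python
import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def scale_factor(calib_p, calib_q, known_length):
    """Scale that maps point-cloud units to physical units."""
    return known_length / distance(calib_p, calib_q)

def physical_distance(p, q, scale):
    # Any feature-to-feature distance in physical units.
    return distance(p, q) * scale

if __name__ == "__main__":
    # Calibration marks 10 cm apart appear 2.0 units apart in the cloud.
    s = scale_factor((0, 0, 0), (2, 0, 0), known_length=10.0)
    print(physical_distance((0, 0, 0), (0, 4, 0), s))  # 20.0 (cm)
```

A single scale suffices precisely because the reconstructed cloud is dimensionally consistent: the reconstruction is correct up to one global scale factor.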
F. Subsequent data processing: based on the size calibrated in E, 3D data of the head, face or iris can be obtained by further processing the point cloud data.
The 3D data format comprises the following files:
.obj — describes the spatial shape characteristics of the 3D model;
.jpg — describes the surface texture characteristics of the 3D model;
.mtl — describes the surface material and lighting characteristics of the 3D model.
G. The head and face 3D data are displayed visually on the display.
The recognition module in the central processing module locates the biometric 3D data (D1, D2 … Dn) stored in the database according to the identity information (I1, I2 … In) of the target living body, and compares the biometric 3D data (T1, T2 … Tn) of the target living body with the corresponding biometric 3D data (D1, D2 … Dn) stored in the database, so as to identify the target living body and output the recognition result on the display.
Based on the visible-light-photography biometric 3D data recognition method provided by the above embodiments, and based on the same inventive concept, an embodiment of the present invention further provides a biometric 3D data acquisition device based on a visible light camera.
Of course, in another embodiment of the present invention, the seat 32 may be omitted. As shown in FIG. 6, the device includes a support base 61, a recognition device 62, a control and display device 63, a camera matrix 64, an arc-shaped carrying mechanism 65 and an arc-shaped fill light 66; during acquisition and recognition, the human body stands in the U-shaped area enclosed by the device.
FIG. 7 shows a schematic structural diagram of a biometric 3D data acquisition device based on visible light photography according to an embodiment of the present invention. As shown in FIG. 7, the device may include an image acquisition unit 910, a feature point extraction unit 920, a point cloud generation unit 930 and a 3D model construction unit 940.
The image acquisition unit 910 is configured to acquire biometric information using a camera matrix composed of a plurality of visible light cameras, obtaining a plurality of biometric images.
The feature point extraction unit 920, coupled to the image acquisition unit 910, is configured to process the plurality of biometric images and extract the respective feature points of each image.
The point cloud generation unit 930, coupled to the feature point extraction unit 920, is configured to generate feature point cloud data of the biometric feature based on the feature points extracted from the plurality of biometric images.
The 3D model construction unit 940, coupled to the point cloud generation unit 930, is configured to construct a 3D model of the biometric feature from the feature point cloud data, thereby achieving biometric 3D data acquisition.
In an optional embodiment of the present invention, the point cloud generation unit 930 is further configured to:
match feature points according to the characteristics of the feature points extracted from the plurality of biometric images, and establish a matched feature point data set;
compute the spatial position of each camera relative to the biometric feature according to the optical information of the plurality of visible light cameras, and compute the spatial depth information of the feature points in the plurality of biometric images from these relative positions;
generate the feature point cloud data of the biometric feature from the matched feature point data set and the spatial depth information of the feature points.
In an optional embodiment of the present invention, the characteristics of the feature points in the plurality of biometric images are described using scale-invariant feature transform (SIFT) feature descriptors.
In an optional embodiment of the present invention, the point cloud generation unit 930 is further configured to:
compute the spatial position of each camera relative to the biometric feature by bundle adjustment, according to the optical information of the plurality of visible light cameras.
In an optional embodiment of the present invention, the spatial depth information of the feature points in the plurality of biometric images includes spatial position information and color information.
In an optional embodiment of the present invention, the 3D model construction unit 940 is further configured to:
set a reference size for the 3D model to be constructed;
determine the spatial size of each feature point in the feature point cloud data according to the reference size and the spatial position information of the feature point cloud data, thereby constructing the 3D model of the biometric feature.
In an optional embodiment of the present invention, the 3D model of the biometric feature includes at least one of the following kinds of 3D data:
data describing the spatial shape characteristics of the 3D model;
data describing the surface texture characteristics of the 3D model;
data describing the surface material and lighting characteristics of the 3D model.
In an optional embodiment of the present invention, as shown in FIG. 8, the device shown in FIG. 7 above may further include:
a camera matrix layout unit 1010, coupled to the image acquisition unit 910, configured to lay out the plurality of visible light cameras in the following manner before the image acquisition unit 910 acquires the biometric information using the camera matrix:
build a support structure, and provide an arc-shaped carrying structure on the support structure;
arrange the plurality of visible light cameras on the arc-shaped carrying structure.
In this embodiment, the plurality of cameras arranged on the arc-shaped carrying structure form the camera matrix.
In an optional embodiment of the present invention, if the biometric information is head and face information, the image acquisition unit 910 is further configured to:
build a base connected to the support structure, and provide on the base a seat for fixing the photographing position of the living body;
when the living body is seated, acquire the head and face information using the camera matrix composed of the plurality of visible light cameras arranged on the arc-shaped carrying structure.
In an optional embodiment of the present invention, as shown in FIG. 8, the device shown in FIG. 7 above may further include:
a first display unit 1020, coupled to the 3D model construction unit 940, configured to provide a display on the arc-shaped carrying structure; after the 3D model of the head and face is constructed, the head and face 3D data are displayed visually on the display.
In an optional embodiment of the present invention, the image acquisition unit 910 is further configured to:
set the shooting parameters of each camera through the display interface before acquiring the head and face information using the camera matrix composed of the plurality of visible light cameras.
Embodiments of the present invention provide a biometric 3D data recognition method and system based on visible light photography. In the method, biometric information is acquired using a camera matrix composed of a plurality of visible light cameras, obtaining a plurality of biometric images; the plurality of biometric images are then processed to extract the respective feature points of each image; subsequently, feature point cloud data of the biometric feature are generated from the extracted feature points; finally, a 3D model of the biometric feature is constructed from the feature point cloud data, achieving biometric 3D data acquisition. It can be seen that the embodiments of the present invention use multi-camera visible-light control technology for biometric information acquisition, which can significantly improve acquisition efficiency; moreover, by using the spatial feature information of the acquired biometric features, the embodiments completely restore the spatial characteristics of the biometric features, providing unlimited possibilities for subsequent applications of biometric data.
Further, based on the parallel computation of the central processor and the graphics processor, the embodiments of the present invention can process feature information and generate point clouds quickly and efficiently. Using the scale-invariant feature transform (SIFT) descriptor combined with the parallel computing power of a special graphics processor, feature point matching and spatial feature point cloud generation can be achieved rapidly. In addition, with a unique size calibration method, the spatial dimension between any feature points of the biometric feature can be extracted accurately and quickly, and a 3D model of the biometric feature can be generated, achieving 3D data acquisition.
Numerous specific details are set forth in the description provided herein. It will be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the above description of exemplary embodiments. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single previously disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment can be adaptively changed and placed in one or more devices different from that of the embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and they may furthermore be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the visible-light-camera-based biometric 3D data acquisition device according to embodiments of the present invention. The invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate the invention rather than limit it, and that those skilled in the art can devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any order; these words may be interpreted as names.
By now, those skilled in the art will appreciate that, although a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be determined or derived directly from this disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and deemed to cover all such other variations or modifications.

Claims (13)

  1. A biometric 3D data acquisition method, comprising:
    A. Starting the device: after the power switch is turned on, the power management module is started to supply power to each module of the system, and the camera matrix, the central control module, the shadowless lighting system and the display module are started at the same time;
    B. Placing the human hand: the hand is placed on a transparent glass cover plate, and its position is adjusted so that all of the hand information falls within the acquisition field; because a shadowless lighting system is used, the hand information acquired from every angle is free of shadows; the device includes a virtual hand position that indicates where the hand is to be placed, ensuring that the whole hand falls within the acquisition range of the camera matrix;
    C. Setting parameters: the photographing parameters of the camera matrix can be set through the display interface;
    D. Acquiring information: once the parameters are set, the camera matrix is started to acquire information about the hand, and the acquired information is transmitted as images to the central control module for analysis and processing; the biometric information is acquired by a camera matrix composed of multiple visible light cameras, obtaining multiple biometric images;
    processing the multiple biometric images to extract the respective feature points of each image;
    E. Processing information: the signals acquired by the camera matrix are transmitted to the central control module for signal processing, and feature point cloud data of the biometric feature are generated on the basis of the feature points extracted from the multiple biometric images, including: matching the feature points according to the features of the extracted feature points, and establishing a matched feature point data set; calculating, from the optical information of the multiple visible light cameras, the spatial position of each camera relative to the biometric feature by the bundle adjustment method, and calculating the spatial depth information of the feature points in the multiple biometric images from those relative positions; and generating the feature point cloud data of the biometric feature from the matched feature point data set and the spatial depth information of the feature points;
    step E comprising:
    E.1 filtering the acquired images:
    since the main feature acquisition points of the human hand are concentrated at the fingertips, and the fingerprint at the fingertip is a uniquely identifying biometric feature, after the feature points of the fingers are acquired, the non-fingertip information must first be filtered out algorithmically; the overall approach of the algorithm is as follows:
    E.1.1 establishing a library file of the knuckle prints of the fingertip and the second finger segment, and a feature library of the finger knuckle prints;
    E.1.2 importing the feature library and performing feature recognition on the information acquired from the fingers;
    E.1.3 after feature recognition, computing the area of the feature region to determine the extent of the fingertip feature region;
    E.1.4 segmenting the image into the fingertip feature region and the non-finger feature region;
    E.1.5 removing the information of the non-finger feature region from the original image;
    E.1.6 applying further filtering to the information of the new feature region;
    E.2 extracting the feature points of the acquired images;
    E.3 matching the acquired images and calculating the spatial depth information;
    E.4 generating the feature point cloud data;
    constructing a 3D model of the biometric feature from the feature point cloud data so as to acquire the biometric 3D data, including: setting a reference size of the 3D model to be constructed; and determining, from the reference size and the spatial position information of the feature point cloud data, the spatial size of each feature point in the feature point cloud data, thereby constructing the 3D model of the biometric feature;
    recording the times at which the multiple visible light cameras acquire the biometric information, so that a 3D model of the biometric feature with a time dimension is constructed from the feature point cloud data and the time data, thereby acquiring four-dimensional biometric data.
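Step E.3 of claim 1 computes spatial depth information for matched feature points from the cameras' relative positions. As an illustrative sketch only — the claim itself uses bundle adjustment over a full camera matrix — the simplest two-camera case reduces to rectified stereo triangulation; the focal length, baseline and principal point below are assumed example values, not parameters from the disclosure:

```python
# Hypothetical two-camera sketch of step E.3: spatial depth of a matched
# feature point from the relative position (baseline) of two rectified
# visible light cameras. f, b, cx, cy are assumed example calibration values.

def feature_depth(x_left: float, x_right: float, f: float, b: float) -> float:
    """Depth Z = f * b / disparity for a matched feature point."""
    disparity = x_left - x_right  # pixel offset of the match between the views
    if disparity <= 0:
        raise ValueError("a visible matched point must have positive disparity")
    return f * b / disparity

def feature_point_3d(x_left, x_right, y, f, b, cx, cy):
    """Back-project the matched pixel into a 3D point (X, Y, Z)."""
    z = feature_depth(x_left, x_right, f, b)
    x = (x_left - cx) * z / f
    y3 = (y - cy) * z / f
    return (x, y3, z)
```

Accumulating such (X, Y, Z) triples over all matched feature points, together with each pixel's colour, would yield the feature point cloud data of step E.4.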
  2. The method according to claim 1, wherein the features of the respective feature points in the multiple biometric images are described by scale-invariant feature transform (SIFT) feature descriptors.
  3. The method according to claim 1, wherein the spatial depth information of the feature points in the multiple biometric images comprises spatial position information and color information.
  4. The method according to claim 1, wherein the 3D model of the biometric feature comprises at least one of the following types of 3D data:
    spatial shape feature data describing the 3D model;
    surface texture feature data describing the 3D model;
    surface material and lighting feature data describing the 3D model.
  5. The method according to claim 1, wherein, before the biometric information is acquired with the camera matrix composed of multiple visible light cameras, the method further comprises arranging the multiple visible light cameras by:
    building a support structure and providing an arc-shaped bearing structure on the support structure;
    arranging the multiple visible light cameras on the arc-shaped bearing structure.
  6. The method according to claim 5, wherein the support structure is a cabinet and the arc-shaped bearing structure is disposed inside the cabinet, the method further comprising:
    providing a transparent glass cover plate on the side of the cabinet facing the lenses of the multiple visible light cameras;
    when the hand of the living being is placed on the transparent glass cover plate, acquiring the hand information with the camera matrix composed of the multiple visible light cameras arranged on the arc-shaped bearing structure.
  7. The method according to claim 6, further comprising:
    before the hand information is acquired with the camera matrix composed of multiple visible light cameras, setting the photographing parameters of each camera through the display interface.
  8. The method according to any one of claims 1-7, wherein a multi-view stereo depth calculation method is used, specifically comprising:
    acquiring the biometric information with a camera matrix composed of multiple visible light cameras to obtain multiple biometric images;
    transmitting the multiple biometric images to a processing unit having a graphics processing unit (GPU) and a central processing unit (CPU);
    distributing the image information of the multiple biometric images to the blocks of the GPU for computation, and calculating the respective feature points of the multiple biometric images in combination with the centralized scheduling and allocation functions of the CPU.
  9. The method according to claim 8, wherein the GPU is a dual GPU, each GPU having multiple blocks.
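Claims 8 and 9 distribute the image data across GPU blocks while the CPU handles scheduling and allocation. The partitioning idea can be sketched in a CPU-only analogue — the per-block scoring function below is a hypothetical stand-in for the feature computation, not the SIFT extraction the device would actually run on the GPU:

```python
from concurrent.futures import ThreadPoolExecutor

def block_features(image, block_size=4, workers=2):
    """Split the image's rows into blocks and score every block concurrently,
    mimicking the dispatch of image regions to GPU blocks under CPU scheduling."""
    blocks = [image[i:i + block_size] for i in range(0, len(image), block_size)]

    def score(block):
        # Toy "feature response": mean absolute horizontal gradient of the block.
        total = count = 0
        for row in block:
            for left, right in zip(row, row[1:]):
                total += abs(right - left)
                count += 1
        return total / count if count else 0.0

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score, blocks))  # block order is preserved
```

On real hardware each block would be a CUDA thread block computing feature descriptors for its image tile; the design point the claims make is that the partitioning lets all tiles be processed in parallel while the CPU only coordinates.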
  10. A biometric 3D data recognition method, comprising the following steps:
    S01. acquiring biometric information:
    acquiring multiple biometric images of a living being with visible light cameras, and constructing a 3D model of the biometric feature from the multiple biometric images, so as to acquire the biometric 3D data of the living being;
    S02. storing the biometric 3D data:
    storing the acquired biometric 3D data with the identity information (I1, I2 ... In) of the living beings as identification marks, forming a database comprising multiple items of biometric 3D data (D1, D2 ... Dn);
    S03. identifying the target living being:
    acquiring the biometric 3D data (T1, T2 ... Tn) of the target living being, finding the biometric 3D data (D1, D2 ... Dn) stored in the database by means of the identity information (I1, I2 ... In) of the target living being, and comparing the biometric 3D data (T1, T2 ... Tn) of the target living being with the corresponding biometric 3D data (D1, D2 ... Dn) stored in the database, so as to identify the identity of the target living being;
    the comparison method comprising the following specific steps: performing feature point fitting by a method based on direct spatial matching, in which three or more feature points are selected as fitting key points in the corresponding rigid regions of the two point clouds and the corresponding feature points are matched directly through coordinate transformation; given a rough initial alignment of the two point clouds, seeking the rigid transformation between them that minimizes the alignment error;
    after the corresponding feature points are matched, aligning the point cloud data after a best fit of the overall surfaces;
    calculating the similarity by the least squares method;
    step S01 further comprising:
    acquiring the biometric information of the living being with a camera matrix composed of multiple visible light cameras, the camera matrix being arranged by:
    building a support structure and providing an arc-shaped bearing structure on the support structure;
    arranging the multiple visible light cameras on the arc-shaped bearing structure;
    obtaining multiple biometric images of the living being through the multiple visible light cameras;
    processing the multiple biometric images to extract the respective feature points of each image;
    generating feature point cloud data of the biometric feature on the basis of the feature points extracted from the multiple biometric images;
    constructing a 3D model of the biometric feature from the feature point cloud data, so as to acquire the biometric 3D data;
    the step of generating the feature point cloud data of the biometric feature on the basis of the extracted feature points further comprising:
    matching the feature points according to the features of the extracted feature points in the multiple biometric images, and establishing a matched feature point data set;
    calculating, from the optical information of the visible light cameras, the spatial position of each visible light camera relative to the biometric feature, and calculating the spatial depth information of the feature points in the multiple biometric images from those relative positions;
    generating the feature point cloud data of the biometric feature from the matched feature point data set and the spatial depth information of the feature points;
    the features of the respective feature points in the multiple biometric images being described by scale-invariant feature transform (SIFT) feature descriptors;
    the spatial position of each visible light camera relative to the biometric feature being calculated from the optical information of the multiple visible light cameras by the bundle adjustment method;
    the spatial depth information of the feature points in the multiple biometric images comprising spatial position information and color information;
    the step of constructing the 3D model of the biometric feature from the feature point cloud data further comprising:
    setting a reference size of the 3D model to be constructed;
    determining, from the reference size and the spatial position information of the feature point cloud data, the spatial size of each feature point in the feature point cloud data, thereby constructing the 3D model of the biometric feature;
    the 3D model of the biometric feature comprising at least one of the following types of 3D data:
    spatial shape feature data describing the 3D model;
    surface texture feature data describing the 3D model;
    surface material and lighting feature data describing the 3D model;
    when the biometric information is head information, facial information and/or iris information, the method further comprising:
    building a base connected to the support structure, and providing on the base a seat that positions the human body for photographing;
    when the human body is seated on the seat, acquiring the head information, facial information and/or iris information of the human body with the camera matrix composed of the multiple visible light cameras arranged on the arc-shaped bearing structure.
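The comparison in claim 10 ends with a similarity computed by the least squares method after the two point clouds have been aligned. A minimal sketch of that final scoring step, assuming the clouds are already in a common coordinate frame after the rigid fitting; the acceptance threshold is an arbitrary example value, not one from the disclosure:

```python
import math

def rms_alignment_error(cloud_a, cloud_b):
    """Least-squares residual between two pre-aligned 3D point clouds:
    root-mean-square distance from each point of A to its nearest point of B."""
    def nearest_sq(p, cloud):
        return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
                   for q in cloud)
    return math.sqrt(sum(nearest_sq(p, cloud_b) for p in cloud_a) / len(cloud_a))

def is_match(cloud_a, cloud_b, threshold=1.0):
    """Accept the identity claim when the residual is below the tolerance."""
    return rms_alignment_error(cloud_a, cloud_b) <= threshold
```

A small residual means the target's point cloud T fits the stored cloud D closely after the best-fit transformation, which is what the claim treats as a successful identification.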
  11. The method according to claim 10, wherein the living being is a human body, and the identity information comprises one or more of a name, a gender, an age and a document number.
  12. The method according to claim 11, wherein the document number comprises one or more of an identity card number, a passport number, a driver's license number, a social security number and a military officer card number.
  13. The method according to claim 10, wherein:
    a display is provided on the arc-shaped bearing structure;
    after the 3D model of the head, face and/or iris is constructed, the 3D data are displayed visually on the display;
    before the head information, facial information and/or iris information are acquired with the camera matrix composed of multiple visible light cameras, the photographing parameters of each visible light camera are set through the display interface.
PCT/CN2019/074455 2018-02-14 2019-02-01 Biological feature 3d data acquisition method and biological feature 3d data recognition method WO2019157989A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810152242.0A CN108446597B (en) 2018-02-14 2018-02-14 A kind of biological characteristic 3D collecting method and device based on Visible Light Camera
CN201810152242.0 2018-02-14
CN201810211276.2A CN108416312B (en) 2018-03-14 2018-03-14 A kind of biological characteristic 3D data identification method taken pictures based on visible light
CN201810211276.2 2018-03-14

Publications (1)

Publication Number Publication Date
WO2019157989A1 true WO2019157989A1 (en) 2019-08-22

Family

ID=67618877

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/074455 WO2019157989A1 (en) 2018-02-14 2019-02-01 Biological feature 3d data acquisition method and biological feature 3d data recognition method

Country Status (1)

Country Link
WO (1) WO2019157989A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102201061A (en) * 2011-06-24 2011-09-28 常州锐驰电子科技有限公司 Intelligent safety monitoring system and method based on multilevel filtering face recognition
US20160093055A1 (en) * 2014-09-29 2016-03-31 Canon Kabushiki Kaisha Information processing apparatus, method for controlling same, and storage medium
CN108446597A (en) * 2018-02-14 2018-08-24 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D collecting methods and device based on Visible Light Camera
CN108416312A (en) * 2018-03-14 2018-08-17 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D data identification methods and system taken pictures based on visible light

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG FU: "Research on facial recognition system based on 3d modeling", CHINESE MASTER'S THESES, no. 5, 15 May 2011 (2011-05-15), pages 1138 - 1878 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065502A (en) * 2019-12-12 2021-07-02 天目爱视(北京)科技有限公司 3D information acquisition system based on standardized setting
CN113160287A (en) * 2021-03-17 2021-07-23 华中科技大学 Complex component point cloud splicing method and system based on feature fusion
CN113160287B (en) * 2021-03-17 2022-04-22 华中科技大学 Complex component point cloud splicing method and system based on feature fusion
CN113344986A (en) * 2021-08-03 2021-09-03 深圳市信润富联数字科技有限公司 Point cloud registration result evaluation method, device, equipment and storage medium
CN113344986B (en) * 2021-08-03 2021-11-09 深圳市信润富联数字科技有限公司 Point cloud registration result evaluation method, device, equipment and storage medium
RU2789609C1 (en) * 2021-12-17 2023-02-06 Ооо "Мирп-Ис" Method for tracking, detection and identification of objects of interest and autonomous device with protection from copying and hacking for their implementation
CN114724261A (en) * 2022-04-15 2022-07-08 澜途集思生态科技集团有限公司 SNIP algorithm-based ecological organism identification method

Similar Documents

Publication Publication Date Title
CN108549873A (en) Three-dimensional face identification method and three-dimensional face recognition system
WO2019157989A1 (en) Biological feature 3d data acquisition method and biological feature 3d data recognition method
US20150347833A1 (en) Noncontact Biometrics with Small Footprint
KR101007276B1 (en) Three dimensional face recognition
Paysan et al. A 3D face model for pose and illumination invariant face recognition
Wechsler Reliable Face Recognition Methods: System Design, Impementation and Evaluation
US7512255B2 (en) Multi-modal face recognition
JP4284664B2 (en) Three-dimensional shape estimation system and image generation system
JP6207210B2 (en) Information processing apparatus and method
CN108416312B (en) A kind of biological characteristic 3D data identification method taken pictures based on visible light
JP4780198B2 (en) Authentication system and authentication method
CN109670390A (en) Living body face recognition method and system
CN109766876A (en) Contactless fingerprint acquisition device and method
JP2007058393A (en) Authentication device, authentication method and program
CN108446597B (en) A kind of biological characteristic 3D collecting method and device based on Visible Light Camera
CN108470166A (en) A kind of biological characteristic 3D 4 D datas recognition methods and system based on laser scanning
CN108550184A (en) A kind of biological characteristic 3D 4 D datas recognition methods and system based on light-field camera
JP2007058401A (en) Authentication device, authentication method, and program
CN108520230A (en) A kind of 3D four-dimension hand images data identification method and equipment
Ferková et al. Age and gender-based human face reconstruction from single frontal image
Bastias et al. A method for 3D iris reconstruction from multiple 2D near-infrared images
US11734948B2 (en) Device and method for contactless fingerprint acquisition
CN209401042U (en) Contactless fingerprint acquisition device
Abate et al. Fast 3D face recognition based on normal map
CN108470150A (en) A kind of biological characteristic 4 D data acquisition method and device based on Visible Light Camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19754595

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30/11/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 19754595

Country of ref document: EP

Kind code of ref document: A1