CN114356102A - Three-dimensional object absolute attitude control method and device based on fingerprint image - Google Patents

Three-dimensional object absolute attitude control method and device based on fingerprint image

Info

Publication number
CN114356102A
CN114356102A (application CN202210114014.0A)
Authority
CN
China
Prior art keywords
dimensional
fingerprint
finger
verification
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210114014.0A
Other languages
Chinese (zh)
Inventor
冯建江 (Feng Jianjiang)
周杰 (Zhou Jie)
段永杰 (Duan Yongjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202210114014.0A
Publication of CN114356102A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/038 Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G06F 3/0383 Signal control means within the pointing device

Abstract

The invention provides a three-dimensional object absolute attitude control method and device based on fingerprint images. The method comprises: collecting fingerprint images of the current finger in real time; estimating the three-dimensional attitude of the current finger from the image sequence; mapping the three-dimensional attitude of the finger to a three-dimensional input signal and controlling the absolute attitude of a target object in three-dimensional space according to that signal; and taking the controlled absolute attitude as the target attitude and calculating the transformation path that brings the controlled object to it. The three-dimensional absolute attitude control method effectively extends existing human-computer interaction modes and makes it convenient to control the attitude of objects in three-dimensional space.

Description

Three-dimensional object absolute attitude control method and device based on fingerprint image
Technical Field
The invention relates to the field of human-computer interaction, and in particular to the problem of three-dimensional attitude control.
Background
With the continuous development of human-computer interaction technology, the ways in which humans and machines interact have kept evolving. In fields such as games, virtual reality (VR), security monitoring, spacecraft control, vehicle control, robot control and three-dimensional design, objects such as cameras, 3D models, aircraft and automobiles mostly exist in three-dimensional space, so accurately and efficiently controlling the three-dimensional attitude of a target object is a very important requirement. Few existing object attitude control systems directly output three-dimensional control signals; typically a conventional mouse serves as the control input device, and the attitude of the target object is controlled through the direction, speed and displacement of the mouse on a two-dimensional plane. Intuitively, directly using control signals in three-dimensional space better matches human intuition, and can therefore substantially improve the precision and speed of the control system.
In the existing three-dimensional target object attitude control scheme, the following limitations and disadvantages still exist:
a three-dimensional attitude control system based on 2D control signals must separately design the mapping between the 2D signal and the 3D attitude control; the control mode therefore does not match human intuition, the user needs a certain amount of training before use, and during use the attitude change of the target object must be constantly monitored to sense the current control feedback;
besides the traditional mouse, 3D mice inspired by spacecraft control have appeared to handle complex three-dimensional attitude control scenarios, but most of these devices require complex and delicate mechanical designs, making them expensive to manufacture, hard to integrate with existing human-computer interaction devices, and ill-suited to mass consumer markets.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
To this end, a first object of the present invention is to provide a fingerprint image-based three-dimensional object absolute attitude control method for facilitating object attitude control in a three-dimensional space.
The second purpose of the invention is to provide a three-dimensional object absolute attitude control device based on fingerprint images.
In order to achieve the above object, an embodiment of the first aspect of the present invention provides a fingerprint image-based method for controlling the absolute attitude of a three-dimensional object, comprising: collecting fingerprint images of the current finger in real time; estimating the three-dimensional attitude of the current finger from the image sequence; mapping the three-dimensional attitude of the finger to a three-dimensional input signal and controlling the absolute attitude of a target object in three-dimensional space according to that signal; and taking the controlled absolute attitude as the target attitude and calculating the transformation path that brings the controlled object to it.
According to the fingerprint image-based absolute attitude control method of the embodiment of the invention, given a fingerprint image of a finger collected on some interaction device, the three-dimensional attitude of the finger is first predicted and can then be used to control a target object in the real or virtual world, forming a novel human-computer interaction system. The method predicts the three-dimensional attitude of the current finger from the 2D fingerprint image acquired by the sensor, and then uses that finger attitude as the input control signal for attitude control of the target object. The input of the three-dimensional absolute attitude control system is thus a 2D fingerprint image, and its output is the three-dimensional attitude of the current finger, which controls the three-dimensional absolute attitude of the target object. The system effectively extends existing human-computer interaction modes and makes it convenient to control the attitude of objects in three-dimensional space.
In addition, the fingerprint image-based three-dimensional object absolute attitude control method according to the above-described embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the acquiring a fingerprint image of a current finger in real time further includes:
preprocessing the fingerprint image sequence to remove background noise and enhance the fingerprint ridges.
Further, in an embodiment of the present invention, mapping the three-dimensional posture of the current finger to a three-dimensional input signal, and controlling an absolute posture of the target object in a three-dimensional space according to the three-dimensional input signal includes:
flexibly selecting the mapping between the attitude of the finger and the attitude of the controlled object as required, including mapping the roll angle of the finger to the yaw angle of the object and the pitch angle of the finger to the roll angle of the object while using mapping functions of the same precision;
or mapping two angles of the finger simultaneously to the same angle of the object while using mapping functions of different precisions.
Further, in an embodiment of the present invention, before controlling the absolute pose of the target object in the three-dimensional space according to the three-dimensional input signal, the method further includes:
performing fingerprint identification verification on the fingerprint images, including frame-by-frame verification and first-frame verification; in frame-by-frame verification, every frame must pass fingerprint identification before subsequent object pose control is allowed; in first-frame verification, fingerprint identification is performed only on the first frame, and after it passes, subsequent object pose control continues as long as the finger does not leave the control device.
Further, in an embodiment of the present invention, the method further includes:
setting a respective angle mapping function for each enrolled fingerprint, and calling the corresponding mapping function based on the fingerprint identification result.
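As an illustration of per-fingerprint mapping functions, the minimal sketch below selects a mapping scale by the fingerprint-recognition result; the profile table, names and values are hypothetical, not taken from the patent.

```python
# Hypothetical per-user table of angle-mapping scale factors, keyed by the
# identity returned from fingerprint recognition.
PROFILES = {"alice": 2.0, "bob": 4.0}  # illustrative scale factors

def select_scale(user_id: str, default: float = 1.0) -> float:
    """Return the mapping scale registered for the recognized user."""
    return PROFILES.get(user_id, default)
```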
In order to achieve the above object, an embodiment of the second aspect of the present invention provides a fingerprint image-based three-dimensional object absolute attitude control apparatus, comprising: an acquisition module for collecting fingerprint images of the current finger in real time; a prediction module for estimating the three-dimensional attitude of the current finger from the image sequence; a mapping module for mapping the three-dimensional attitude of the finger to a three-dimensional input signal and controlling the absolute attitude of the target object in three-dimensional space according to that signal; and a resolving module for taking the controlled absolute attitude as the target attitude and calculating the transformation path that brings the controlled object to it.
Further, in an embodiment of the present invention, the acquisition module is further configured to:
preprocessing the fingerprint image sequence to remove background noise and enhance the fingerprint ridges.
Further, in an embodiment of the present invention, the mapping module is further configured to:
before controlling the absolute pose of the target object in three-dimensional space according to the three-dimensional input signal,
performing fingerprint identification verification on the fingerprint images, including frame-by-frame verification and first-frame verification; in frame-by-frame verification, every frame must pass fingerprint identification before subsequent object pose control is allowed; in first-frame verification, fingerprint identification is performed only on the first frame, and after it passes, subsequent object pose control continues as long as the finger does not leave the control device.
Further, in an embodiment of the present invention, the apparatus further includes a verification module, configured to:
setting a respective angle mapping function for each enrolled fingerprint, and calling the corresponding mapping function based on the fingerprint identification result.
Further, in an embodiment of the present invention, the apparatus further includes a personalization control module, configured to:
combining fingerprint identification technology with fingerprint-based three-dimensional pose control so as to realize personalized control parameters.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a method for controlling an absolute pose of a three-dimensional object based on a fingerprint image according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a three-dimensional object absolute attitude control apparatus based on a fingerprint image according to an embodiment of the present invention.
Fig. 3 is a flowchart of a three-dimensional absolute pose control system based on a fingerprint image according to an embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating a finger gesture definition according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a coverage area of a common finger gesture provided in an embodiment of the present invention.
Fig. 6 is a schematic diagram of a linear control signal curve according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a nonlinear control signal curve according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of a pose control system for frame-by-frame verification according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of an attitude control system for first frame verification according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of an adaptive gesture control system based on fingerprint verification according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The method and apparatus for controlling the absolute pose of a three-dimensional object based on a fingerprint image according to an embodiment of the present invention will be described with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a method for controlling an absolute pose of a three-dimensional object based on a fingerprint image according to an embodiment of the present invention.
As shown in fig. 1, the method for controlling the absolute posture of the three-dimensional object based on the fingerprint image comprises the following steps:
s1: collecting a fingerprint image of a current finger in real time;
With the continuous development of fingerprint sensing technology, fingerprint images can be obtained not only with traditional optical fingerprint scanners but also with acquisition devices integrated into a variety of existing touch-screen devices. The ridge information of a fingerprint is rich in finger information, including the attitude of the finger at acquisition time, so the three-dimensional attitude of the finger can be estimated from the fingerprint image. Using the finger attitude as a three-dimensional input signal makes it convenient and intuitive to control the three-dimensional attitude of a target object, where target objects include those requiring multi-degree-of-freedom control such as cameras, 3D models, robot arms, vehicles and spacecraft. The fingerprint-based finger attitude signal can also be combined with various human-computer interaction techniques (such as touch-screen gestures) to further broaden the diversity of interaction modes, and a control system that incorporates fingerprint identity information additionally gains security and supports personalized settings.
The invention is divided into three stages: fingerprint image acquisition, finger attitude estimation, and target object attitude control. In the first stage, fingerprint images are acquired by a fingerprint acquisition device; because human-computer interaction generally demands high real-time performance, acquisition is performed in real time. The acquired fingerprint image contains the three-dimensional attitude information of the current finger, making this stage the foundation of the invention. In the second stage, the three-dimensional attitude of the current finger is estimated from the acquired fingerprint image and serves as the input signal of the subsequent control system. In the third stage, the predicted three-dimensional finger attitude is used as the control signal that specifies the target attitude of the controlled object in three-dimensional space, and the controlled object reaches the user-specified target attitude through attitude path planning. The overall flow is shown in Fig. 3. The invention also provides a way to combine fingerprint identification with attitude control to achieve secure and personalized control.
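The three stages above can be sketched as a simple control loop. All names below (`capture_frame`, `estimate_finger_pose`, `apply_pose`, `FingerPose`) are hypothetical stand-ins for illustration, not interfaces defined by the patent.

```python
# Minimal sketch of the three-stage pipeline: acquire -> estimate -> control.
from dataclasses import dataclass

@dataclass
class FingerPose:
    roll: float   # alpha, degrees
    pitch: float  # beta, degrees
    yaw: float    # gamma, degrees

def control_loop(capture_frame, estimate_finger_pose, apply_pose):
    """Stage 1: acquire a frame; stage 2: estimate the finger pose;
    stage 3: drive the target object's absolute attitude."""
    while True:
        frame = capture_frame()             # stage 1: real-time fingerprint image
        if frame is None:                   # finger lifted: interaction ends
            break
        pose = estimate_finger_pose(frame)  # stage 2: 2D image -> 3D finger pose
        apply_pose(pose)                    # stage 3: set target object attitude
```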
Compared with the capacitive images of traditional touch-screen devices, fingerprint images contain richer finger shape and attitude information and can therefore be used to accurately estimate the attitude of the finger in three-dimensional space. Many sensing technologies can acquire fingerprint images, such as optical fingerprint scanners, under-screen optical sensors and under-screen ultrasonic sensors, and the present invention applies to all of them. Since the estimated three-dimensional finger attitude must control the three-dimensional attitude of the target object accurately and in real time, the fingerprint image data must be acquired in real time, i.e., as a sequence of fingerprint images.
S2: estimating the three-dimensional attitude of the current finger from the image sequence;
further, in an embodiment of the present invention, the acquiring a fingerprint image of a current finger in real time further includes:
preprocessing the fingerprint image sequence to remove background noise and enhance the fingerprint ridges.
From information such as the ridge orientation and the shape of the fingerprint region in the fingerprint image, the three-dimensional attitude of the current finger can be estimated. A typical finger attitude estimation algorithm is described below only to aid understanding of this stage. Specifically, the three-dimensional attitude of the current finger is estimated from the acquired fingerprint image and serves as the subsequent control signal.
Because the modality and quality of fingerprint images vary greatly across sensors, the acquired images are first preprocessed to remove background noise and enhance the fingerprint ridges. The three-dimensional attitude of the current finger is then estimated from the preprocessed fingerprint image. Taking a deep-learning attitude estimation algorithm as an example, its input is the preprocessed fingerprint image and its output is the predicted three-dimensional finger attitude, defined as shown in Fig. 4 and represented by three angles: roll angle alpha, pitch angle beta and yaw angle gamma.
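As an illustration of the preprocessing step, the sketch below standardizes a raw fingerprint image and clips outliers. Real systems typically use ridge-oriented enhancement (e.g. Gabor filtering); this stand-in only shows where preprocessing sits in the pipeline and makes no claim about the patent's actual algorithm.

```python
# Minimal preprocessing sketch: per-image normalization with outlier clipping.
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Zero-mean / unit-variance normalization, clipped to suppress
    extreme background values before pose estimation."""
    img = img.astype(np.float64)
    img = (img - img.mean()) / (img.std() + 1e-8)
    return np.clip(img, -3.0, 3.0)
```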
S3: mapping the three-dimensional posture of the current finger into a three-dimensional input signal, and controlling the absolute posture of a target object in a three-dimensional space according to the three-dimensional input signal;
further, in an embodiment of the present invention, mapping the three-dimensional posture of the current finger to a three-dimensional input signal, and controlling an absolute posture of the target object in a three-dimensional space according to the three-dimensional input signal includes:
flexibly selecting the mapping relation between the gesture of the finger and the gesture of the controlled object according to the requirement, wherein the mapping relation comprises that the roll angle of the finger is mapped into the yaw angle of the object, the pitch angle of the finger is mapped into the roll angle of the object, and a mapping function with the same precision is used;
two angles of the finger are simultaneously mapped to the same angle of the object, and mapping functions with different precisions are used.
The absolute attitude control of the target object is divided into two steps: mapping the finger attitude to the object attitude control signal, and computing the attitude transformation.
First, the three-dimensional attitude of the finger, P = (p_alpha, p_beta, p_gamma), is obtained from the fingerprint image; this attitude is then mapped to a three-dimensional control signal for controlling the three-dimensional attitude of the target object (e.g., a camera or a 3D model). Because different users have different finger operation habits, a zero point P_0 of the finger attitude must first be specified. The zero point can be defined in various ways: it can be set to a fixed value (for example, roll angle 0 degrees, pitch angle -45 degrees, yaw angle 0 degrees), or a specific frame of the fingerprint sequence can be selected (e.g., the first frame acquired at the beginning of the interaction) and the finger attitude estimated from it used as P_0. Taking the latter definition as an example, the finger attitude estimated from each frame is first expressed relative to the zero point P_0:

P' = (p'_alpha, p'_beta, p'_gamma),

and then mapped to the input signal of the attitude control system. Because the range of attitudes that a finger can comfortably reach is limited (see Fig. 5), the finger attitude used as the control signal must be mapped onto the full control range so that the three-dimensional absolute attitude of the target object can be controlled completely. Letting Q = (q_alpha, q_beta, q_gamma) denote the target attitude of the controlled object, the following mapping is used:

q_alpha = g1(p'_alpha)
q_beta = g2(p'_beta)
q_gamma = g3(p'_gamma)

where g1(·), g2(·) and g3(·) are the mapping functions for the three angles. The present invention takes the simple linear mapping function g(x) = sx as an example (as shown in Fig. 6).
When |s| = 1, g(·) is an identity mapping, and the estimated three-dimensional finger attitude is used directly as the target attitude of the controlled object; since the attitude interval reachable by the finger is limited (as shown in Fig. 5), the controllable three-dimensional attitude range is likewise limited.
When |s| < 1, g(·) is a compression mapping: the estimated finger attitude values are scaled down to form the target attitude, so a large range of finger motion produces a small change of the controlled object's attitude, allowing finer attitude control.
When |s| > 1, g(·) is an amplification mapping: the estimated finger attitude values are scaled up, so a small range of finger motion produces a large attitude change, allowing coarse, fast control over a wide range and quicker convergence of the target object to the desired three-dimensional attitude.
Another option is a nonlinear mapping which, as shown in Fig. 7 for example, quantizes the finger attitude into a number of discrete object attitudes. The object can then be adjusted quickly and accurately to common attitudes without regard to slight differences in finger attitude.
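A minimal sketch of such a quantized mapping follows; the 90-degree step is illustrative only and not taken from the patent.

```python
# Snap a continuous relative finger angle to the nearest preset object attitude.
def quantize_angle(p: float, step: float = 90.0) -> float:
    """Quantize a relative finger angle (degrees) to the nearest multiple of `step`."""
    return round(p / step) * step
```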
Considering that the same finger has different comfortable ranges for the different angles (roll, pitch and yaw) and that these ranges contain different amounts of fingerprint information, each angle may use its own mapping function. For example, since the pitch angle of a finger can only vary over a small range (about 90 degrees), an amplification mapping with s = 4 on the pitch angle extends the controllable attitude range of the object to 360 degrees. The effective range of the yaw angle may be 360 degrees, but its comfortable range is about 90 degrees, so an amplification mapping with s = 4 can likewise be used. The roll angle ranges over about 180 degrees, so an amplification mapping with s = 2 is used.
There are many possible mappings between the finger attitude and the attitude of the controlled object, and the mapping can be chosen flexibly as required without keeping the angle names consistent. For example, the roll angle of the finger may be mapped to the yaw angle of the object, the pitch angle of the finger to the roll angle of the object, or even two angles of the finger simultaneously to the same angle of the object with mapping functions of different precision, so that one finger realizes attitude control at several precisions.
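The per-angle linear mapping discussed above can be sketched as follows, using the example scale factors from the text (s = 2 for roll, s = 4 for pitch and yaw) and a zero point P0 taken from the first frame. The function name and tuple representation are assumptions for illustration.

```python
# q = s * p' for each angle, where p' = p - p0 is the finger attitude
# relative to the zero point P0. All angles in degrees, order (roll, pitch, yaw).
def map_pose(p, p0, scales=(2.0, 4.0, 4.0)):
    """Map a finger pose p, relative to zero point p0, to the object pose Q."""
    return tuple(s * (a - a0) for s, a, a0 in zip(scales, p, p0))
```

Note that a compression mapping is obtained simply by passing scale factors with magnitude below 1.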
S4: taking the controlled absolute attitude as the target attitude and calculating the transformation path that brings the controlled object to it.
After the target attitude of the controlled object is given, the object must be transformed to the attitude specified by the finger. This step differs across controlled objects: a three-dimensional object in the virtual world can be switched to the target attitude directly, without regard to physical constraints, whereas a real-world object such as a robot arm or a surveillance camera cannot change attitude instantaneously, so a path must be planned from the target attitude and the optimal trajectory for reaching it computed. This is a mature problem in the field of robot control with many established algorithms; only a simple transformation path is described here for illustration. First, the relative attitude transformation from the current object attitude to the target attitude is computed and expressed in axis-angle form as an axis omega and an angle theta, meaning that rotating the controlled object from its current attitude about the axis omega by the angle theta reaches the target attitude. According to this attitude transformation formula, interpolating the rotation over time transforms the object attitude smoothly to the target attitude within a given time interval.
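The time-interpolated axis-angle transformation described above can be sketched with Rodrigues' rotation formula. The function names are illustrative, and the patent does not prescribe this particular implementation.

```python
# Interpolate a rotation about a fixed axis omega by a fraction t*theta at
# each step, reaching the target attitude at t = 1.
import numpy as np

def rodrigues(axis: np.ndarray, angle: float) -> np.ndarray:
    """Rotation matrix for a rotation of `angle` radians about `axis`."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def interpolate_path(axis, theta, steps=10):
    """Intermediate rotations from the current pose (t=0) to the target (t=1)."""
    return [rodrigues(axis, t * theta) for t in np.linspace(0.0, 1.0, steps)]
```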
Several common target object control applications of the three-dimensional absolute attitude control system of the invention are exemplified below; the objects may exist in a virtual or real world. For a three-dimensional object model, the roll, pitch and yaw angles of the finger can simply be mapped to the roll, pitch and yaw angles of the object. For camera control, the finger's roll angle can be mapped to camera roll, its pitch angle to the camera's up-down pitch, and its yaw angle to left-right rotation. For PTZ camera control, the finger's pitch angle can be mapped to the camera's up-down pitch and its yaw angle to left-right rotation. For quadrotor aircraft control, the finger's roll angle can be mapped to left-right roll and displacement, and its pitch angle to forward-backward pitch and displacement. For fixed-wing aircraft control the mapping is even more direct: the finger's roll angle maps to roll control and its pitch angle to pitch control.
Further, in an embodiment of the present invention, before controlling the absolute pose of the target object in the three-dimensional space according to the three-dimensional input signal, the method further includes:
performing fingerprint identification verification on the fingerprint image, which includes frame-by-frame verification and first-frame verification. In frame-by-frame verification, every frame of the fingerprint image must pass fingerprint identification before subsequent object posture control is allowed. In first-frame verification, fingerprint identification is performed only on the first frame of the fingerprint image; after verification passes, subsequent object posture control continues as long as the finger does not leave the control device.
In addition, combining existing fingerprint recognition technology with fingerprint-based posture control can strengthen the security and privacy authority control of the control system. For example, in a highly confidential control system, only legitimate registered users are allowed to perform control. Two methods of introducing fingerprint recognition are described below. FIG. 8 is a schematic diagram of the frame-by-frame verification scheme, in which each frame of the image must pass the fingerprint recognition system before subsequent object posture control is enabled. FIG. 9 is a schematic diagram of the first-frame verification scheme, in which fingerprint recognition is performed only on the first frame captured when the finger is pressed down; after verification passes, subsequent object posture control continues as long as the finger does not leave the control device, and once the finger leaves, authentication must be performed again.
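The two verification policies can be sketched as a small session state machine. This is an illustrative sketch under stated assumptions: `match_fingerprint` stands in for a real fingerprint matcher, and all names are hypothetical:

```python
class PoseController:
    """Gate posture control behind one of two verification policies:
    'per_frame' (every frame must match) or 'first_frame' (only the
    first frame of a press must match; lifting the finger resets)."""

    def __init__(self, policy, match_fingerprint):
        assert policy in ("per_frame", "first_frame")
        self.policy = policy
        self.match = match_fingerprint
        self.session_verified = False  # valid only while the finger stays down

    def on_frame(self, frame, finger_down):
        """Return True if posture control is allowed for this frame."""
        if not finger_down:
            # finger lifted: a first-frame session must re-authenticate
            self.session_verified = False
            return False
        if self.policy == "per_frame":
            return self.match(frame)        # verify every single frame
        if not self.session_verified:       # first frame of a new press
            self.session_verified = self.match(frame)
        return self.session_verified
```

With the `first_frame` policy, later frames in the same press are accepted without re-matching, which trades some security for latency; the `per_frame` policy re-verifies on every frame.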
Further, in an embodiment of the present invention, the method further includes:
setting respective angle mapping functions for different fingerprint images, and calling the corresponding angle mapping functions based on the fingerprint identification result.
In addition, combining fingerprint recognition technology with fingerprint-based posture control enables personalized control parameters. For example, a respective angle mapping function can be set for each of a plurality of registered fingerprints (fingerprints of different users, or different fingers of one user). The corresponding angle mapping function is called based on the fingerprint recognition result, and a default angle mapping function is used for unregistered fingers (as shown in FIG. 10).
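A minimal registry of per-fingerprint angle mapping functions might look as follows. The identifiers and gain values are illustrative assumptions, not values from the patent:

```python
# Default mapping used for unregistered fingers: identity.
DEFAULT_MAPPING = lambda angle: angle

# Per-fingerprint mappings, keyed by a hypothetical recognition result ID.
registered_mappings = {
    "user_a_index": lambda angle: 2.0 * angle,   # coarse, fast control
    "user_b_thumb": lambda angle: 0.5 * angle,   # fine, precise control
}

def angle_for(finger_id, raw_angle):
    """Apply the mapping registered for this fingerprint, falling back
    to the default mapping for unregistered fingers."""
    return registered_mappings.get(finger_id, DEFAULT_MAPPING)(raw_angle)
```

The lookup-with-fallback mirrors FIG. 10: recognized fingers get their personalized mapping, everyone else gets the default.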
According to the fingerprint image-based absolute posture control method for a three-dimensional object provided by the embodiment of the present invention, given a fingerprint image of a finger collected on an interaction device, the three-dimensional posture of the finger is first predicted from the 2D fingerprint image acquired by the sensor, and this posture is then used as an input control signal to control the posture of a target object in the real or virtual world, thereby serving as a novel human-computer interaction system. In the present invention, the input of the three-dimensional absolute posture control system is a 2D fingerprint image, and the output is the three-dimensional posture of the current finger, which is used to control the three-dimensional absolute posture of the target object. The system effectively extends existing human-computer interaction modes and makes it convenient to control the posture of an object in three-dimensional space.
In order to implement the above embodiments, the present invention further provides a three-dimensional object absolute attitude control apparatus based on a fingerprint image.
Fig. 2 is a schematic structural diagram of a three-dimensional object absolute attitude control apparatus based on a fingerprint image according to an embodiment of the present invention.
As shown in fig. 2, the fingerprint image-based three-dimensional object absolute attitude control apparatus includes an acquisition module 10, a prediction module 20, a mapping module 30 and a resolving module 40. The acquisition module is used for acquiring a fingerprint image of the current finger in real time; the prediction module is used for inferring the three-dimensional posture of the current finger from the sequence of images; the mapping module is used for mapping the three-dimensional posture of the current finger to a three-dimensional input signal and controlling the absolute posture of the target object in three-dimensional space according to the three-dimensional input signal; and the resolving module is used for taking the absolute posture obtained after control as the target posture and calculating a transformation path for transforming the controlled object to the target posture.
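One possible decomposition of the four modules into a single control step is sketched below. The function names are placeholders for the acquisition, prediction, mapping and path-resolving stages described above, not identifiers from the patent:

```python
def control_step(sensor, predict_pose, map_pose, plan_path, current_pose):
    """One pass through the four-module pipeline of FIG. 2."""
    frame = sensor()                      # acquisition: capture a frame
    finger_pose = predict_pose(frame)     # prediction: 2D image -> 3D finger pose
    target_pose = map_pose(finger_pose)   # mapping: finger pose -> target pose
    # resolving: transformation path from the current pose to the target
    return plan_path(current_pose, target_pose)
```

Wiring the modules as plain callables keeps each stage independently replaceable, e.g. swapping in a different pose predictor or path planner.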
Further, in an embodiment of the present invention, the acquisition module is further configured to:
preprocessing the fingerprint sequence images to remove background noise and enhance fingerprint ridges.
Further, in an embodiment of the present invention, the mapping module is further configured to:
before controlling the absolute pose of the target object in three-dimensional space according to the three-dimensional input signal,
performing fingerprint identification verification on the fingerprint image, which includes frame-by-frame verification and first-frame verification. In frame-by-frame verification, every frame of the fingerprint image must pass fingerprint identification before subsequent object pose control is allowed. In first-frame verification, fingerprint identification is performed only on the first frame of the fingerprint image; after verification passes, subsequent object pose control continues as long as the finger does not leave the control device.
Further, in an embodiment of the present invention, the apparatus further includes a verification module, configured to:
setting respective angle mapping functions for different fingerprint images, and calling the corresponding angle mapping functions based on the fingerprint identification result.
Further, in an embodiment of the present invention, the apparatus further includes a personalization control module, configured to:
combining fingerprint identification technology with fingerprint-based three-dimensional pose control to realize individualized control parameters.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A three-dimensional object absolute attitude control method based on fingerprint images is characterized by comprising the following steps:
collecting a fingerprint image of a current finger in real time;
inferring the three-dimensional posture of the current finger from the sequence of images;
mapping the three-dimensional posture of the current finger into a three-dimensional input signal, and controlling the absolute posture of a target object in a three-dimensional space according to the three-dimensional input signal;
and taking the absolute attitude obtained after the control as a target attitude, and calculating a transformation path for transforming the controlled object to the target attitude.
2. The method of claim 1, wherein the acquiring of the fingerprint image of the current finger in real time further comprises:
and preprocessing the fingerprint sequence image, removing background noise and enhancing fingerprint ridges.
3. The method of claim 1, wherein mapping the three-dimensional pose of the current finger to a three-dimensional input signal from which to control an absolute pose of a target object in three-dimensional space comprises:
flexibly selecting the mapping relation between the posture of the finger and the posture of the controlled object as required, wherein the mapping relation comprises: mapping the roll angle of the finger to the yaw angle of the object and the pitch angle of the finger to the roll angle of the object, using mapping functions of the same precision; or
mapping two angles of the finger simultaneously to the same angle of the object, using mapping functions of different precisions.
4. The method of claim 1, further comprising, prior to controlling an absolute pose of a target object in three-dimensional space according to the three-dimensional input signal:
performing fingerprint identification verification on the fingerprint image, which includes frame-by-frame verification and first-frame verification; the frame-by-frame verification comprises performing fingerprint identification verification on every frame of the fingerprint image, subsequent object attitude control being allowed only after verification passes; the first-frame verification comprises performing fingerprint identification on the first frame of the fingerprint image, subsequent object attitude control continuing after verification passes as long as the finger does not leave the control device.
5. The method of claim 1 or 4, further comprising:
setting respective angle mapping functions for different fingerprint images, and calling corresponding angle mapping functions based on the fingerprint identification result.
6. A three-dimensional object absolute attitude control apparatus based on a fingerprint image, characterized by comprising:
the acquisition module acquires a fingerprint image of a current finger in real time;
the prediction module is used for deducing the three-dimensional posture of the current finger according to the sequence image;
the mapping module is used for mapping the three-dimensional posture of the current finger into a three-dimensional input signal and controlling the absolute posture of a target object in a three-dimensional space according to the three-dimensional input signal;
and the resolving module is used for calculating a transformation path for transforming the controlled object to the target attitude by taking the absolute attitude obtained after control as the target attitude.
7. The apparatus of claim 6, wherein the acquisition module is further configured to:
and preprocessing the fingerprint sequence image, removing background noise and enhancing fingerprint ridges.
8. The apparatus of claim 6, wherein the mapping module is further configured to:
before controlling the absolute pose of the target object in three-dimensional space according to the three-dimensional input signal,
performing fingerprint identification verification on the fingerprint image, which includes frame-by-frame verification and first-frame verification; the frame-by-frame verification comprises performing fingerprint identification verification on every frame of the fingerprint image, subsequent object pose control being allowed only after verification passes; the first-frame verification comprises performing fingerprint identification on the first frame of the fingerprint image, subsequent object pose control continuing after verification passes as long as the finger does not leave the control device.
9. The apparatus of claim 6, further comprising a verification module to:
setting respective angle mapping functions for different fingerprint images, and calling corresponding angle mapping functions based on the fingerprint identification result.
10. The apparatus of claim 6, further comprising a personalization control module to:
by combining the fingerprint identification technology with the three-dimensional pose control based on the fingerprint, individualized control parameters can be realized.
CN202210114014.0A 2022-01-30 2022-01-30 Three-dimensional object absolute attitude control method and device based on fingerprint image Pending CN114356102A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210114014.0A CN114356102A (en) 2022-01-30 2022-01-30 Three-dimensional object absolute attitude control method and device based on fingerprint image


Publications (1)

Publication Number Publication Date
CN114356102A true CN114356102A (en) 2022-04-15

Family

ID=81093518



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05223547A (en) * 1992-02-13 1993-08-31 Nippon Telegr & Teleph Corp <Ntt> Controlling apparatus of posture of data of three-dimensional object
CN103631496A (en) * 2007-01-05 2014-03-12 苹果公司 Gestures for controlling, manipulating, and editing of media files using touch sensitive devices
CN109325444A (en) * 2018-09-19 2019-02-12 山东大学 A kind of texture-free three-dimension object Attitude Tracking method of monocular based on 3-D geometric model
CN110102044A (en) * 2019-03-15 2019-08-09 歌尔科技有限公司 Game control method, Intelligent bracelet and storage medium based on Intelligent bracelet
CN113570699A (en) * 2021-06-24 2021-10-29 清华大学 Method and device for reconstructing three-dimensional fingerprint
CN113569638A (en) * 2021-06-24 2021-10-29 清华大学 Method and device for estimating three-dimensional gesture of finger by planar fingerprint



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination