CN113449652A - Positioning method and device based on biological feature recognition - Google Patents

Positioning method and device based on biological feature recognition

Info

Publication number
CN113449652A
Authority
CN
China
Prior art keywords
finger
target user
image information
biological characteristics
biometric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110738183.7A
Other languages
Chinese (zh)
Inventor
王江涛 (Wang Jiangtao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan ELF Education Software Co Ltd
Original Assignee
Dongguan ELF Education Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan ELF Education Software Co Ltd filed Critical Dongguan ELF Education Software Co Ltd
Priority to CN202110738183.7A priority Critical patent/CN113449652A/en
Publication of CN113449652A publication Critical patent/CN113449652A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The embodiment of the application discloses a positioning method and device based on biometric recognition. In the technical solution provided by the embodiment of the application, finger pictures of a target user's finger are collected in advance at different angles, and the finger biometric features of the target user are extracted from the finger pictures. When finger recognition and positioning are performed, image information of the current subject to be recognized is collected, the image information is compared with the finger biometric features, and whether the image information matches the finger biometric features is determined. When the image information is determined to match the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger indication position of the target user is determined according to the position of the subject to be recognized. By collecting the finger biometric features of the target user in advance and performing finger recognition and positioning based on finger biometric matching, the accuracy of finger recognition and positioning and the efficiency of finger recognition can be improved, and the user experience of the assisted-learning functions is optimized.

Description

Positioning method and device based on biological feature recognition
Technical Field
The embodiment of the application relates to the technical field of intelligent terminals, in particular to a positioning method and device based on biological feature recognition.
Background
With the rapid development of intelligent terminal devices, more and more such devices (for example, home tutoring machines) can meet students' assisted-learning needs. When an intelligent terminal device implements functions such as point-reading, problem solving, and translation, it detects the position of the user's finger and executes the corresponding function based on recognizing and locating that finger.
When an intelligent terminal recognizes and locates a finger, finger sample pictures of different people are generally collected in batches in advance and used to train a model, so that the model achieves a general-purpose effect for subsequent finger detection and recognition.
However, with this approach of training a detection and recognition model in advance, the workload of collecting and labeling the training material is relatively large, and sample coverage cannot be guaranteed. The finally trained model may perform very differently for different people, and for people with few or no samples in the training set, high recognition accuracy cannot be guaranteed.
Disclosure of Invention
The embodiment of the application provides a positioning method and device based on biometric recognition, which can improve the accuracy of finger recognition and positioning and the efficiency of finger recognition, and solve the problem of finger recognition errors.
In a first aspect, an embodiment of the present application provides a positioning method based on biometric identification, including:
collecting, in advance, finger pictures of a target user's finger at different angles, and extracting finger biometric features of the target user from the finger pictures;
when performing finger recognition and positioning, collecting image information of a current subject to be recognized, comparing the image information with the finger biometric features, and determining whether the image information matches the finger biometric features;
and when the image information is determined to match the finger biometric features, determining that the subject to be recognized is the finger of the target user, and determining the finger indication position of the target user according to the position of the subject to be recognized.
Further, the finger biometric features include one or more of finger color, finger texture, finger-joint features, lunula (nail crescent) shape, nail shape, and hand-shape features.
Further, extracting the finger biometric features of the target user from the finger picture includes:
inputting the finger picture into a pre-trained finger biometric detection model;
and extracting the finger biometric features of the target user based on the finger biometric detection model, wherein the finger biometric detection model is trained and constructed in advance from finger biometric sample pictures of the corresponding types.
Further, when the finger biometric features include a plurality of types, determining whether the image information matches the finger biometric features includes:
calculating a first similarity between the image information and each corresponding type of finger biometric feature;
and obtaining an average similarity from the plurality of first similarities, and determining whether the average similarity reaches a set first similarity threshold; if so, the image information is determined to match the finger biometric features, and if not, the image information is determined not to match the finger biometric features.
Further, when the finger biometric features include a plurality of types, determining whether the image information matches the finger biometric features includes:
calculating a second similarity between the image information and each corresponding type of finger biometric feature;
counting the proportion of the second similarities that reach a set second similarity threshold;
and determining whether the proportion reaches a set proportion threshold; if so, the image information is determined to match the finger biometric features, and if not, the image information is determined not to match the finger biometric features.
Further, collecting, in advance, finger pictures of the target user's finger at different angles includes:
when collecting the finger pictures, outputting voice prompts to guide the target user to present finger poses at different angles, and collecting the finger pictures of the target user's finger in real time.
Further, determining the finger indication position of the target user according to the position of the subject to be recognized includes:
recognizing a fingertip feature of the subject to be recognized, determining the fingertip coordinate of the fingertip feature in an indication image, and taking the fingertip coordinate as the finger indication position of the target user, wherein the indication image is captured from the content of the learning page indicated by the subject to be recognized.
In a second aspect, an embodiment of the present application provides a positioning apparatus based on biometric identification, including:
the acquisition module is used for collecting, in advance, finger pictures of a target user's finger at different angles and extracting finger biometric features of the target user from the finger pictures;
the matching module is used for collecting image information of a current subject to be recognized when finger recognition and positioning are performed, comparing the image information with the finger biometric features, and determining whether the image information matches the finger biometric features;
and the positioning module is used for determining that the subject to be recognized is the finger of the target user when the image information is determined to match the finger biometric features, and determining the finger indication position of the target user according to the position of the subject to be recognized.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the biometric-based positioning method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the biometric-based positioning method according to the first aspect.
According to the method and device, finger pictures of a target user's finger are collected in advance at different angles, and the finger biometric features of the target user are extracted from the finger pictures; when finger recognition and positioning are performed, image information of the current subject to be recognized is collected, the image information is compared with the finger biometric features, and whether the image information matches the finger biometric features is determined; and when the image information is determined to match the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger indication position of the target user is determined according to the position of the subject to be recognized. By collecting the finger biometric features of the target user in advance and performing finger recognition and positioning based on finger biometric matching, the accuracy of finger recognition and positioning and the efficiency of finger recognition can be improved, and the user experience of the assisted-learning functions is optimized. Moreover, because finger recognition and positioning are performed by matching the target user's own finger biometric features, the assisted-learning functions can be personalized to that user and misrecognition can be avoided.
Drawings
Fig. 1 is a flowchart of a positioning method based on biometric identification according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating finger biometric detection according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a finger pointing operation in accordance with one embodiment of the present application;
FIG. 4 is a flowchart illustrating a finger biometric comparison according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating another finger biometric comparison method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a positioning apparatus based on biometric identification according to a second embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some but not all of the relevant portions of the present application are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The application provides a positioning method based on biometric recognition, which collects the finger biometric features of a target user and performs finger recognition and positioning through those features, so as to improve the efficiency and accuracy of finger recognition and positioning. In the traditional approach to finger recognition, a finger must be detected and recognized with a detection and recognition model trained in advance; to reach a certain recognition accuracy, finger sample pictures of different people must be collected in batches beforehand for model training, and the whole process is relatively cumbersome. Moreover, such a detection and recognition model cannot guarantee high recognition accuracy for people not included in the training samples. The positioning method based on biometric recognition is therefore provided to solve the problem of recognition errors in existing finger recognition and positioning technology.
The first embodiment is as follows:
Fig. 1 is a flowchart of a positioning method based on biometric recognition according to an embodiment of the present application. The method may be performed by a positioning apparatus based on biometric recognition, which may be implemented in software and/or hardware and may be formed by one physical entity or by two or more physical entities. Generally, the positioning apparatus based on biometric recognition may be an intelligent terminal device such as a learning machine, a mobile phone, a tablet, or a computer.
The following description takes the positioning apparatus based on biometric recognition as the subject that performs the positioning method based on biometric recognition. Referring to fig. 1, the positioning method based on biometric recognition specifically includes:
S110, collecting, in advance, finger pictures of a target user's finger at different angles, and extracting finger biometric features of the target user from the finger pictures.
Specifically, the finger biometric features of the target user are collected in advance and used for subsequent finger recognition. The positioning device based on biometric recognition collects finger pictures of the target user and then detects the target user's finger biometric features from those pictures.
Optionally, when the positioning device based on biometric recognition collects the finger pictures, it outputs voice prompts to guide the target user to present finger poses at different angles, and collects the finger pictures of the target user's finger in real time. Finger pictures are collected for the finger poses that a user's finger actually takes when indicating a learning page in a real assisted-learning scenario, so that the finger indication position can later be recognized accurately.
Taking a learning machine as an example, before using assisted-learning functions of the learning machine such as point-reading, problem solving, and translation, the target user interacts with the learning machine to open a collection page for finger biometric features; the learning machine's camera is then started and used to shoot finger pictures of the target user. After the camera is started, the learning machine synchronously outputs voice prompt information through its loudspeaker to guide the current target user to shoot finger pictures in the corresponding finger poses. Generally, the learning machine presets several pieces of voice prompt information, each guiding the target user to take a finger picture in a corresponding finger pose. Each time the learning machine shoots a finger picture, it checks the validity of the picture and determines whether the currently collected picture for that finger pose is valid. For a picture determined to be invalid, the target user is prompted by voice to re-collect the finger picture for that pose. In this way, the collection of finger pictures for the target user is completed. It should be noted that, in the embodiment of the application, finger pictures must be collected in advance for every target user who needs to use the assisted-learning functions, so that they can be used for subsequent recognition of that user's finger. By collecting finger pictures of the target user in this targeted way, targeted recognition and positioning of the target user's finger can be achieved, finger recognition accuracy is improved, the assisted-learning functions become specific to that user, and the user experience is optimized. It can be understood that, because subsequent finger recognition relies on the finger pictures collected in advance, finger recognition cannot be performed for a user who has not entered finger pictures, whereas for a target user who has entered finger pictures in advance, finger recognition can be performed directly by feature comparison and matching; the whole process is relatively efficient and the recognition accuracy is relatively high.
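As an illustration of the guided capture flow just described, the following is a minimal sketch only; play_prompt, capture_frame, and is_valid_finger_image are hypothetical callables standing in for the learning machine's loudspeaker, camera, and picture-validity check, and the prompt texts and retry limit are assumptions rather than values prescribed by the embodiment.
```python
# Minimal sketch of the guided finger-picture capture flow described above.
# play_prompt(), capture_frame() and is_valid_finger_image() are hypothetical
# callables standing in for the learning machine's loudspeaker, camera and
# picture-validity check; the prompts and retry limit are illustrative.

POSE_PROMPTS = [
    "Please point at the page with your index finger held straight",
    "Please tilt your finger slightly to the left",
    "Please tilt your finger slightly to the right",
]

def collect_finger_pictures(play_prompt, capture_frame, is_valid_finger_image,
                            max_retries=3):
    """Collect one valid finger picture for each guided finger pose."""
    pictures = []
    for prompt in POSE_PROMPTS:
        for _ in range(max_retries):
            play_prompt(prompt)               # voice guidance for this pose
            frame = capture_frame()           # shoot a picture with the camera
            if is_valid_finger_image(frame):  # validity check on the capture
                pictures.append(frame)
                break
            play_prompt("The picture was unclear, please try this pose again")
        else:
            raise RuntimeError(f"No valid picture captured for pose: {prompt}")
    return pictures
```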
Specifically, based on the collected finger pictures of the target user, the finger biometric features of the target user are detected and recognized from the finger pictures for use in subsequent finger recognition. Referring to fig. 2, extracting the finger biometric features of the target user from the finger pictures includes:
S1101, inputting the finger picture into a pre-trained finger biometric detection model;
S1102, extracting the finger biometric features of the target user based on the finger biometric detection model, wherein the finger biometric detection model is trained and constructed in advance from finger biometric sample pictures of the corresponding types.
In an embodiment of the present application, the finger biometric features include one or more of finger color, finger texture, finger-joint features, lunula (nail crescent) shape, nail shape, and hand-shape features. It will be appreciated that these finger biometric features vary from person to person. The more types of finger biometric features that are collected in advance, the higher the recognition accuracy for the corresponding target user's finger; collecting every type of finger biometric feature of the target user for subsequent finger recognition and positioning therefore enables more accurate finger recognition.
When the finger biometric features are recognized, each type of finger biometric feature is detected and recognized by a pre-built finger biometric detection model. The finger biometric detection model is a target detection model based on a convolutional neural network, trained in advance on sample images of the corresponding type of finger biometric feature so that it can detect that type of feature. A separate detection model can be trained for each feature type, and the finger pictures are then input into each finger biometric detection model to detect the corresponding finger biometric feature.
It should be noted that, because several finger pictures in different finger poses are collected for the target user, each type of finger biometric feature may yield several detection results, one per finger pose. Moreover, the information formats of the finger biometric features differ: finger color may be identified by a corresponding color value, while finger texture, finger-joint features, lunula shape, nail shape, and hand-shape features may be identified by corresponding image crops or contour maps.
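To make the heterogeneous formats concrete, here is a minimal sketch of how the per-type, per-pose detection results could be organized; the field names, the one-detector-per-type dispatch, and the value types are assumptions for illustration, not a format defined by the embodiment.
```python
# Illustrative organisation of the per-type, per-pose detection results.
# FeatureValue distinguishes the colour-value format from the image-crop /
# contour-map format mentioned above; nothing here is a prescribed API.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple, Union

import numpy as np

# A colour feature is identified by an (R, G, B) value; texture, joint,
# lunula, nail and hand-shape features by an image crop or contour map.
FeatureValue = Union[Tuple[int, int, int], np.ndarray]

@dataclass
class FingerBiometricRecord:
    feature_type: str    # e.g. "color", "texture", "joint", "lunula", "nail", "hand_shape"
    pose_index: int      # which guided finger pose the picture came from
    value: FeatureValue  # colour value, or cropped image / contour map

def extract_features(pictures: List[np.ndarray],
                     detectors: Dict[str, Callable[[np.ndarray], FeatureValue]]
                     ) -> List[FingerBiometricRecord]:
    """Run every per-type detection model on every captured finger picture."""
    records = []
    for pose_index, picture in enumerate(pictures):
        for feature_type, detector in detectors.items():
            records.append(FingerBiometricRecord(
                feature_type=feature_type,
                pose_index=pose_index,
                value=detector(picture),  # detector returns the type-specific format
            ))
    return records
```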
S120, when performing finger recognition and positioning, collecting image information of the current subject to be recognized, comparing the image information with the finger biometric features, and determining whether the image information matches the finger biometric features.
Illustratively, recognition of the target user's finger is described by taking a point-reading learning scenario as an example. Referring to fig. 3, when the learning machine is used for point-reading, a book is placed within the shooting range of the learning machine's front camera or an external camera module. The learning machine stands upright, with the camera arranged at the top of the screen facing the position of the book; the camera shoots point-reading images containing the book page in real time, and the subject to be recognized is detected and identified from these point-reading images. When the learning machine recognizes that the user is indicating the book page for point-reading with a finger whose biometric features have been pre-stored, it determines the content of the book page currently indicated by the user as the point-reading content, queries the corresponding feedback content for that point-reading content, and performs point-reading operations such as playing a reading aloud or presenting a problem solution. Generally, the learning machine maintains an original-page database for each page of the book. The database stores original page data for each book page; the original page data may contain several pieces of exercise data, and each piece of exercise data has corresponding feedback content such as a problem solution, pronunciation, or audio. Once the point-reading content indicated by the user (i.e., the exercise data) is determined, the exercise the user is currently asking about can be identified and the corresponding feedback content returned to the user, thereby completing the point-reading learning operation of the learning machine. To determine the user's point-reading content, the book page currently indicated by the user must first be determined; the image of that page is defined as the indication image. The point-reading content is then determined from the point-reading coordinates on the book page indicated by the user, and the corresponding exercise data is obtained by querying with that content. Optionally, because the finger may occlude part of the book-page area in the point-reading image shot during point-reading, and this occlusion easily interferes with querying the original page data, the embodiment of the application marks the book-page area in advance, before the user performs point-reading, for use in the subsequent query of the original page data.
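A sketch of the original-page database lookup described above follows; the nested dictionary layout, key names, and feedback fields are illustrative assumptions, since the embodiment only requires that each book page map to exercise data with associated feedback content.
```python
# Sketch of the original-page database lookup: each book page stores exercise
# entries with feedback content (pronunciation, solution, audio, ...).  The
# nested dict layout, key names and file names are illustrative assumptions.

PAGE_DATABASE = {
    ("english_reader_3", 42): {            # key: (book_id, page_number)
        "exercises": [
            {"region": "word_12", "text": "apple",
             "feedback": {"pronunciation": "apple.mp3"}},
            {"region": "word_13", "text": "banana",
             "feedback": {"pronunciation": "banana.mp3"}},
        ],
    },
}

def feedback_for_point_read(book_id, page_number, region_id):
    """Return the feedback content for the region hit by the fingertip, if any."""
    page = PAGE_DATABASE.get((book_id, page_number))
    if page is None:
        return None
    for exercise in page["exercises"]:
        if exercise["region"] == region_id:
            return exercise["feedback"]
    return None
```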
Specifically, when the positioning device based on biometric recognition performs finger recognition and positioning, it acquires the current user's point-reading image and crops the image information of the current subject to be recognized from it. Because the embodiment of the application performs finger recognition against finger biometric features, when cropping the image information of the subject to be recognized, a finger detection model may first be used to detect whether a finger is indicating the book page in the point-reading image; only when it is determined that a finger is indicating the current book page for point-reading is the point-reading image acquired and the image information of the current subject to be recognized cropped out. This reduces misrecognition by the device and improves the accuracy of finger detection. It should be noted that, because the finger biometric features of the target user are collected in advance in the embodiment of the application, the device can decide whether to respond to the current point-reading operation by determining whether the current subject to be recognized matches the pre-stored finger biometric features.
Further, after the image information of the current subject to be recognized is collected, the image information is compared with the pre-stored finger biometric features to determine whether it matches the target user's finger. Generally, if only one type of finger biometric feature is pre-stored (finger color, finger texture, finger-joint features, lunula shape, nail shape, or hand-shape features), the image information is considered to match the target user's finger when its similarity to that single type of feature reaches a set similarity threshold. When several types of finger biometric features are pre-stored, the image information must be compared with each of them. Referring to fig. 4, when the finger biometric features include a plurality of types, determining whether the image information matches the finger biometric features includes:
S1201, calculating a first similarity between the image information and each corresponding type of finger biometric feature;
S1202, obtaining an average similarity from the plurality of first similarities, and determining whether the average similarity reaches a set first similarity threshold; if so, the image information is determined to match the finger biometric features, and if not, the image information is determined not to match the finger biometric features.
The image information is compared against each type of finger biometric feature in turn. For each comparison, the corresponding finger feature data is detected and extracted from the image information of the current subject to be recognized using the detection model for that type, in the same way the finger biometric features were originally extracted. For example, for finger color, the color value of the finger in the image information is detected and compared with the stored finger biometric feature that identifies finger color, and the first similarity between the two is calculated. Similarly, for finger texture, finger-joint features, lunula shape, nail shape, hand-shape features, and other types, the corresponding finger feature data is detected from the image information by the detection model, compared with the stored finger biometric feature of that type, and the first similarity between the two is calculated.
Further, based on the first similarities obtained by comparing each type of finger biometric feature, the embodiment of the application represents the overall similarity between the current subject to be recognized and the target user's finger by the average of the first similarities. It can be understood that when the average similarity reaches the set first similarity threshold, the current image information is determined to match the pre-stored finger biometric features, and the subject to be recognized is the finger of the target user.
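A minimal sketch of the average-similarity decision (steps S1201 and S1202) follows, assuming each per-type comparison has already been reduced to a similarity score in [0, 1]; the threshold and example values are illustrative.
```python
# Minimal sketch of the average-similarity decision (steps S1201-S1202),
# assuming each per-type comparison yields a similarity score in [0, 1];
# the threshold value is illustrative.

def match_by_average(first_similarities, first_threshold=0.8):
    """True when the mean of the per-type first similarities reaches the threshold."""
    if not first_similarities:
        return False
    average = sum(first_similarities) / len(first_similarities)
    return average >= first_threshold

# Example: colour, texture, joint, lunula and nail comparisons.
print(match_by_average([0.92, 0.85, 0.78, 0.81, 0.88]))  # mean 0.848 -> True
```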
Optionally, when the finger biometric features include multiple types, the embodiment of the application further provides another way of comparing finger biometric features. Referring to fig. 5, determining whether the image information matches the finger biometric features includes:
S1203, calculating a second similarity between the image information and each corresponding type of finger biometric feature;
S1204, counting the proportion of the second similarities that reach a set second similarity threshold;
S1205, determining whether the proportion reaches a set proportion threshold; if so, the image information is determined to match the finger biometric features, and if not, the image information is determined not to match the finger biometric features.
Similarly, the detection model detects the corresponding finger feature data in the image information and compares it with the finger biometric feature of the corresponding type to obtain the corresponding second similarity. Each second similarity is then compared with its preset second similarity threshold. Beforehand, the system presets a second similarity threshold for each type of finger biometric feature; when the finger feature data reaches that threshold, it is considered to match the corresponding finger biometric feature.
Furthermore, the proportion of second similarities that reach their set thresholds is obtained by counting how many second similarities reach the set second similarity threshold and dividing that count by the total number of second similarities. If the proportion reaches a set proportion threshold, the subject to be recognized is considered to be the finger of the target user. For example, with five pre-stored types of finger biometric features (finger texture, finger-joint features, lunula shape, nail shape, and hand-shape features), the second similarity between the image information and each of the five features is calculated. If the second similarities for four of the types (finger texture, finger-joint features, nail shape, and hand-shape features) reach the second similarity threshold, the corresponding proportion is 80%. Comparing this proportion with a set proportion threshold (for example, 75%) shows that it reaches the threshold, so the subject to be recognized is the finger of the target user.
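The following sketch reproduces the ratio-based decision (steps S1203 to S1205) and the worked example above, in which four of five per-type similarities pass their thresholds, giving 80%, which reaches the 75% proportion threshold; the numeric similarity and threshold values are illustrative.
```python
# Sketch of the ratio-based decision (steps S1203-S1205).  The numeric
# similarities and thresholds below are illustrative, not prescribed values.

def match_by_ratio(second_similarities, second_thresholds, ratio_threshold=0.75):
    """second_similarities / second_thresholds: one value per feature type."""
    passed = sum(1 for sim, thr in zip(second_similarities, second_thresholds)
                 if sim >= thr)
    return passed / len(second_similarities) >= ratio_threshold

# Worked example: texture, joint, lunula, nail and hand-shape features.
sims = [0.91, 0.88, 0.40, 0.86, 0.90]   # the lunula comparison fails
thrs = [0.80, 0.80, 0.80, 0.80, 0.80]
print(match_by_ratio(sims, thrs))        # 4/5 = 80% >= 75% -> True
```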
Based on the above comparison and matching approaches, it can be determined whether the image information of the current subject to be recognized matches the pre-stored finger biometric features, and hence whether the subject to be recognized is the finger of the target user. There are many possible ways of determining whether the image information matches the finger biometric features; the embodiment of the application does not fix or limit this, and they are not described further here.
S130, when the image information is determined to match the finger biometric features, determining that the subject to be recognized is the finger of the target user, and determining the finger indication position of the target user according to the position of the subject to be recognized.
Further, after the image information is determined to match the finger biometric features and the current subject to be recognized is determined to be the finger of the target user, the position of the subject to be recognized can be taken as the finger indication position of the target user, and the corresponding assisted-learning function is carried out based on that position.
Illustratively, take the point-reading learning function as an example. After the current subject to be recognized is determined to be the finger of the target user, the device responds to the target user's current point-reading operation. Conversely, when the image information is determined not to match the finger biometric features and the current subject to be recognized is determined not to be the finger of the target user, the point-reading function is not started and the point-reading operation of the current subject to be recognized is invalid; the device outputs voice prompt information indicating that the current point-reading operation is invalid and that finger biometric features must be entered in advance. In this way, the point-reading function becomes specific to the enrolled user and cannot be used by a user (for example, a non-local user) who has not entered finger biometric features; for a target user whose finger biometric features have been entered in advance, finger recognition is fast and efficient, which optimizes the user experience.
Specifically, after the current subject to be recognized is determined to be the finger of the target user, the indication image of the book page being point-read by the subject is acquired, and the indicated position of the subject to be recognized in that image is determined as the point-reading position. On the basis of the indication image, the fingertip feature of the subject to be recognized is recognized, the fingertip coordinate of that feature in the indication image is determined, and the fingertip coordinate is taken as the point-reading position of the subject to be recognized; the indication image is captured from the content of the learning page indicated by the subject to be recognized.
In the embodiment of the application, the fingertip feature of the subject to be recognized may be a fingertip arc, a fingernail, or the like. Similarly, the positioning device based on biometric recognition may use a preset fingertip-feature detection model based on a convolutional neural network to detect the fingertip feature. If the device recognizes the fingertip feature as a fingertip arc, it calculates a first midpoint of the fingertip arc, obtains the coordinate of that first midpoint in the coordinate system of the indication image, and takes that coordinate as the fingertip coordinate. If the device recognizes the fingertip feature as a nail, it can analyze the top edge of the nail through image recognition, determine a second midpoint of the nail's top edge, obtain the coordinate of that second midpoint in the coordinate system of the indication image, and take that coordinate as the fingertip coordinate.
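A minimal sketch of turning the detected fingertip feature into a fingertip coordinate for the two cases above; the chord-midpoint approximation and the helper names are assumptions for illustration, and the input points are assumed to already be expressed in the indication-image coordinate system.
```python
# Sketch of turning the detected fingertip feature into a fingertip coordinate.
# The chord midpoint is a simple stand-in for the midpoint of the fingertip
# arc or of the nail's top edge; it is not the embodiment's exact computation.

def midpoint_of_curve(points):
    """Midpoint of a curve given as ordered (x, y) samples."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def fingertip_coordinate(feature_kind, points):
    if feature_kind == "arc":    # first midpoint: midpoint of the fingertip arc
        return midpoint_of_curve(points)
    if feature_kind == "nail":   # second midpoint: midpoint of the nail's top edge
        return midpoint_of_curve(points)
    raise ValueError(f"unknown fingertip feature: {feature_kind}")

print(fingertip_coordinate("arc", [(300, 232), (308, 224), (318, 226), (326, 234)]))
# -> (313.0, 233.0)
```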
Further, the device determines a number of candidate delineation regions in the indication image according to the fingertip coordinate. In the embodiment of the application, the positioning device based on biometric recognition may divide the text of the learning page into words in advance and determine delineation regions from those words; each delineation region may contain a word or a sentence, and any two delineation regions may be adjacent or separated but must not intersect. The device may take the delineation regions within a short distance of the fingertip coordinate as candidate delineation regions, thereby narrowing the range from which the target delineation region is selected. It then obtains the coordinate of the center point of each candidate delineation region in the indication-image coordinate system and calculates the distance between each center-point coordinate and the fingertip coordinate by the Pythagorean theorem, so that the calculated distances are accurate. Finally, the candidate delineation region with the shortest distance is determined to be the target delineation region, that is, the point-reading position of the subject to be recognized, and the text content contained in the target delineation region is determined to be the point-reading content, completing the positioning based on biometric recognition in the embodiment of the application.
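A sketch of the nearest-region selection just described, computing the Euclidean (Pythagorean) distance from the fingertip coordinate to each candidate region's center point and keeping the closest; the region identifiers and coordinates are made up for the example.
```python
# Sketch of picking the target delineation region by shortest Euclidean
# (Pythagorean) distance from the fingertip coordinate to each region centre.

import math

def nearest_region(fingertip, candidate_regions):
    """candidate_regions: list of (region_id, (cx, cy)) centre points."""
    fx, fy = fingertip
    return min(candidate_regions,
               key=lambda item: math.hypot(item[1][0] - fx, item[1][1] - fy))

regions = [("word_12", (310, 220)), ("word_13", (355, 221)), ("word_20", (312, 260))]
print(nearest_region((318, 228), regions))  # -> ('word_12', (310, 220))
```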
Optionally, when determining the candidate delineation region with the shortest distance as the target delineation region and taking its text content as the point-reading content, the positioning device based on biometric recognition may proceed as follows: it determines the candidate delineation region(s) with the shortest distance as the target delineation region and checks whether that target delineation region is unique. If it is, the text content contained in the target delineation region is determined to be the point-reading content. If it is not, the pointing direction of the user's fingertip is recognized from the image, the target direction from the fingertip coordinate to the center point of each target delineation region is determined, the angle between each target direction and the pointing direction is calculated, and the text content of the target delineation region whose target direction forms the smallest angle with the pointing direction is determined to be the point-reading content.
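A sketch of the angle-based tie-break for the non-unique case, under the assumption that the pointing direction and region centers are available as plain 2-D vectors in the indication-image coordinate system; the helper names are illustrative.
```python
# Sketch of the angle-based tie-break: keep the tied region whose direction
# from the fingertip forms the smallest angle with the pointing direction.

import math

def included_angle(v1, v2):
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def pick_by_pointing_direction(fingertip, pointing_dir, tied_regions):
    """tied_regions: list of (region_id, (cx, cy)) with equal shortest distance."""
    fx, fy = fingertip
    return min(tied_regions,
               key=lambda r: included_angle(pointing_dir,
                                            (r[1][0] - fx, r[1][1] - fy)))
```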
In practical applications, after the indication position of the subject to be recognized in the indication image has been determined, the device responds to the target user's indication operation according to the assisted-learning function currently in use. The indication position of the subject to be recognized in the indication image is determined by the finger-positioning approach described above, and the learning content corresponding to that position is then determined. If the translation function is currently in use, the translation of the learning content indicated by the target user is output; if the problem-solving function is currently in use, the solution to the problem indicated by the current user is output. In each case, the device responds to the user's indication operation by recognizing and locating the target user's finger according to the actual assisted-learning scenario, thereby realizing the corresponding learning function.
Optionally, in an embodiment, once the target user's finger has been recognized, a gesture feature of the current subject to be recognized is further determined from the image information, and a corresponding application function is executed according to that gesture feature. The positioning device based on biometric recognition can pre-store the gesture feature information that triggers each application function, and when the gesture feature of the subject to be recognized corresponds to pre-stored gesture feature information, the corresponding application function is triggered. This makes it easier to trigger application functions, improves the efficiency of user operations, and optimizes the user's operating experience.
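A minimal sketch of such a gesture-to-function table, assuming a hypothetical classify_gesture() that maps the recognized finger's image information to a gesture label; the labels and the functions they trigger are illustrative only.
```python
# Sketch of gesture-triggered application functions.  classify_gesture() is a
# hypothetical classifier; the label-to-function table is illustrative only.

GESTURE_ACTIONS = {
    "single_point": "point_read",     # point at a word -> read it aloud
    "circle":       "translate",      # circle a sentence -> translate it
    "double_tap":   "solve_problem",  # tap twice -> show the problem solution
}

def trigger_application_function(image_info, classify_gesture, run_function):
    """Trigger the application function bound to the recognised gesture, if any."""
    label = classify_gesture(image_info)   # gesture feature of the subject
    action = GESTURE_ACTIONS.get(label)
    if action is not None:
        run_function(action)
    return action
```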
In summary, finger pictures of a target user's finger are collected in advance at different angles, and the finger biometric features of the target user are extracted from the finger pictures; when finger recognition and positioning are performed, image information of the current subject to be recognized is collected, the image information is compared with the finger biometric features, and whether the image information matches the finger biometric features is determined; and when the image information is determined to match the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger indication position of the target user is determined according to the position of the subject to be recognized. By collecting the finger biometric features of the target user in advance and performing finger recognition and positioning based on finger biometric matching, the accuracy of finger recognition and positioning and the efficiency of finger recognition can be improved, and the user experience of the assisted-learning functions is optimized. Moreover, because finger recognition and positioning are performed by matching the target user's own finger biometric features, the assisted-learning functions can be personalized to that user and misrecognition can be avoided.
Example two:
on the basis of the foregoing embodiment, fig. 6 is a schematic structural diagram of a positioning apparatus based on biometric identification according to a second embodiment of the present application. Referring to fig. 6, the positioning apparatus based on biometric identification provided in this embodiment specifically includes: an acquisition module 21, a matching module 22 and a positioning module 23.
The acquisition module 21 is configured to collect, in advance, finger pictures of a target user's finger at different angles, and to extract finger biometric features of the target user from the finger pictures;
the matching module 22 is configured to collect image information of a current subject to be recognized when finger recognition and positioning are performed, compare the image information with the finger biometric features, and determine whether the image information matches the finger biometric features;
the positioning module 23 is configured to determine that the subject to be recognized is the finger of the target user when the image information is determined to match the finger biometric features, and to determine the finger indication position of the target user according to the position of the subject to be recognized.
Specifically, the finger biometric features include one or more of finger color, finger texture, finger-joint features, lunula (nail crescent) shape, nail shape, and hand-shape features.
Specifically, the acquisition module 21 includes:
the input unit is used for inputting the finger picture into a pre-trained finger biometric detection model;
and the detection unit is used for extracting the finger biometric features of the target user based on the finger biometric detection model, wherein the finger biometric detection model is trained and constructed in advance from finger biometric sample pictures of the corresponding type.
Specifically, the matching module 22 includes:
a first calculation unit configured to calculate the first similarities between the image information and each corresponding type of finger biometric feature;
and a first judging unit configured to obtain an average similarity from the plurality of first similarities and determine whether the average similarity reaches a set first similarity threshold; if so, the image information is determined to match the finger biometric features, and if not, the image information is determined not to match the finger biometric features.
The matching module 22 further comprises:
a second calculation unit configured to calculate the second similarities between the image information and each corresponding type of finger biometric feature;
a statistics unit configured to count the proportion of the second similarities that reach a set second similarity threshold;
and a second judging unit configured to determine whether the proportion reaches a set proportion threshold; if so, the image information is determined to match the finger biometric features, and if not, the image information is determined not to match the finger biometric features.
Specifically, the acquisition module 21 includes:
and the prompting unit is used for outputting voice prompts to guide the target user to present finger poses at different angles when the finger pictures are collected, and for collecting the finger pictures of the target user's finger in real time.
Specifically, the positioning module 23 includes:
and the recognition unit is used for recognizing the fingertip feature of the subject to be recognized, determining the fingertip coordinate of the fingertip feature in an indication image, and taking the fingertip coordinate as the finger indication position of the target user, wherein the indication image is captured from the content of the learning page indicated by the subject to be recognized.
In summary, finger pictures of a target user's finger are collected in advance at different angles, and the finger biometric features of the target user are extracted from the finger pictures; when finger recognition and positioning are performed, image information of the current subject to be recognized is collected, the image information is compared with the finger biometric features, and whether the image information matches the finger biometric features is determined; and when the image information is determined to match the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger indication position of the target user is determined according to the position of the subject to be recognized. By collecting the finger biometric features of the target user in advance and performing finger recognition and positioning based on finger biometric matching, the accuracy of finger recognition and positioning and the efficiency of finger recognition can be improved, and the user experience of the assisted-learning functions is optimized. Moreover, because finger recognition and positioning are performed by matching the target user's own finger biometric features, the assisted-learning functions can be personalized to that user and misrecognition can be avoided.
The positioning device based on biometric identification provided by the second embodiment of the present application can be used for executing the positioning method based on biometric identification provided by the first embodiment of the present application, and has corresponding functions and beneficial effects.
Example three:
an embodiment of the present application provides an electronic device, and with reference to fig. 7, the electronic device includes: a processor 31, a memory 32, a communication module 33, an input device 34, and an output device 35. The number of processors in the electronic device may be one or more, and the number of memories in the electronic device may be one or more. The processor, memory, communication module, input device, and output device of the electronic device may be connected by a bus or other means.
The memory 32 is a computer readable storage medium, and can be used for storing software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the positioning method based on biometric identification according to any embodiment of the present application (for example, the acquisition module, the matching module, and the positioning module in the positioning device based on biometric identification). The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system and an application program required by at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 33 is used for data transmission.
The processor 31 executes various functional applications of the device and data processing by executing software programs, instructions and modules stored in the memory, so as to realize the above-mentioned positioning method based on the biometric identification.
The input device 34 may be used to receive entered numeric or character information and to generate key signal inputs relating to user settings and function controls of the apparatus. The output device 35 may include a display device such as a display screen.
The electronic device provided above can be used to execute the positioning method based on biometric identification provided in the first embodiment above, and has corresponding functions and advantages.
Example four:
Embodiments of the present application also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a biometric-based positioning method, the method including: collecting, in advance, finger pictures of a target user's finger at different angles, and extracting finger biometric features of the target user from the finger pictures; when performing finger recognition and positioning, collecting image information of a current subject to be recognized, comparing the image information with the finger biometric features, and determining whether the image information matches the finger biometric features; and when the image information is determined to match the finger biometric features, determining that the subject to be recognized is the finger of the target user, and determining the finger indication position of the target user according to the position of the subject to be recognized.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, and the like; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk), or optical storage; registers or other similar types of memory elements; and so on. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations, for example in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the positioning method based on biometric identification described above, and may also perform related operations in the positioning method based on biometric identification provided in any embodiments of the present application.
The positioning apparatus, the storage medium, and the electronic device based on biometric identification provided in the above embodiments may perform the positioning method based on biometric identification provided in any embodiments of the present application, and the technical details not described in the above embodiments may be referred to the positioning method based on biometric identification provided in any embodiments of the present application.
The foregoing is considered as illustrative of the preferred embodiments of the invention and the technical principles employed. The present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.

Claims (10)

1. A positioning method based on biometric recognition, characterized by comprising:
acquiring, in advance, finger pictures of a target user's finger at different angles, and extracting finger biometric features of the target user from the finger pictures;
when performing finger identification and positioning, collecting image information of a current subject to be identified, comparing the image information with the finger biometric features, and determining whether the image information matches the finger biometric features; and
when it is determined that the image information matches the finger biometric features, determining that the subject to be identified is the finger of the target user, and determining a finger indication position of the target user according to the position of the subject to be identified.
2. The positioning method based on biometric recognition according to claim 1, wherein the finger biometric features comprise one or more types among finger color features, finger texture features, finger joint features, nail crescent (lunula) shape features, nail shape features, and hand shape features.
3. The positioning method based on biometric recognition according to claim 2, wherein extracting the finger biometric features of the target user from the finger pictures comprises:
inputting the finger pictures into a pre-trained finger biometric detection model; and
extracting the finger biometric features of the target user based on the finger biometric detection model, wherein the finger biometric detection model is constructed and trained in advance from finger biometric sample pictures of the corresponding types.
4. The positioning method based on biometric recognition according to claim 2, wherein, when the finger biometric features include a plurality of types, determining whether the image information matches the finger biometric features comprises:
calculating a first similarity between the image information and each corresponding type of the finger biometric features; and
obtaining an average similarity from the plurality of first similarities, determining whether the average similarity reaches a set first similarity threshold, and if so, determining that the image information matches the finger biometric features; otherwise, determining that the image information does not match the finger biometric features.
5. The positioning method based on biometric recognition according to claim 2, wherein, when the finger biometric features include a plurality of types, determining whether the image information matches the finger biometric features comprises:
calculating a second similarity between the image information and each corresponding type of the finger biometric features;
counting proportion information of the second similarities that reach a set second similarity threshold; and
determining whether the proportion information reaches a set proportion threshold, and if so, determining that the image information matches the finger biometric features; otherwise, determining that the image information does not match the finger biometric features.
6. The positioning method based on biometric recognition according to claim 1, wherein acquiring, in advance, finger pictures of the target user's finger at different angles comprises:
when collecting the finger pictures, outputting a voice prompt to guide the target user to present finger poses at different angles, and collecting the finger pictures of the target user's finger in real time.
7. The positioning method based on biometric recognition according to claim 1, wherein determining the finger indication position of the target user according to the position of the subject to be identified comprises:
identifying a fingertip feature of the subject to be identified, determining fingertip coordinates of the fingertip feature in an indication image, and taking the fingertip coordinates as the finger indication position of the target user, wherein the indication image is captured according to the content of the learning page indicated by the subject to be identified.
8. A positioning device based on biometric recognition, characterized by comprising:
an acquisition module, configured to acquire, in advance, finger pictures of a target user's finger at different angles, and to extract finger biometric features of the target user from the finger pictures;
a matching module, configured to, when finger identification and positioning are performed, collect image information of a current subject to be identified, compare the image information with the finger biometric features, and determine whether the image information matches the finger biometric features; and
a positioning module, configured to, when it is determined that the image information matches the finger biometric features, determine that the subject to be identified is the finger of the target user, and determine a finger indication position of the target user according to the position of the subject to be identified.
9. An electronic device, comprising:
a memory and one or more processors;
the memory being configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the positioning method based on biometric recognition according to any one of claims 1-7.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the positioning method based on biometric recognition according to any one of claims 1-7.
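As an illustrative aside only (not part of the claims): claims 4 and 5 describe two alternative decision rules for the case where several types of finger biometric features are enrolled, namely averaging the per-type similarities against a single threshold, or counting the proportion of per-type similarities that individually reach a threshold. A minimal Python sketch of both rules follows; all threshold values and the example similarity scores are arbitrary assumptions.

```python
# Illustrative decision rules for multi-type biometric matching (cf. claims 4 and 5).
from typing import Dict

def match_by_average(similarities: Dict[str, float], avg_threshold: float = 0.85) -> bool:
    # Claim 4 style: average the per-type similarities and compare with one threshold.
    return sum(similarities.values()) / len(similarities) >= avg_threshold

def match_by_proportion(similarities: Dict[str, float],
                        per_type_threshold: float = 0.80,
                        proportion_threshold: float = 0.60) -> bool:
    # Claim 5 style: count how many per-type similarities reach their own threshold,
    # then require that proportion to reach a set proportion threshold.
    hits = sum(1 for s in similarities.values() if s >= per_type_threshold)
    return hits / len(similarities) >= proportion_threshold

# Example: similarities computed for each enrolled feature type (hypothetical values).
sims = {"color": 0.91, "texture": 0.78, "joints": 0.88, "nail": 0.83}
print(match_by_average(sims))      # average = 0.85 -> True
print(match_by_proportion(sims))   # 3 of 4 types pass (75%) -> True
```

The average rule is more forgiving of one weak feature type, while the proportion rule requires a minimum number of feature types to match individually.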
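Similarly, claim 7 derives the indication position from the fingertip coordinates in the captured indication image. The sketch below is a rough illustration under the assumption that the finger region has already been segmented into a binary mask and that the finger points upward; the segmentation itself and any contour or keypoint analysis a real system would use are outside this sketch.

```python
# Illustrative fingertip localisation on a binary finger mask (cf. claim 7).
import numpy as np

def fingertip_position(finger_mask: np.ndarray):
    """Return (x, y) of the top-most finger pixel as a crude fingertip estimate."""
    ys, xs = np.nonzero(finger_mask)
    if ys.size == 0:
        return None                      # no finger region found in the indication image
    top = ys.argmin()                    # smallest row index = highest point of the finger
    return int(xs[top]), int(ys[top])

mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:6, 3] = 1                         # a toy vertical "finger"
print(fingertip_position(mask))          # -> (3, 2)
```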
CN202110738183.7A 2021-06-30 2021-06-30 Positioning method and device based on biological feature recognition Pending CN113449652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110738183.7A CN113449652A (en) 2021-06-30 2021-06-30 Positioning method and device based on biological feature recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110738183.7A CN113449652A (en) 2021-06-30 2021-06-30 Positioning method and device based on biological feature recognition

Publications (1)

Publication Number Publication Date
CN113449652A true CN113449652A (en) 2021-09-28

Family

ID=77814497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110738183.7A Pending CN113449652A (en) 2021-06-30 2021-06-30 Positioning method and device based on biological feature recognition

Country Status (1)

Country Link
CN (1) CN113449652A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807328A (en) * 2021-11-18 2021-12-17 济南和普威视光电技术有限公司 Target detection method, device and medium based on algorithm fusion
CN113807328B (en) * 2021-11-18 2022-03-18 济南和普威视光电技术有限公司 Target detection method, device and medium based on algorithm fusion

Similar Documents

Publication Publication Date Title
CN110232311B (en) Method and device for segmenting hand image and computer equipment
JP2021524951A Methods, apparatuses, devices and computer readable storage media for identifying aerial handwriting
CN111078083A (en) Method for determining click-to-read content and electronic equipment
TW200823773A (en) A method and apparatus for recognition of handwritten symbols
CN111353501A (en) Book point-reading method and system based on deep learning
CN112052186A (en) Target detection method, device, equipment and storage medium
CN109947273B (en) Point reading positioning method and device
CN112749646A (en) Interactive point-reading system based on gesture recognition
CN113516113A (en) Image content identification method, device, equipment and storage medium
CN110941992B (en) Smile expression detection method and device, computer equipment and storage medium
CN113449652A (en) Positioning method and device based on biological feature recognition
CN110858291A (en) Character segmentation method and device
CN115131693A (en) Text content identification method and device, computer equipment and storage medium
CN111077993B (en) Learning scene switching method, electronic equipment and storage medium
CN111027353A (en) Search content extraction method and electronic equipment
WO2023272604A1 (en) Positioning method and apparatus based on biometric recognition
CN111711758B (en) Multi-pointing test question shooting method and device, electronic equipment and storage medium
CN111432131B (en) Photographing frame selection method and device, electronic equipment and storage medium
CN111563497B (en) Frame question method and device based on moving track, electronic equipment and storage medium
CN113220125A (en) Finger interaction method and device, electronic equipment and computer storage medium
CN110543238A (en) Desktop interaction method based on artificial intelligence
CN111652182B (en) Method and device for identifying suspension gesture, electronic equipment and storage medium
CN111553365B (en) Question selection method and device, electronic equipment and storage medium
AU2021101278A4 (en) System and Method for Automatic Language Detection for Handwritten Text
TWI710969B (en) System and method for language learning using statistical analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination