WO2023272604A1 - Positioning method and device based on biometric identification - Google Patents

Positioning method and device based on biometric identification

Info

Publication number
WO2023272604A1
WO2023272604A1 PCT/CN2021/103682 CN2021103682W WO2023272604A1 WO 2023272604 A1 WO2023272604 A1 WO 2023272604A1 CN 2021103682 W CN2021103682 W CN 2021103682W WO 2023272604 A1 WO2023272604 A1 WO 2023272604A1
Authority
WO
WIPO (PCT)
Prior art keywords
finger
target user
image information
subject
biometric
Prior art date
Application number
PCT/CN2021/103682
Other languages
English (en)
French (fr)
Inventor
王江涛
Original Assignee
东莞市小精灵教育软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东莞市小精灵教育软件有限公司
Priority to PCT/CN2021/103682 priority Critical patent/WO2023272604A1/zh
Publication of WO2023272604A1 publication Critical patent/WO2023272604A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 - Static hand or arm

Definitions

  • the embodiments of the present application relate to the technical field of smart terminals, and in particular to a positioning method and device based on biometric identification.
  • At present, with the rapid development of smart terminals, more and more smart terminal devices such as tutoring machines can meet students' assisted-learning needs.
  • When the smart terminal device performs functions such as point-reading, problem solving, and translation, it detects the position of the user's finger and executes the corresponding function based on the recognition and positioning of the user's finger.
  • To recognize and locate the finger, the smart terminal generally collects finger sample pictures of different people in batches in advance and trains a model so that it achieves a universal effect, which is then used for subsequent finger detection and recognition.
  • However, with this approach, the workload of collecting and labeling training material is relatively large, and sample coverage cannot be guaranteed.
  • The finally trained model may perform quite differently for different people; for people with few collected samples or not included in the training samples, high recognition accuracy may not be guaranteed.
  • Embodiments of the present application provide a positioning method and device based on biometric feature recognition, which can improve finger recognition positioning accuracy and finger recognition efficiency, and solve the problem of finger recognition errors.
  • In a first aspect, an embodiment of the present application provides a positioning method based on biometric identification, including: collecting, in advance, finger pictures at different angles for a target user's finger and extracting the target user's finger biometric features from the finger pictures; when performing finger recognition and positioning, collecting image information of a current subject to be recognized, comparing it with the finger biometric features, and judging whether they match; and, when the image information matches the finger biometric features, determining that the subject to be recognized is the finger of the target user and determining the finger pointing position of the target user according to the position of the subject to be recognized.
  • the finger biometric features include one or more of the following types: finger color, fingerprint pattern, finger joint features, lunula (nail crescent) shape, nail shape, and hand shape features.
  • extracting the finger biometrics of the target user from the finger picture includes:
  • the finger biometrics of the target user are extracted based on the finger biometrics detection model, and the finger biometrics detection model is pre-trained and constructed according to corresponding types of finger biometrics sample pictures.
  • when the finger biometric features include multiple types, judging whether the image information matches the finger biometric features may include: calculating a first similarity between the image information and each corresponding type of finger biometric, calculating the average similarity, and comparing it with a set first similarity threshold.
  • alternatively, judging whether the image information matches the finger biometric features may include: calculating a second similarity between the image information and each corresponding type of finger biometric, counting the proportion of second similarities that reach a set second similarity threshold, and comparing that proportion with a set proportion threshold.
  • the pre-acquisition of finger pictures corresponding to the fingers of the target user at different angles includes:
  • determining the finger pointing position of the target user according to the position of the subject to be identified includes:
  • identifying the fingertip feature of the subject to be identified, determining the fingertip coordinates of the fingertip feature in an indication image, and using the fingertip coordinates as the target user's finger pointing position, the indication image being collected according to the content of the learning page indicated by the subject to be identified.
  • the embodiment of the present application provides a positioning device based on biometric identification, including:
  • the collection module is used to collect finger pictures corresponding to the fingers of the target user in different angles in advance, and extract the finger biological characteristics of the target user from the finger pictures;
  • the matching module is used to collect the image information of the current subject to be identified when performing finger recognition and positioning, compare the image information with the finger biometric features, and judge whether the image information matches the finger biometric features;
  • a positioning module, configured to determine, when it is determined that the image information matches the finger biometric features, that the subject to be identified is the finger of the target user, and to determine the target user's finger pointing position according to the position of the subject to be identified.
  • an electronic device including:
  • the memory is used to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the positioning method based on biometric identification described in the first aspect.
  • the embodiment of the present application provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the positioning method based on biometric identification described in the first aspect.
  • In the embodiments of the present application, finger pictures at different angles are collected in advance for the target user's finger, and the target user's finger biometric features are extracted from the finger pictures; when performing finger recognition and positioning, the image information of the current subject to be recognized is collected and compared with the finger biometric features to judge whether they match; when the image information matches the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the target user's finger pointing position is determined according to the position of the subject to be recognized.
  • FIG. 1 is a flow chart of a positioning method based on biometric identification provided in Embodiment 1 of the present application;
  • FIG. 2 is a flow chart of finger biometric detection in Embodiment 1 of the present application;
  • FIG. 3 is a schematic diagram of a finger pointing operation in Embodiment 1 of the present application;
  • FIG. 4 is a flow chart of finger biometric comparison in Embodiment 1 of the present application;
  • FIG. 5 is a flow chart of another finger biometric comparison method in Embodiment 1 of the present application;
  • FIG. 6 is a schematic structural diagram of a positioning device based on biometric identification provided in Embodiment 2 of the present application;
  • FIG. 7 is a schematic structural diagram of an electronic device provided in Embodiment 3 of the present application.
  • the biometric identification-based positioning method provided in this application aims to improve the efficiency and accuracy of finger identification and positioning by collecting the target user's finger biometrics and performing finger identification and positioning through the finger biometrics.
  • In contrast, the traditional finger recognition approach uses a pre-trained detection and recognition model to detect and recognize fingers.
  • To reach a certain recognition accuracy, the detection and recognition model requires finger sample pictures of different people to be collected in batches in advance and the model to be trained, so the whole process is relatively complicated.
  • Moreover, for people not included in the training samples, the detection and recognition model cannot guarantee high recognition accuracy.
  • a positioning method based on biometric identification according to an embodiment of the present application is provided to solve the recognition error problem existing in the existing finger recognition positioning technology.
  • FIG. 1 shows a flow chart of a positioning method based on biometric identification provided in Embodiment 1 of the present application.
  • the positioning method based on biometric identification provided in this embodiment can be executed by a positioning device based on biometric identification.
  • the positioning device based on biometric identification can be realized by means of software and/or hardware, and the positioning device based on biometric identification can be composed of two or more physical entities, or can be composed of one physical entity.
  • the positioning device based on biometric identification can be a smart terminal device such as a learning machine, a mobile phone, a tablet or a computer.
  • the positioning method based on biometric identification specifically includes:
  • S110. Collect, in advance, finger pictures of the target user's finger at different angles, and extract the finger biometric features of the target user from the finger pictures.
  • a method of pre-collecting the target user's finger biometrics is used for subsequent finger recognition.
  • the finger picture of the target user is collected by the positioning device based on biometric feature recognition, and then the finger biometric feature of the target user is detected based on the finger picture.
  • Optionally, when collecting finger pictures, the biometric identification-based positioning device outputs voice prompts to guide the target user to provide finger gestures at different angles, and collects finger pictures of the target user's finger in real time.
  • According to the different finger gestures with which the user's finger indicates the learning page in actual assisted-learning scenarios, the embodiment of the present application collects finger pictures of these gestures accordingly, so as to accurately recognize the finger pointing position.
  • Taking a learning machine as an example, before using assisted-learning functions such as point-reading, problem solving, and translation, the target user calls up the finger-biometric collection page through human-computer interaction with the learning machine.
  • The target user then turns on the learning machine's camera and uses it to photograph the target user's finger. After the camera is turned on, the learning machine synchronously outputs voice prompt information through its speaker to guide the current target user to photograph the finger in the corresponding finger gestures.
  • Generally, the learning machine presets several pieces of voice prompt information to guide the target user to photograph the finger in the corresponding finger gestures.
  • In addition, each time the learning machine takes a finger picture, it checks the validity of the picture to determine whether the currently collected finger picture for that gesture is valid. For pictures determined to be invalid, the target user is prompted by voice to re-collect the finger picture for that gesture. In this way, finger picture collection for the target user can be completed.
  • For each target user who needs to use the assisted-learning functions, finger pictures must be collected in advance for subsequent recognition of that user's finger. Through targeted collection of the target user's finger pictures, targeted recognition and positioning of the target user's finger can be realized, finger recognition accuracy can be improved, the assisted-learning functions can be made specific to the user, and the user experience can be optimized.
  • Finger recognition can be performed directly based on feature comparison and matching. The whole process is relatively efficient and its recognition accuracy is relatively high.
  • Extracting the finger biometric features of the target user from the finger picture includes:
  • The finger biometric features include one or more of the following types: finger color, fingerprint pattern, finger joint features, lunula (nail crescent) shape, nail shape, and hand shape features. It can be understood that these types of finger biometric features differ between different people's fingers. The more types of finger biometric features collected in advance, the higher the recognition accuracy for the corresponding target user's finger; by collecting multiple types of the target user's finger biometric features for subsequent finger recognition and positioning, more accurate finger recognition can be achieved.
  • The finger biometric detection model is a target detection model based on a convolutional neural network, trained in advance on sample images corresponding to each type of finger biometric so as to detect the various types of finger biometrics.
  • For different types of finger biometrics, detection models can be trained separately; the finger pictures are then input into each finger biometric detection model to detect the corresponding biometric.
  • For example, finger color can be represented by a corresponding color value, while the fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features can be represented by corresponding image crops or contour maps.
  • S120 When performing finger identification and positioning, collect image information of the current subject to be identified, compare the image information with the finger biometrics, and determine whether the image information matches the finger biometrics.
  • the point-to-read learning scene is taken as an example to describe the point-to-read recognition of the target user.
  • the learning machine when using the learning machine for point-to-read learning, put the book within the shooting range of the front camera of the learning machine or the external camera module.
  • the learning machine is placed vertically, and the camera set on the top of one side of the corresponding learning machine screen is facing the place where the book is placed.
  • the point-to-read image including the book page is captured in real time through the camera, and the subject to be recognized is detected and identified on the point-to-read image.
  • When it is recognized that the user is pointing at the book page with a finger whose biometric features have been pre-stored, the learning machine determines the content of the book page currently indicated by the user as the point-read content, queries the corresponding feedback content according to the point-read content, and performs point-read analysis operations such as pronunciation playback and exercise answering.
  • Generally, the learning machine is provided with an original-page database that stores the original page data corresponding to each page of the book; the original page data for each book page can contain multiple corresponding exercise data items, and each exercise data item has corresponding feedback content such as exercise answers and pronunciation audio.
  • When the point-read content indicated by the user (i.e., the exercise data) is determined, the exercise the user is currently asking about can be determined, and the corresponding feedback content can be determined and fed back to the user, realizing the point-reading learning operation of the learning machine.
  • To determine the content the user is point-reading, it is first necessary to determine the book page currently indicated by the user, which is defined as the point-reading image. The point-read content can then be determined according to the point-read coordinates on the book page indicated by the user, and the corresponding exercise data can be obtained by querying based on the point-read content.
  • Since the point-reading image taken while the user is pointing may have the finger occluding part of the book page area, and the interference caused by this occlusion easily affects the query of the original page data, the embodiment of the present application marks the book page area in advance, before the user performs point-reading, for subsequent querying of the original page data.
  • When performing finger recognition and positioning, the biometric identification-based positioning device acquires the current user's point-reading image and crops the image information of the current subject to be identified from that image. Since the embodiment of the present application performs finger recognition based on finger biometric features, when cropping the image information of the subject to be recognized, a finger detection model can be used to detect whether a finger is pointing at the book page in the point-reading image; only when it is determined that a finger is pointing at the current book page is the point-reading image collected and the image information of the current subject to be identified cropped from it. In this way, misrecognition by the device can be reduced and the accuracy of finger recognition and detection improved. It should be noted that, since the embodiment of the present application collects finger biometric features for the target user in advance, the device can determine whether to respond to the current point-reading operation by determining whether the current subject to be identified matches the pre-stored finger biometrics.
  • the embodiment of the present application compares the image information with the pre-stored finger biometrics to determine whether the image information matches the target user's finger.
  • Generally, if the pre-stored finger biometric feature is only one of finger color, fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features, the similarity between the image information and that specified type of finger biometric is computed directly.
  • When the similarity meets the set similarity threshold, the image information is considered to match the target user's finger.
  • When the pre-stored finger biometric features include multiple types, the image information must be compared with each finger biometric feature.
  • the judging whether the image information matches the finger biometric features includes:
  • When comparing multiple types of pre-stored finger biometric features with the image information, the detection and extraction methods described above for each type are used: the corresponding detection model detects and extracts the corresponding finger feature data from the image information of the current subject to be identified. For example, for finger color, the color value of the finger is detected in the image information and compared with the stored finger-color biometric, and the first similarity between the two is calculated. Similarly, for finger biometrics such as the fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features, the detection models extract the corresponding finger feature data from the image information, these data are compared with the corresponding types of stored finger biometrics, and the first similarities are calculated.
  • Based on the first similarities determined for each type of finger biometric, the embodiment of the present application calculates the average of these first similarities and uses this average similarity to characterize how similar the subject to be identified is to the target user's finger. It can be understood that when the average similarity reaches the set first similarity threshold, it is determined that the current image information matches the pre-stored finger biometrics and that the subject to be identified is the target user's finger.
  • judging whether the image information matches the biological characteristics of the finger includes:
  • S1205. Determine whether the ratio information reaches a set ratio threshold, if yes, determine that the image information matches the finger biometric feature, and if not, determine that the image information does not match the finger biometric feature.
  • Similarly, the detection models detect the corresponding finger feature data in the image information, and each is compared with the corresponding type of finger biometric to determine a second similarity.
  • Each second similarity is compared with a preset second similarity threshold. Beforehand, the system presets a second similarity threshold for each type of finger biometric; when the finger feature data reaches the second similarity threshold, that feature data is considered to match the corresponding finger biometric.
  • By counting the number of second similarities that reach the set second similarity threshold, the proportion of second similarities reaching the threshold can be determined. If this proportion reaches the set proportion threshold, the subject to be identified is considered to be the target user's finger.
  • For example, for five types of pre-stored finger biometric features (fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features), the second similarities between the image information and these five biometrics are calculated.
  • If the second similarities of four types (fingerprint pattern, finger joint features, nail shape, and hand shape features) reach the second similarity threshold, the corresponding proportion is determined to be 80%.
  • This proportion is compared with a set proportion threshold (such as 75%); the proportion reaches the set threshold, so the subject to be identified is determined to be the target user's finger.
  • After it is determined that the image information matches the finger biometric features and that the current subject to be identified is the target user's finger, the corresponding position of the subject to be identified can be used as the target user's finger pointing position, and the corresponding assisted-learning function is realized based on that pointing position.
  • Taking the point-reading learning function as an example: after it is determined that the current subject to be recognized is the target user's finger, a response is made to the target user's current point-reading operation. Conversely, after it is determined that the image information does not match the finger biometric features and that the subject to be identified is not the target user's finger, the point-reading function is not enabled, the point-reading operation of that subject is invalid, and the device outputs corresponding voice prompt information to indicate that the current point-reading operation is invalid and that the finger biometrics need to be registered in advance. In this way, the point-reading function can be made specific to the user: users who have not registered their finger biometrics (such as non-owner users) cannot use the point-reading function, while target users whose finger biometrics are pre-registered enjoy fast and efficient finger recognition, optimizing the user experience.
  • the indicated position of the subject to be identified is determined as the reading position based on the indication image.
  • the fingertip coordinates of the fingertip features in the indication image are determined, and the fingertip coordinates are used as the reading position of the subject to be identified
  • the instruction image is collected according to the content corresponding to the instruction learning page of the subject to be identified.
  • the fingertip features of the subject to be identified may be fingertip arcs, nails, and the like.
  • the biometric identification-based positioning device presets a convolutional neural network-based fingertip feature detection model, and uses the fingertip feature detection model to detect fingertip features.
  • If the device recognizes the fingertip feature as a fingertip arc, the first midpoint of the fingertip arc can be calculated, the coordinates of this first midpoint in the coordinate system of the indication image can be obtained, and those coordinates are determined as the fingertip coordinates; if the device recognizes the fingertip feature as a nail, the top edge of the nail can be obtained through image recognition analysis, the second midpoint of the top edge can be determined, the coordinates of this second midpoint in the coordinate system of the indication image can be obtained, and those coordinates are determined as the fingertip coordinates.
  • the device determines several candidate outline regions corresponding to the coordinates of the fingertip from the indication image.
  • the positioning device based on biometric identification can pre-segment the characters in the learning page into several words, and determine the outline area according to the several words obtained by segmentation, and the determined outline area can contain any word or sentences, and the positional relationship between any two outlined areas can be adjacent or separated, but not intersecting.
  • the device may determine several outlining areas that are within a relatively short distance from the coordinates of the fingertip as candidate outlining areas, thereby narrowing the screening range of the target outlining area.
  • the candidate outline area with the shortest distance is determined as the target outline area, that is, the reading position of the subject to be recognized.
  • the text content included in the target outline area is determined as the point-to-read content, so as to complete the positioning based on the biometric feature recognition in the embodiment of the present application.
  • Optionally, when determining the candidate outline region with the shortest distance as the target outline region and its text content as the point-read content, the biometric identification-based positioning device may first determine the candidate outline region with the shortest distance as the target outline region and then judge whether the number of target outline regions is unique.
  • If it is, the text content contained in the target outline region is determined as the point-read content; if not, the pointing direction of the user's fingertip is recognized from the image, and the target direction from the fingertip coordinates to the center point of each target outline region is determined.
  • The angle between each target direction and the pointing direction is calculated, and the text content contained in the target outline region whose target direction forms the smallest angle is determined as the point-read content.
  • the pointing position of the subject to be recognized on the pointing image is determined by referring to the above-mentioned finger positioning method, and then the learning content corresponding to the pointing position is determined. If the translation function is currently used, the translation content corresponding to the learning content indicated by the target user is output. If the problem-solving function is currently used, the solution to the exercises corresponding to the learning content indicated by the current user is output. According to the actual use scenario of the auxiliary learning function, through the identification and positioning of the target user's finger, respond to the user's instruction operation, so as to realize the corresponding learning function.
  • the gesture feature of the subject to be recognized is determined through image information, and a corresponding application function is executed according to the gesture feature.
  • the biometric identification-based positioning device can pre-store gesture feature information that triggers various application functions, and when the gesture feature of the subject to be identified corresponds to the pre-stored finger feature information, the corresponding application function is triggered. In this way, the triggering of the application function can be facilitated, the user operation efficiency can be improved, and the user operation experience can be optimized.
  • In summary, finger pictures at different angles are collected in advance for the target user's finger, and the finger biometric features of the target user are extracted from the finger pictures; when performing finger recognition and positioning, the image information of the current subject to be identified is collected and compared with the finger biometric features to judge whether they match; when it is determined that the image information matches the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger pointing position of the target user is determined according to the position of the subject to be recognized.
  • the accuracy of finger recognition and positioning and the efficiency of finger recognition can be improved, and the user experience of the auxiliary learning function can be optimized.
  • finger identification and positioning can be performed by matching the finger biometrics of the target user, so that the specificity of the auxiliary learning function can be realized and misidentification can be avoided.
  • FIG. 6 is a schematic structural diagram of a positioning device based on biometric identification provided in Embodiment 2 of the present application.
  • the biometric identification-based positioning device provided in this embodiment specifically includes: a collection module 21 , a matching module 22 and a positioning module 23 .
  • the collection module 21 is used to collect finger pictures of different angles corresponding to the finger of the target user in advance, and extract the finger biometric feature of the target user from the finger picture;
  • the matching module 22 is used to collect the image information of the current subject to be identified when performing finger recognition and positioning, compare the image information with the finger biometrics, and determine whether the image information matches the finger biometrics;
  • the positioning module 23 is configured to determine, when it is determined that the image information matches the finger biometric features, that the subject to be identified is the finger of the target user, and to determine the target user's finger pointing position according to the position of the subject to be identified.
  • the biological characteristics of the finger include one or more types of finger color, fingerprint path, finger joint characteristics, crescent shape, nail shape and hand shape characteristics.
  • the collection module 21 includes:
  • the input unit is used to input the finger picture into the pre-trained finger biometric detection model
  • the detection unit is configured to extract the target user's finger biometrics based on the finger biometrics detection model, and the finger biometrics detection model is pre-trained and constructed according to corresponding types of finger biometrics sample pictures.
  • the matching module 22 includes:
  • a first calculation unit configured to calculate a first similarity between the image information and each corresponding type of the finger biological feature
  • the first judging unit is used to calculate an average similarity from the multiple first similarities and judge whether the average similarity reaches the set first similarity threshold; if so, it is determined that the image information matches the finger biometric features, and if not, it is determined that the image information does not match the finger biometric features.
  • the matching module 22 also includes:
  • a second calculation unit configured to calculate a second degree of similarity between the image information and each corresponding type of the finger biological feature
  • a statistical unit configured to count the proportion information of each of the second similarities reaching the set second similarity threshold
  • the second judging unit is used to judge whether the proportion information reaches a set proportion threshold; if so, it is determined that the image information matches the finger biometrics, and if not, it is determined that the image information does not match the finger biometrics.
  • the collection module 21 includes:
  • the prompting unit is configured to output voice prompts to guide the target user to provide finger gestures at different angles when collecting finger pictures, and collect finger pictures corresponding to the target user's fingers in real time.
  • the positioning module 23 includes:
  • an identification unit, configured to identify the fingertip feature of the subject to be identified, determine the fingertip coordinates of the fingertip feature in an indication image, and use the fingertip coordinates as the target user's finger pointing position, the indication image being collected according to the content of the learning page indicated by the subject to be identified.
  • As described above, finger pictures at different angles are collected in advance for the target user's finger, and the finger biometric features of the target user are extracted from the finger pictures; when performing finger recognition and positioning, the image information of the current subject to be identified is collected and compared with the finger biometric features to judge whether they match; when it is determined that the image information matches the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger pointing position of the target user is determined according to the position of the subject to be recognized.
  • the accuracy of finger recognition and positioning and the efficiency of finger recognition can be improved, and the user experience of the auxiliary learning function can be optimized.
  • finger identification and positioning can be performed by matching the finger biometrics of the target user, so that the specificity of the auxiliary learning function can be realized and misidentification can be avoided.
  • the positioning device based on biometric identification provided in Embodiment 2 of the present application can be used to execute the positioning method based on biometric identification provided in Embodiment 1 above, and has corresponding functions and beneficial effects.
  • Embodiment 3 of the present application provides an electronic device.
  • the electronic device includes: a processor 31 , a memory 32 , a communication module 33 , an input device 34 and an output device 35 .
  • the number of processors in the electronic device may be one or more, and the number of memories in the electronic device may be one or more.
  • the processor, memory, communication module, input device and output device of the electronic device can be connected through a bus or in other ways.
  • the memory 32 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the positioning method based on biometric identification described in any embodiment of the present application (for example, the acquisition module, matching module, and positioning module in the biometric identification-based positioning device).
  • the memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required by a function; the data storage area may store data created according to the use of the device, and the like.
  • the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage devices.
  • the memory may further include memory located remotely from the processor, which remote memory may be connected to the device via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the communication module 33 is used for data transmission.
  • the processor 31 executes various functional applications and data processing of the device by running software programs, instructions and modules stored in the memory, that is, realizes the above-mentioned positioning method based on biometric identification.
  • the input device 34 can be used for receiving inputted numerical or character information, and generating key signal input related to user setting and function control of the device.
  • the output device 35 may include a display device such as a display screen.
  • the electronic device provided above can be used to implement the positioning method based on biometric identification provided in the first embodiment above, and has corresponding functions and beneficial effects.
  • The embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute a positioning method based on biometric identification. The method includes: collecting finger pictures at different angles for the target user's finger in advance, and extracting the target user's finger biometric features from the finger pictures; when performing finger recognition and positioning, collecting image information of the current subject to be identified, comparing the image information with the finger biometric features, and judging whether the image information matches the finger biometric features; and, when determining that the image information matches the finger biometric features, determining that the subject to be identified is the finger of the target user and determining the target user's finger pointing position according to the position of the subject to be identified.
  • Storage medium: any of various types of memory devices or storage devices.
  • The term "storage medium" is intended to include: installation media such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk or optical storage); registers or other similar types of memory elements, etc.
  • the storage medium may also include other types of memory or combinations thereof.
  • the storage medium may be located in a first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network such as the Internet.
  • the second computer system may provide program instructions to the first computer for execution.
  • storage medium may include two or more storage media that reside in different locations, for example in different computer systems connected by a network.
  • the storage medium may store program instructions (eg embodied as computer programs) executable by one or more processors.
  • the computer-executable instructions are not limited to the positioning method based on biometric identification described above, and may also execute related operations in the positioning method based on biometric identification provided in any embodiment of the present application.
  • the positioning device, storage medium, and electronic device based on biometric identification provided in the above embodiments can execute the positioning method based on biometric identification provided in any embodiment of the present application.
  • For technical details not described in detail in the above embodiments, reference may be made to the positioning method based on biometric identification provided in any embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Positioning method and device based on biometric identification. Finger pictures at different angles are collected in advance for a target user's finger, and the target user's finger biometric features are extracted from the finger pictures; when finger recognition and positioning is performed, image information of a current subject to be recognized is collected, the image information is compared with the finger biometric features, and whether the image information matches the finger biometric features is judged; when it is determined that the image information matches the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger pointing position of the target user is determined according to the position of the subject to be recognized. By collecting the target user's finger biometric features in advance and performing finger recognition and positioning on the basis of finger biometric matching, finger recognition and positioning accuracy and finger recognition efficiency can be improved, and the experience of using the assisted-learning functions can be optimized.

Description

Positioning method and device based on biometric identification
Technical Field
The embodiments of the present application relate to the technical field of smart terminals, and in particular to a positioning method and device based on biometric identification.
Background Art
At present, with the rapid development of smart terminal devices, more and more smart terminal devices (such as tutoring machines) can meet students' assisted-learning needs. When implementing functions such as point-reading, problem solving, and translation, a smart terminal device detects the position of the user's finger and executes the corresponding function based on the recognition and positioning of the user's finger.
When recognizing and locating a finger, a smart terminal generally collects finger sample pictures of different people in batches in advance and trains a model so that the model achieves a universal effect, which is then used for subsequent finger detection and recognition.
However, with this approach of pre-training a detection and recognition model, the workload of collecting and labeling training material is relatively large, and sample coverage cannot be guaranteed. The finally trained model may perform quite differently for different people; for people with few collected samples or not included in the training samples, high recognition accuracy may not be guaranteed.
Summary of the Invention
Embodiments of the present application provide a positioning method and device based on biometric identification, which can improve finger recognition and positioning accuracy and finger recognition efficiency and solve the problem of finger recognition errors.
In a first aspect, an embodiment of the present application provides a positioning method based on biometric identification, including:
collecting, in advance, finger pictures at different angles for a target user's finger, and extracting finger biometric features of the target user from the finger pictures;
when performing finger recognition and positioning, collecting image information of a current subject to be recognized, comparing the image information with the finger biometric features, and judging whether the image information matches the finger biometric features;
when it is determined that the image information matches the finger biometric features, determining that the subject to be recognized is the finger of the target user, and determining a finger pointing position of the target user according to the position of the subject to be recognized.
Further, the finger biometric features include one or more of the following types: finger color, fingerprint pattern, finger joint features, lunula (nail crescent) shape, nail shape, and hand shape features.
Further, extracting the finger biometric features of the target user from the finger pictures includes:
inputting the finger pictures into a pre-trained finger biometric detection model;
extracting the finger biometric features of the target user based on the finger biometric detection model, the finger biometric detection model being trained and constructed in advance from sample pictures of the corresponding types of finger biometrics.
Further, when the finger biometric features include multiple types, judging whether the image information matches the finger biometric features includes:
calculating a first similarity between the image information and each corresponding type of the finger biometric features;
calculating an average similarity from the multiple first similarities, and judging whether the average similarity reaches a set first similarity threshold; if so, determining that the image information matches the finger biometric features, and if not, determining that the image information does not match the finger biometric features.
Further, when the finger biometric features include multiple types, judging whether the image information matches the finger biometric features includes:
calculating a second similarity between the image information and each corresponding type of the finger biometric features;
counting the proportion of the second similarities that reach a set second similarity threshold;
judging whether the proportion reaches a set proportion threshold; if so, determining that the image information matches the finger biometric features, and if not, determining that the image information does not match the finger biometric features.
Further, collecting, in advance, finger pictures at different angles for the target user's finger includes:
when collecting finger pictures, outputting voice prompts to guide the target user to provide finger gestures at different angles, and collecting finger pictures of the target user's finger in real time.
Further, determining the finger pointing position of the target user according to the position of the subject to be recognized includes:
recognizing a fingertip feature of the subject to be recognized, determining fingertip coordinates of the fingertip feature in an indication image, and using the fingertip coordinates as the finger pointing position of the target user, the indication image being captured according to the content of the learning page indicated by the subject to be recognized.
In a second aspect, an embodiment of the present application provides a positioning device based on biometric identification, including:
a collection module, configured to collect, in advance, finger pictures at different angles for a target user's finger, and extract finger biometric features of the target user from the finger pictures;
a matching module, configured to collect image information of a current subject to be recognized when performing finger recognition and positioning, compare the image information with the finger biometric features, and judge whether the image information matches the finger biometric features;
a positioning module, configured to determine, when it is determined that the image information matches the finger biometric features, that the subject to be recognized is the finger of the target user, and determine a finger pointing position of the target user according to the position of the subject to be recognized.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory and one or more processors;
the memory being configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the positioning method based on biometric identification described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the positioning method based on biometric identification described in the first aspect.
In the embodiments of the present application, finger pictures at different angles are collected in advance for the target user's finger, and the finger biometric features of the target user are extracted from the finger pictures; when performing finger recognition and positioning, image information of the current subject to be recognized is collected, the image information is compared with the finger biometric features, and whether they match is judged; when it is determined that the image information matches the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger pointing position of the target user is determined according to the position of the subject to be recognized. With the above technical means, by collecting the target user's finger biometric features in advance and performing finger recognition and positioning based on finger biometric matching, finger recognition and positioning accuracy and finger recognition efficiency can be improved, and the experience of using the assisted-learning functions can be optimized. Moreover, performing finger recognition and positioning through matching against the target user's finger biometric features makes the assisted-learning functions specific to that user and avoids misrecognition.
Brief Description of the Drawings
FIG. 1 is a flowchart of a positioning method based on biometric identification provided in Embodiment 1 of the present application;
FIG. 2 is a flowchart of finger biometric detection in Embodiment 1 of the present application;
FIG. 3 is a schematic diagram of a finger pointing operation in Embodiment 1 of the present application;
FIG. 4 is a flowchart of finger biometric comparison in Embodiment 1 of the present application;
FIG. 5 is a flowchart of another finger biometric comparison method in Embodiment 1 of the present application;
FIG. 6 is a schematic structural diagram of a positioning device based on biometric identification provided in Embodiment 2 of the present application;
FIG. 7 is a schematic structural diagram of an electronic device provided in Embodiment 3 of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, specific embodiments of the present application are described in further detail below with reference to the accompanying drawings. It can be understood that the specific embodiments described here are only used to explain the present application and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the entire content. Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted in flowcharts. Although a flowchart describes the operations (or steps) as sequential processing, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations can be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
The positioning method based on biometric identification provided in the present application aims to improve finger recognition and positioning efficiency and accuracy by collecting the target user's finger biometric features and performing finger recognition and positioning through those features. In contrast, the traditional finger recognition approach uses a pre-trained detection and recognition model to detect and recognize fingers; to reach a certain recognition accuracy, the detection and recognition model requires finger sample pictures of different people to be collected in batches in advance and the model to be trained, which makes the whole process relatively complicated. Moreover, for people not included in the training samples, the detection and recognition model cannot guarantee high recognition accuracy. On this basis, the positioning method based on biometric identification of the embodiments of the present application is provided to solve the recognition error problem of existing finger recognition and positioning technology.
Embodiment 1:
FIG. 1 shows a flowchart of a positioning method based on biometric identification provided in Embodiment 1 of the present application. The positioning method based on biometric identification provided in this embodiment can be executed by a biometric identification-based positioning device, which can be implemented in software and/or hardware and can be composed of one, two, or more physical entities. Generally, the biometric identification-based positioning device can be a smart terminal device such as a learning machine, a mobile phone, a tablet, or a computer.
The following description takes the biometric identification-based positioning device as the subject executing the positioning method based on biometric identification. Referring to FIG. 1, the positioning method based on biometric identification specifically includes:
S110. Collect, in advance, finger pictures at different angles for the target user's finger, and extract the finger biometric features of the target user from the finger pictures.
Specifically, when performing finger recognition, the embodiment of the present application pre-collects the target user's finger biometric features for subsequent finger recognition. The biometric identification-based positioning device collects finger pictures of the target user and then detects the target user's finger biometric features based on the finger pictures.
Optionally, when collecting finger pictures, the biometric identification-based positioning device outputs voice prompts to guide the target user to provide finger gestures at different angles, and collects finger pictures of the target user's finger in real time. According to the different finger gestures with which the user's finger indicates the learning page in actual assisted-learning scenarios, the embodiment of the present application collects finger pictures of these gestures accordingly, so as to accurately recognize the finger pointing position.
Taking a learning machine as an example, before using assisted-learning functions of the learning machine such as point-reading, problem solving, and translation, the target user calls up the finger-biometric collection page through human-computer interaction with the learning machine, turns on the learning machine's camera, and uses the camera to photograph the target user's finger. After the camera is turned on, the learning machine synchronously outputs voice prompt information through its speaker to guide the current target user to photograph the finger in the corresponding finger gestures. Generally, the learning machine presets several pieces of voice prompt information to guide the target user to photograph the finger in the corresponding gestures. In addition, each time the learning machine takes a finger picture, it checks the validity of the picture to determine whether the currently collected finger picture for that gesture is valid; for pictures determined to be invalid, the target user is prompted by voice to re-collect the finger picture for that gesture. In this way, finger picture collection for the target user is completed. It should be noted that in the embodiment of the present application, finger pictures must be collected in advance for every target user who needs to use the assisted-learning functions, for subsequent recognition of that user's finger. Through targeted collection of the target user's finger pictures, targeted recognition and positioning of the target user's finger can be realized, finger recognition accuracy can be improved, the assisted-learning functions can be made specific to the user, and the user experience can be optimized. It can be understood that, since finger pictures must be collected in advance before subsequent finger recognition can be performed, finger recognition cannot be performed for users whose finger pictures have not been registered, whereas for target users whose finger pictures have been registered in advance, finger recognition can be performed directly through feature comparison and matching; the whole process is relatively efficient and its recognition accuracy is relatively high.
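The guided capture flow described above (voice prompt, capture, validity check, re-prompt on failure) can be pictured as a simple loop. The following is only an illustrative sketch, not the patent's implementation: the prompt texts, the `capture_image` and `speak` callables, and the brightness/contrast-based validity check are all hypothetical stand-ins for whatever a real device would use.

```python
from typing import Callable, Dict, List

import numpy as np

# Hypothetical gesture prompts; a real device would define its own set.
GESTURE_PROMPTS: List[str] = [
    "Please point straight at the page.",
    "Please tilt your finger to the left.",
    "Please tilt your finger to the right.",
]

def is_valid_finger_picture(image: np.ndarray, min_contrast: float = 10.0) -> bool:
    """Toy validity check: reject frames that are too dark or too flat.

    The text only says each captured picture is checked for validity; the
    actual criterion is not specified, so simple image statistics are used
    here as a placeholder.
    """
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    return gray.mean() > 30.0 and gray.std() > min_contrast

def collect_finger_pictures(
    capture_image: Callable[[], np.ndarray],  # e.g. a camera read callback
    speak: Callable[[str], None],             # e.g. a TTS / speaker callback
    max_retries: int = 3,
) -> Dict[str, np.ndarray]:
    """Guide the target user through each gesture and keep only valid shots."""
    collected: Dict[str, np.ndarray] = {}
    for prompt in GESTURE_PROMPTS:
        for _ in range(max_retries):
            speak(prompt)
            frame = capture_image()
            if is_valid_finger_picture(frame):
                collected[prompt] = frame
                break
            speak("That picture was not clear, please try again.")
    return collected
```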
Specifically, based on the collected finger pictures of the target user, the embodiment of the present application detects and extracts the target user's finger biometric features from the finger pictures for subsequent finger recognition. Referring to FIG. 2, extracting the finger biometric features of the target user from the finger pictures includes:
S1101. Input the finger pictures into a pre-trained finger biometric detection model;
S1102. Extract the finger biometric features of the target user based on the finger biometric detection model, the finger biometric detection model being trained and constructed in advance from sample pictures of the corresponding types of finger biometrics.
In the embodiment of the present application, the finger biometric features include one or more of the following types: finger color, fingerprint pattern, finger joint features, lunula (nail crescent) shape, nail shape, and hand shape features. It can be understood that these types of finger biometric features differ between different people's fingers. The more types of finger biometric features collected in advance, the higher the recognition accuracy for the corresponding target user's finger; by collecting multiple types of the target user's finger biometric features for subsequent finger recognition and positioning, more accurate finger recognition can be achieved.
When performing finger biometric recognition, the various types of finger biometric features are detected and recognized through a pre-constructed finger biometric detection model. The finger biometric detection model is a target detection model based on a convolutional neural network, trained in advance on sample images corresponding to each type of finger biometric so as to detect the various types of finger biometrics. For different types of finger biometrics, detection models can be trained separately, and the finger pictures are then input into each finger biometric detection model to detect the corresponding biometric.
It should be noted that, since multiple finger pictures with different finger gestures are collected for the target user, when detecting the finger biometrics, each type of finger biometric yields multiple detection results characterizing that biometric, and these detection results correspond to the different finger gestures. Moreover, different types of finger biometrics have different information formats. For example, finger color can be represented by a corresponding color value, while the fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features can be represented by corresponding image crops or contour maps.
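As one way to picture how per-type biometrics with different formats (a color value versus image crops or contours) might be stored and extracted, the sketch below builds a minimal per-user profile and computes a finger-color feature from a masked region. This is an assumption-laden illustration: the profile fields, the masking step, and the color-similarity mapping are not specified by the text, and a real system would obtain the finger mask from its detection model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np

@dataclass
class FingerBiometricProfile:
    """Pre-collected biometrics for one target user.

    Each feature type may have several entries, one per captured gesture,
    and different types use different representations (a color value vs.
    image crops or contour maps).
    """
    finger_color: List[np.ndarray] = field(default_factory=list)   # mean BGR values
    nail_crops: List[np.ndarray] = field(default_factory=list)     # image patches
    contour_maps: Dict[str, List[np.ndarray]] = field(default_factory=dict)

def extract_finger_color(image: np.ndarray, finger_mask: np.ndarray) -> np.ndarray:
    """Mean color of the masked finger region (one possible 'finger color' feature)."""
    pixels = image[finger_mask.astype(bool)]
    return pixels.mean(axis=0) if len(pixels) else np.zeros(3)

def color_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Map the color distance to a similarity score in [0, 1]."""
    return float(np.exp(-np.linalg.norm(a - b) / 64.0))

# Usage sketch: build a profile entry from one pre-collected picture.
profile = FingerBiometricProfile()
demo_img = np.full((32, 32, 3), (60, 120, 190), dtype=np.float32)  # stand-in picture
demo_mask = np.ones((32, 32), dtype=np.uint8)                      # stand-in finger mask
profile.finger_color.append(extract_finger_color(demo_img, demo_mask))
```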
S120. When performing finger recognition and positioning, collect image information of the current subject to be recognized, compare the image information with the finger biometric features, and judge whether the image information matches the finger biometric features.
By way of example, the point-reading learning scenario is used to describe point-reading recognition of the target user. Referring to FIG. 3, when using the learning machine for point-reading learning, the book is placed within the shooting range of the learning machine's front camera or an external camera module. The learning machine is placed vertically, with the camera set at the top of the side bearing the screen facing the place where the book is placed; the camera captures, in real time, a point-reading image containing the book page, and the subject to be recognized is detected and recognized in the point-reading image. When it is recognized that the user is pointing at the book page with a finger whose biometric features have been pre-stored, the learning machine determines the content of the book page currently indicated by the user as the point-read content, queries the corresponding feedback content according to the point-read content, and performs point-read analysis operations such as pronunciation playback and exercise answering. Generally, the learning machine is provided with an original-page database, which stores original page data for each page of the book; the original page data for each book page can contain multiple corresponding exercise data items, and each exercise data item has corresponding feedback content such as exercise answers and pronunciation audio. When the point-read content indicated by the user (i.e., the exercise data) is determined, the exercise currently asked about by the user can be determined, and the corresponding feedback content can be determined and fed back to the user, realizing the point-reading learning operation of the learning machine. Further, to determine the user's point-read content, the book page currently indicated by the user must first be determined; it is defined as the point-reading image. The point-read content can then be determined from the point-read coordinates on the book page indicated by the user, and the corresponding exercise data can be obtained by querying based on the point-read content. Optionally, since the point-reading image taken while the user is pointing may have the finger occluding part of the book page area, and the interference caused by this occlusion easily affects the query of the original page data, the embodiment of the present application marks the book page area in advance, before the user performs point-reading, for subsequent querying of the original page data.
Specifically, when performing finger recognition and positioning, the biometric identification-based positioning device acquires the current user's point-reading image and crops the image information of the current subject to be recognized from the point-reading image. Since the embodiment of the present application performs finger recognition based on finger biometric features, when cropping the image information of the subject to be recognized, a finger detection model can be used to detect whether a finger is pointing at the book page in the point-reading image; only when it is determined that a finger is pointing at the current book page is the point-reading image collected and the image information of the current subject to be recognized cropped from it. In this way, misrecognition by the device can be reduced and the accuracy of finger recognition and detection can be improved. It should be noted that, since the embodiment of the present application collects finger biometric features for the target user in advance, the device can determine whether to respond to the current point-reading operation by determining whether the current subject to be recognized matches the pre-stored finger biometrics.
Further, after collecting the image information of the current subject to be recognized, the embodiment of the present application compares this image information with the pre-stored finger biometric features to judge whether the image information matches the target user's finger. Generally, if the pre-stored finger biometric feature is only one of finger color, fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features, the similarity between the image information and that specified type of finger biometric is computed directly; when the similarity meets a set similarity threshold, the image information is considered to match the target user's finger. When the pre-stored finger biometric features include multiple types, the image information must be compared with each finger biometric feature. Referring to FIG. 4, when the finger biometric features include multiple types, judging whether the image information matches the finger biometric features includes:
S1201. Calculate a first similarity between the image information and each corresponding type of the finger biometric features;
S1202. Calculate an average similarity from the multiple first similarities, and judge whether the average similarity reaches a set first similarity threshold; if so, determine that the image information matches the finger biometric features, and if not, determine that the image information does not match the finger biometric features.
For multiple types of pre-stored finger biometric features, each is compared with the image information. During comparison, following the detection and extraction methods described above for each type of finger biometric, the corresponding detection model detects and extracts the corresponding finger feature data from the image information of the current subject to be recognized. For example, for finger color, the color value of the finger is detected in the image information and compared with the stored finger-color biometric, and the first similarity between the two is calculated. Similarly, for comparisons of finger biometrics such as the fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features, the detection models extract the corresponding finger feature data from the image information, these feature data are compared with the corresponding types of stored finger biometrics, and the first similarities between them are calculated.
Further, based on the first similarities determined for each type of finger biometric, the embodiment of the present application calculates the average of these first similarities and uses this average similarity to characterize the degree of similarity between the subject to be recognized and the target user's finger. It can be understood that, when the average similarity reaches the set first similarity threshold, it is determined that the current image information matches the pre-stored finger biometrics and that the subject to be recognized is the target user's finger.
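The average-similarity decision of S1201 to S1202 reduces to averaging per-type similarity scores and comparing the result with the first similarity threshold. The sketch below assumes the per-type first similarities have already been computed by the respective detection models and normalized to [0, 1]; the threshold and the example numbers are arbitrary illustrations, not values prescribed by the text.

```python
from typing import Dict

def matches_by_average(first_similarities: Dict[str, float],
                       first_threshold: float = 0.8) -> bool:
    """S1201-S1202: average the per-type first similarities and compare the
    average with the set first similarity threshold."""
    if not first_similarities:
        return False
    average = sum(first_similarities.values()) / len(first_similarities)
    return average >= first_threshold

# Example: per-type similarities between the captured image information
# and the stored biometrics (illustrative numbers only).
sims = {"finger_color": 0.92, "fingerprint_pattern": 0.81,
        "joint_features": 0.77, "lunula_shape": 0.85, "hand_shape": 0.88}
print(matches_by_average(sims))  # True for the 0.8 threshold
```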
Optionally, when the finger biometric features include multiple types, the embodiment of the present application also provides another finger biometric comparison method. Referring to FIG. 5, judging whether the image information matches the finger biometric features includes:
S1203. Calculate a second similarity between the image information and each corresponding type of the finger biometric features;
S1204. Count the proportion of the second similarities that reach a set second similarity threshold;
S1205. Judge whether the proportion reaches a set proportion threshold; if so, determine that the image information matches the finger biometric features, and if not, determine that the image information does not match the finger biometric features.
Similarly, the detection models detect the corresponding finger feature data in the image information, and each is compared with the corresponding type of finger biometric to determine a second similarity. Each second similarity is compared with a preset second similarity threshold. Beforehand, the system presets a second similarity threshold for each type of finger biometric; when the finger feature data reaches the second similarity threshold, that finger feature data is considered to match the corresponding finger biometric.
Further, by counting the number of second similarities that reach the set second similarity threshold and determining the proportion of this number among all the second similarities, the proportion of second similarities reaching the set threshold can be determined. If this proportion reaches the set proportion threshold, the subject to be recognized is considered to be the target user's finger. For example, for five types of pre-stored finger biometric features (fingerprint pattern, finger joint features, lunula shape, nail shape, and hand shape features), the second similarities between the image information and these five biometrics are calculated; if the second similarities of four types (fingerprint pattern, finger joint features, nail shape, and hand shape features) reach the second similarity threshold, the corresponding proportion is determined to be 80%. This proportion is compared with a set proportion threshold (such as 75%); the proportion reaches the set threshold, so the subject to be recognized is the target user's finger.
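The proportion-based decision of S1203 to S1205 instead thresholds each per-type similarity separately and checks what fraction passes. In the sketch below, the per-type thresholds and the 75% proportion threshold mirror the worked example in the text, but they are illustrative values only, and the dictionary keys are hypothetical feature-type names.

```python
from typing import Dict

def matches_by_proportion(second_similarities: Dict[str, float],
                          second_thresholds: Dict[str, float],
                          proportion_threshold: float = 0.75) -> bool:
    """S1203-S1205: count the share of per-type second similarities that
    reach their second similarity threshold and compare that share with
    the set proportion threshold."""
    if not second_similarities:
        return False
    passed = sum(
        1 for name, sim in second_similarities.items()
        if sim >= second_thresholds.get(name, 1.0)
    )
    proportion = passed / len(second_similarities)
    return proportion >= proportion_threshold

# Example mirroring the text: 4 of 5 types pass, i.e. an 80% proportion.
sims = {"fingerprint_pattern": 0.83, "joint_features": 0.79,
        "lunula_shape": 0.41, "nail_shape": 0.88, "hand_shape": 0.90}
thresholds = {name: 0.7 for name in sims}
print(matches_by_proportion(sims, thresholds))  # True, since 80% >= 75%
```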
Based on the above comparison and matching methods, it can be determined whether the image information of the current subject to be recognized matches the pre-stored finger biometrics, i.e., whether the subject to be recognized is the target user's finger. There are many ways to judge whether the image information matches the finger biometrics; the embodiment of the present application imposes no fixed restriction on this, and details are not repeated here.
S130. When it is determined that the image information matches the finger biometric features, determine that the subject to be recognized is the finger of the target user, and determine the finger pointing position of the target user according to the position of the subject to be recognized.
Further, after it is determined that the image information matches the finger biometric features and that the current subject to be recognized is the target user's finger, the corresponding position of the subject to be recognized can be used as the target user's finger pointing position, and the corresponding assisted-learning function is realized based on that pointing position.
By way of example, consider the point-reading learning function. After it is determined that the current subject to be recognized is the target user's finger, a response is made to the target user's current point-reading operation. Conversely, after it is determined that the image information does not match the finger biometric features and that the subject to be recognized is not the target user's finger, the point-reading function is not enabled, the point-reading operation of that subject is invalid, and the device outputs corresponding voice prompt information to indicate that the current point-reading operation is invalid and that the finger biometrics need to be registered in advance. In this way, the point-reading function can be made specific to the user: users who have not registered their finger biometrics (such as non-owner users) cannot use the point-reading function, while for target users whose finger biometrics have been registered in advance, fast and efficient finger recognition can be achieved, optimizing the user experience.
Specifically, after it is determined that the current subject to be recognized is the target user's finger, an indication image of the current subject pointing at the book page is extracted, and the indicated position of the subject to be recognized is determined as the point-reading position based on the indication image. Based on the indication image, the fingertip feature of the subject to be recognized is recognized, the fingertip coordinates of the fingertip feature in the indication image are determined, and the fingertip coordinates are used as the point-reading position of the subject to be recognized; the indication image is captured according to the content of the learning page indicated by the subject to be recognized.
In the embodiment of the present application, the fingertip feature of the subject to be recognized may be a fingertip arc, a nail, or the like. Likewise, the biometric identification-based positioning device presets a fingertip feature detection model based on a convolutional neural network, and uses this model to detect the fingertip feature. If the device recognizes the fingertip feature as a fingertip arc, the first midpoint of the fingertip arc can be calculated, the coordinates of this first midpoint in the coordinate system of the indication image can be obtained, and those coordinates are determined as the fingertip coordinates; if the device recognizes the fingertip feature as a nail, the top edge of the nail can be obtained through image recognition analysis, the second midpoint of the top edge of the nail can be determined, the coordinates of this second midpoint in the coordinate system of the indication image can be obtained, and those coordinates are determined as the fingertip coordinates.
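Both fingertip cases reduce to taking the midpoint of a detected curve or edge in the coordinate system of the indication image. The sketch below assumes the fingertip arc (or the nail's top edge) has already been extracted as an ordered sequence of pixel coordinates; how that extraction is done is left to the detection model, and "midpoint" is interpreted here as the point halfway along the curve's length, which is one reasonable reading of the text.

```python
from typing import Sequence, Tuple

Point = Tuple[float, float]

def midpoint_of_polyline(points: Sequence[Point]) -> Point:
    """Midpoint (by arc length) of an ordered polyline such as a fingertip
    arc or the top edge of a nail, in indication-image coordinates."""
    if len(points) == 1:
        return points[0]
    # Cumulative segment lengths along the polyline.
    lengths = []
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        lengths.append(total)
    half = total / 2.0
    # Walk segments until half the length is covered, then interpolate.
    prev = 0.0
    for i, seg_end in enumerate(lengths):
        if seg_end >= half:
            (x0, y0), (x1, y1) = points[i], points[i + 1]
            t = (half - prev) / (seg_end - prev) if seg_end > prev else 0.0
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        prev = seg_end
    return points[-1]

# Example: fingertip coordinates from a detected arc (hypothetical points).
arc = [(100.0, 240.0), (110.0, 232.0), (122.0, 230.0), (134.0, 233.0), (144.0, 241.0)]
fingertip_xy = midpoint_of_polyline(arc)
```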
Further, the device determines, from the indication image, several candidate outline regions corresponding to the fingertip coordinates. In the embodiment of the present application, the biometric identification-based positioning device can pre-segment the characters in the learning page into several words and determine outline regions from the words obtained by segmentation; each determined outline region can contain any word or sentence, and the positional relationship between any two outline regions can be adjacent or separate, but not intersecting. The device can determine the several outline regions within a relatively short distance of the fingertip coordinates as candidate outline regions, thereby narrowing the screening range for the target outline region. The point coordinates of the center of each candidate outline region in the coordinate system of the indication image are then obtained, and the distance between each candidate region's center point and the fingertip coordinates can be calculated by the Pythagorean theorem, so that the calculated distances are more precise. Finally, the candidate outline region with the shortest distance is determined as the target outline region, that is, the point-reading position of the subject to be recognized; and the text content contained in the target outline region is determined as the point-read content, completing the positioning based on biometric identification of the embodiment of the present application.
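Selecting the target outline region is essentially a nearest-center search: compute the Euclidean (Pythagorean) distance from the fingertip coordinates to each candidate region's center and keep the closest one. The region representation below (axis-aligned boxes carrying their text) and the distance cut-off for candidates are simplifying assumptions for illustration only.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class OutlineRegion:
    text: str
    bbox: Tuple[float, float, float, float]   # (x1, y1, x2, y2) in image coordinates

    @property
    def center(self) -> Tuple[float, float]:
        x1, y1, x2, y2 = self.bbox
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def candidate_regions(regions: List[OutlineRegion],
                      fingertip: Tuple[float, float],
                      max_distance: float) -> List[OutlineRegion]:
    """Keep only regions whose center lies within max_distance of the
    fingertip, narrowing the screening range for the target region."""
    fx, fy = fingertip
    return [r for r in regions
            if math.hypot(r.center[0] - fx, r.center[1] - fy) <= max_distance]

def nearest_region(candidates: List[OutlineRegion],
                   fingertip: Tuple[float, float]) -> Optional[OutlineRegion]:
    """Target outline region: the candidate whose center is closest to the fingertip."""
    fx, fy = fingertip
    if not candidates:
        return None
    return min(candidates,
               key=lambda r: math.hypot(r.center[0] - fx, r.center[1] - fy))

# Example: pick the point-read word nearest a fingertip at (130, 228).
regions = [OutlineRegion("apple", (90, 200, 150, 220)),
           OutlineRegion("banana", (160, 200, 240, 220))]
tip = (130.0, 228.0)
target = nearest_region(candidate_regions(regions, tip, 80.0), tip)
```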
Optionally, when determining the candidate outline region with the shortest distance as the target outline region and determining the text content contained in the target outline region as the point-read content, the biometric identification-based positioning device may first determine the candidate outline region with the shortest distance as the target outline region and then judge whether the number of target outline regions is unique. If it is, the text content contained in the target outline region is determined as the point-read content; if not, the pointing direction of the user's fingertip is recognized from the image, and the target direction from the fingertip coordinates to the center point of each target outline region is determined. The angle between each target direction and the pointing direction is calculated, and the text content contained in the target outline region whose target direction forms the smallest angle with the pointing direction is determined as the point-read content.
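When more than one region ties for the shortest distance, the tie is broken by the angle between the fingertip's pointing direction and the direction toward each region center. A sketch of that angle test follows, assuming the pointing direction is already available as a 2-D vector; how that direction is recognized from the image is not modeled here.

```python
import math
from typing import List, Sequence, Tuple

Vec = Tuple[float, float]

def angle_between(u: Vec, v: Vec) -> float:
    """Angle in radians between two 2-D vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    norm = math.hypot(*u) * math.hypot(*v)
    if norm == 0.0:
        return math.pi
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def pick_by_pointing_direction(fingertip: Vec,
                               pointing_direction: Vec,
                               region_centers: Sequence[Vec]) -> int:
    """Index of the region whose direction from the fingertip forms the
    smallest angle with the fingertip's pointing direction."""
    angles: List[float] = []
    for cx, cy in region_centers:
        target_dir = (cx - fingertip[0], cy - fingertip[1])
        angles.append(angle_between(pointing_direction, target_dir))
    return min(range(len(angles)), key=angles.__getitem__)

# Example: two equidistant regions; the one the finger points toward wins.
idx = pick_by_pointing_direction((130.0, 228.0), (0.0, -1.0),
                                 [(130.0, 200.0), (170.0, 228.0)])  # returns 0
```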
It should be noted that, in practical applications, after the indicated position of the subject to be recognized on the indication image is determined, a response is made to this indication operation of the target user according to the assisted-learning function currently in use. The indicated position of the subject to be recognized in the indication image is determined with reference to the finger positioning method described above, and the learning content corresponding to that position is then determined. If the translation function is currently in use, the translation of the learning content indicated by the target user is output; if the problem-solving function is currently in use, the solution to the exercise corresponding to the learning content indicated by the current user is output. According to the actual usage scenario of the assisted-learning function, the user's indication operation is responded to through recognition and positioning of the target user's finger, thereby realizing the corresponding learning function.
Optionally, in one embodiment, based on the recognized finger of the target user, the gesture feature of the current subject to be recognized is also determined from the image information, and a corresponding application function is executed according to the gesture feature. The biometric identification-based positioning device can pre-store gesture feature information that triggers various application functions; when the gesture feature of the subject to be recognized corresponds to the pre-stored gesture feature information, the corresponding application function is triggered. This facilitates the triggering of application functions, improves user operation efficiency, and optimizes the user's operating experience.
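The gesture-triggered application functions can be pictured as a simple registry mapping recognized gesture labels to callbacks. This is only a sketch of the dispatch idea: the gesture names and the registered functions below are hypothetical, and how the gesture feature itself is classified is outside its scope.

```python
from typing import Callable, Dict

# Hypothetical registry of gesture label -> application function.
GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {}

def register_gesture(label: str):
    """Decorator that registers an application function for a gesture label."""
    def decorator(func: Callable[[], None]) -> Callable[[], None]:
        GESTURE_ACTIONS[label] = func
        return func
    return decorator

@register_gesture("thumbs_up")
def open_translation() -> None:
    print("Translation function triggered")

@register_gesture("ok_sign")
def open_problem_solver() -> None:
    print("Problem-solving function triggered")

def dispatch_gesture(label: str) -> bool:
    """Trigger the application function whose pre-stored gesture matches."""
    action = GESTURE_ACTIONS.get(label)
    if action is None:
        return False
    action()
    return True

dispatch_gesture("thumbs_up")  # example: a recognized gesture triggers its function
```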
In summary, finger pictures at different angles are collected in advance for the target user's finger, and the finger biometric features of the target user are extracted from the finger pictures; when performing finger recognition and positioning, the image information of the current subject to be recognized is collected and compared with the finger biometric features to judge whether they match; when it is determined that the image information matches the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger pointing position of the target user is determined according to the position of the subject to be recognized. With the above technical means, by collecting the target user's finger biometric features in advance and performing finger recognition and positioning based on finger biometric matching, finger recognition and positioning accuracy and finger recognition efficiency can be improved, and the experience of using the assisted-learning functions can be optimized. Moreover, performing finger recognition and positioning through matching against the target user's finger biometric features makes the assisted-learning functions specific to the user and avoids misrecognition.
Embodiment 2:
On the basis of the above embodiment, FIG. 6 is a schematic structural diagram of a positioning device based on biometric identification provided in Embodiment 2 of the present application. Referring to FIG. 6, the positioning device based on biometric identification provided in this embodiment specifically includes: a collection module 21, a matching module 22, and a positioning module 23.
The collection module 21 is configured to collect, in advance, finger pictures at different angles for the target user's finger, and extract the finger biometric features of the target user from the finger pictures;
the matching module 22 is configured to collect the image information of the current subject to be recognized when performing finger recognition and positioning, compare the image information with the finger biometric features, and judge whether the image information matches the finger biometric features;
the positioning module 23 is configured to determine, when it is determined that the image information matches the finger biometric features, that the subject to be recognized is the finger of the target user, and determine the finger pointing position of the target user according to the position of the subject to be recognized.
Specifically, the finger biometric features include one or more of the following types: finger color, fingerprint pattern, finger joint features, lunula (nail crescent) shape, nail shape, and hand shape features.
Specifically, the collection module 21 includes:
an input unit, configured to input the finger pictures into a pre-trained finger biometric detection model;
a detection unit, configured to extract the finger biometric features of the target user based on the finger biometric detection model, the finger biometric detection model being trained and constructed in advance from sample pictures of the corresponding types of finger biometrics.
Specifically, the matching module 22 includes:
a first calculation unit, configured to calculate a first similarity between the image information and each corresponding type of the finger biometric features;
a first judging unit, configured to calculate an average similarity from the multiple first similarities and judge whether the average similarity reaches a set first similarity threshold; if so, determine that the image information matches the finger biometric features, and if not, determine that the image information does not match the finger biometric features.
The matching module 22 further includes:
a second calculation unit, configured to calculate a second similarity between the image information and each corresponding type of the finger biometric features;
a statistics unit, configured to count the proportion of the second similarities that reach a set second similarity threshold;
a second judging unit, configured to judge whether the proportion reaches a set proportion threshold; if so, determine that the image information matches the finger biometric features, and if not, determine that the image information does not match the finger biometric features.
Specifically, the collection module 21 includes:
a prompting unit, configured to output voice prompts when collecting finger pictures so as to guide the target user to provide finger gestures at different angles, and to collect finger pictures of the target user's finger in real time.
Specifically, the positioning module 23 includes:
a recognition unit, configured to recognize the fingertip feature of the subject to be recognized, determine the fingertip coordinates of the fingertip feature in an indication image, and use the fingertip coordinates as the finger pointing position of the target user, the indication image being captured according to the content of the learning page indicated by the subject to be recognized.
As described above, finger pictures at different angles are collected in advance for the target user's finger, and the finger biometric features of the target user are extracted from the finger pictures; when performing finger recognition and positioning, the image information of the current subject to be recognized is collected and compared with the finger biometric features to judge whether they match; when it is determined that the image information matches the finger biometric features, the subject to be recognized is determined to be the finger of the target user, and the finger pointing position of the target user is determined according to the position of the subject to be recognized. With the above technical means, by collecting the target user's finger biometric features in advance and performing finger recognition and positioning based on finger biometric matching, finger recognition and positioning accuracy and finger recognition efficiency can be improved, and the experience of using the assisted-learning functions can be optimized. Moreover, performing finger recognition and positioning through matching against the target user's finger biometric features makes the assisted-learning functions specific to the user and avoids misrecognition.
The positioning device based on biometric identification provided in Embodiment 2 of the present application can be used to execute the positioning method based on biometric identification provided in Embodiment 1 above, and has the corresponding functions and beneficial effects.
Embodiment 3:
Embodiment 3 of the present application provides an electronic device. Referring to FIG. 7, the electronic device includes: a processor 31, a memory 32, a communication module 33, an input device 34, and an output device 35. The number of processors in the electronic device may be one or more, and the number of memories in the electronic device may be one or more. The processor, memory, communication module, input device, and output device of the electronic device may be connected by a bus or in other ways.
As a computer-readable storage medium, the memory 32 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the positioning method based on biometric identification described in any embodiment of the present application (for example, the collection module, matching module, and positioning module in the biometric identification-based positioning device). The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application required by a function, and the data storage area may store data created according to the use of the device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory may further include memory located remotely from the processor, and such remote memory may be connected to the device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 33 is used for data transmission.
The processor 31 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory, thereby implementing the positioning method based on biometric identification described above.
The input device 34 can be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the device. The output device 35 may include a display device such as a display screen.
The electronic device provided above can be used to execute the positioning method based on biometric identification provided in Embodiment 1 above, and has the corresponding functions and beneficial effects.
Embodiment 4:
An embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute a positioning method based on biometric identification. The positioning method based on biometric identification includes: collecting, in advance, finger pictures at different angles for the target user's finger, and extracting the finger biometric features of the target user from the finger pictures; when performing finger recognition and positioning, collecting the image information of the current subject to be recognized, comparing the image information with the finger biometric features, and judging whether the image information matches the finger biometric features; when it is determined that the image information matches the finger biometric features, determining that the subject to be recognized is the finger of the target user, and determining the finger pointing position of the target user according to the position of the subject to be recognized.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network such as the Internet. The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations, for example in different computer systems connected by a network. The storage medium may store program instructions (for example, embodied as computer programs) executable by one or more processors.
Of course, for the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the positioning method based on biometric identification described above, and may also execute related operations in the positioning method based on biometric identification provided in any embodiment of the present application.
The positioning device, storage medium, and electronic device based on biometric identification provided in the above embodiments can execute the positioning method based on biometric identification provided in any embodiment of the present application; for technical details not described in detail in the above embodiments, reference may be made to the positioning method based on biometric identification provided in any embodiment of the present application.
The above are only preferred embodiments of the present application and the technical principles applied. The present application is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions that can be made by those skilled in the art will not depart from the protection scope of the present application. Therefore, although the present application has been described in some detail through the above embodiments, the present application is not limited to the above embodiments and may include more other equivalent embodiments without departing from the concept of the present application, the scope of the present application being determined by the scope of the claims.

Claims (10)

  1. A positioning method based on biometric identification, comprising:
    collecting, in advance, finger pictures at different angles for a target user's finger, and extracting finger biometric features of the target user from the finger pictures;
    when performing finger recognition and positioning, collecting image information of a current subject to be recognized, comparing the image information with the finger biometric features, and judging whether the image information matches the finger biometric features;
    when it is determined that the image information matches the finger biometric features, determining that the subject to be recognized is the finger of the target user, and determining a finger pointing position of the target user according to the position of the subject to be recognized.
  2. The positioning method based on biometric identification according to claim 1, wherein the finger biometric features comprise one or more of the following types: finger color, fingerprint pattern, finger joint features, lunula (nail crescent) shape, nail shape, and hand shape features.
  3. The positioning method based on biometric identification according to claim 2, wherein extracting the finger biometric features of the target user from the finger pictures comprises:
    inputting the finger pictures into a pre-trained finger biometric detection model;
    extracting the finger biometric features of the target user based on the finger biometric detection model, the finger biometric detection model being trained and constructed in advance from sample pictures of the corresponding types of finger biometrics.
  4. The positioning method based on biometric identification according to claim 2, wherein, when the finger biometric features comprise multiple types, judging whether the image information matches the finger biometric features comprises:
    calculating a first similarity between the image information and each corresponding type of the finger biometric features;
    calculating an average similarity from the multiple first similarities, and judging whether the average similarity reaches a set first similarity threshold; if so, determining that the image information matches the finger biometric features, and if not, determining that the image information does not match the finger biometric features.
  5. The positioning method based on biometric identification according to claim 2, wherein, when the finger biometric features comprise multiple types, judging whether the image information matches the finger biometric features comprises:
    calculating a second similarity between the image information and each corresponding type of the finger biometric features;
    counting the proportion of the second similarities that reach a set second similarity threshold;
    judging whether the proportion reaches a set proportion threshold; if so, determining that the image information matches the finger biometric features, and if not, determining that the image information does not match the finger biometric features.
  6. The positioning method based on biometric identification according to claim 1, wherein collecting, in advance, finger pictures at different angles for the target user's finger comprises:
    when collecting finger pictures, outputting voice prompts to guide the target user to provide finger gestures at different angles, and collecting finger pictures of the target user's finger in real time.
  7. The positioning method based on biometric identification according to claim 1, wherein determining the finger pointing position of the target user according to the position of the subject to be recognized comprises:
    recognizing a fingertip feature of the subject to be recognized, determining fingertip coordinates of the fingertip feature in an indication image, and using the fingertip coordinates as the finger pointing position of the target user, the indication image being captured according to the content of the learning page indicated by the subject to be recognized.
  8. A positioning device based on biometric identification, comprising:
    a collection module, configured to collect, in advance, finger pictures at different angles for a target user's finger, and extract finger biometric features of the target user from the finger pictures;
    a matching module, configured to collect image information of a current subject to be recognized when performing finger recognition and positioning, compare the image information with the finger biometric features, and judge whether the image information matches the finger biometric features;
    a positioning module, configured to determine, when it is determined that the image information matches the finger biometric features, that the subject to be recognized is the finger of the target user, and determine a finger pointing position of the target user according to the position of the subject to be recognized.
  9. An electronic device, comprising:
    a memory and one or more processors;
    the memory being configured to store one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the positioning method based on biometric identification according to any one of claims 1 to 7.
  10. A storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to execute the positioning method based on biometric identification according to any one of claims 1 to 7.
PCT/CN2021/103682 2021-06-30 2021-06-30 基于生物特征识别的定位方法及装置 WO2023272604A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/103682 WO2023272604A1 (zh) 2021-06-30 2021-06-30 基于生物特征识别的定位方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/103682 WO2023272604A1 (zh) 2021-06-30 2021-06-30 基于生物特征识别的定位方法及装置

Publications (1)

Publication Number Publication Date
WO2023272604A1 true WO2023272604A1 (zh) 2023-01-05

Family

ID=84692111

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103682 WO2023272604A1 (zh) 2021-06-30 2021-06-30 基于生物特征识别的定位方法及装置

Country Status (1)

Country Link
WO (1) WO2023272604A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140294261A1 (en) * 2011-12-15 2014-10-02 Fujitsu Limited Biometric information processing apparatus and biometric information processing method
CN108764127A (zh) * 2018-05-25 2018-11-06 京东方科技集团股份有限公司 纹理识别方法及其装置
CN111078083A (zh) * 2019-06-09 2020-04-28 广东小天才科技有限公司 一种点读内容的确定方法及电子设备
US20200311379A1 (en) * 2015-12-08 2020-10-01 Nar Technology Co., Ltd Convergent biometric authentication method based on finger joint and finger vein, and apparatus therefor
CN111753715A (zh) * 2020-06-23 2020-10-09 广东小天才科技有限公司 点读场景下试题拍摄的方法、装置、电子设备和存储介质
CN111989689A (zh) * 2018-03-16 2020-11-24 识别股份有限公司 用于识别图像内目标的方法和用于执行该方法的移动装置
CN112749646A (zh) * 2020-12-30 2021-05-04 北京航空航天大学 一种基于手势识别的交互式点读系统
CN112817447A (zh) * 2021-01-25 2021-05-18 暗物智能科技(广州)有限公司 一种ar内容显示方法及系统


Similar Documents

Publication Publication Date Title
JP7073522B2 (ja) 空中手書きを識別するための方法、装置、デバイス及びコンピュータ読み取り可能な記憶媒体
CN107656922B (zh) 一种翻译方法、装置、终端及存储介质
CN105631406B (zh) 图像识别处理方法和装置
WO2016172872A1 (zh) 用于验证活体人脸的方法、设备和计算机程序产品
TW201638816A (zh) 人機識別方法及裝置、行為特徵資料的採集方法及裝置
US10769417B2 (en) Payment method, apparatus, and system
KR20160099497A (ko) 핸드라이팅 인식 방법 및 장치
WO2017088727A1 (zh) 一种图像处理方法和装置
CN111353501A (zh) 一种基于深度学习的书本点读方法及系统
CN111079791A (zh) 人脸识别方法、设备及计算机可读存储介质
KR20210017090A (ko) 필기 입력을 텍스트로 변환하는 방법 및 전자 장치
CN110941992B (zh) 微笑表情检测方法、装置、计算机设备及存储介质
CN112001394A (zh) 基于ai视觉下的听写交互方法、系统、装置
CN112749646A (zh) 一种基于手势识别的交互式点读系统
Lai et al. Visual speaker identification and authentication by joint spatiotemporal sparse coding and hierarchical pooling
CN115291724A (zh) 人机交互的方法、装置、存储介质和电子设备
Shi et al. Visual speaker authentication by ensemble learning over static and dynamic lip details
CN110175500B (zh) 指静脉比对方法、装置、计算机设备及存储介质
CN111680177A (zh) 数据搜索方法及电子设备、计算机可读存储介质
US20200304708A1 (en) Method and apparatus for acquiring an image
CN113449652A (zh) 基于生物特征识别的定位方法及装置
CN110858291A (zh) 字符切分方法及装置
WO2023272604A1 (zh) 基于生物特征识别的定位方法及装置
CN112949689A (zh) 图像识别方法、装置、电子设备及存储介质
CN111079736B (zh) 一种听写内容识别方法及电子设备

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE