CN111382598A - Identification method and device and electronic equipment - Google Patents


Info

Publication number
CN111382598A
CN111382598A (application CN201811615801.3A)
Authority
CN
China
Prior art keywords
gesture
writing
video image
hand
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811615801.3A
Other languages
Chinese (zh)
Inventor
辛晓哲
秦波
李瑞楠
孙博
王帅
黄海兵
李斌
陈伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN201811615801.3A
Publication of CN111382598A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

Embodiments of the invention provide an identification method, an identification device, and an electronic device. The method comprises: entering a writing state, and acquiring a first hand video image from a video image captured by a camera; performing image processing on the first hand video image, acquiring fingertip position information of a writing finger, and displaying a writing track on a display screen according to the fingertip position information; and after the writing state ends, recognizing the writing track according to the fingertip position information to obtain and display corresponding candidate information. Because the writing track is recognized from fingertip position information determined by image processing, no depth information is required, which improves recognition efficiency. Moreover, since the embodiments do not rely on depth information, no depth camera is needed to capture the video images, which reduces the cost of in-air handwriting recognition.

Description

Identification method and device and electronic equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an identification method, an identification device, and an electronic device.
Background
As a common tool for conveying and exchanging information, characters play an important role in human-computer interaction systems. Widely used character input methods currently include the keyboard, the touch screen, and the handwriting tablet, all of which require the user to touch the device while writing. In some situations, however, contact-based handwriting is particularly inconvenient, for example when speech recognition fails because of noise while driving; in-air handwriting was developed for such cases.
At present, the industry performs in-air handwriting recognition with depth cameras. This approach has two drawbacks: a depth camera is needed to capture the images, which is costly, and depth information must be processed during recognition, which makes recognition inefficient.
Disclosure of Invention
Embodiments of the invention provide a recognition method that reduces the recognition cost of in-air handwriting and improves recognition efficiency.
Correspondingly, embodiments of the invention also provide an identification device and an electronic device to ensure the implementation and application of the method.
In order to solve the above problem, an embodiment of the present invention discloses an identification method, which specifically includes: entering a writing state, and acquiring a first hand video image from a video image acquired by a camera; performing image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information; and after the writing state is finished, recognizing the writing track according to the fingertip position information to obtain corresponding candidate information and displaying the candidate information.
Optionally, the acquiring a first hand video image from a video image captured by a camera includes: the method comprises the steps of obtaining a video image collected by a camera, and extracting an image of a handwriting area in the video image to serve as a first hand video image.
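The extraction of the handwriting-area image is, in essence, a crop of a sub-rectangle of each frame. A minimal sketch in Python (the frame size and region coordinates are assumptions; the patent does not fix them):

```python
import numpy as np

# Assumed handwriting area inside a 640x480 frame: (left, top, right, bottom).
HAND_REGION = (160, 80, 560, 400)

def extract_hand_image(frame):
    """Crop the handwriting area out of a full camera frame (H x W x 3)."""
    left, top, right, bottom = HAND_REGION
    return frame[top:bottom, left:right]
```

Each captured frame would be cropped this way before any further processing, so the later stages only ever see the hand region.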
Optionally, the image processing the first hand video image includes: preprocessing the first hand video image to obtain a first preprocessed image; judging whether the hand gesture in the first hand video image is a writing gesture according to the first preprocessed image; and if the hand gesture in the first hand video image is a writing gesture, executing a step of acquiring fingertip position information of a writing finger.
Optionally, the preprocessing the first hand video image to obtain a first preprocessed image includes: performing background subtraction on the first hand video image by using a Gaussian mixture model and extracting the foreground to obtain a first image; filtering the first image with a skin color model to obtain a second image; performing morphological processing on the second image to obtain a third image; and performing binarization on the third image to obtain the first preprocessed image.
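The four preprocessing stages can be sketched as follows. This is a simplified stand-in, not the patent's implementation: a plain frame difference replaces the Gaussian mixture background model, a fixed RGB rule replaces the fitted skin color model, and the morphological opening uses a 3x3 cross-shaped structuring element.

```python
import numpy as np

def skin_mask(rgb):
    # Crude RGB skin rule (stand-in for a trained skin color model).
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def dilate(mask):
    # Binary dilation with a cross-shaped 3x3 element.
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    # Erosion as the dual of dilation.
    return ~dilate(~mask)

def preprocess(frame, background):
    # 1) background subtraction (simple difference; the patent uses a GMM)
    fg = np.abs(frame.astype(int) - background.astype(int)).sum(axis=-1) > 30
    # 2) skin-color filtering
    m = fg & skin_mask(frame)
    # 3) morphological opening to remove speckle noise
    m = dilate(erode(m))
    # 4) binarization to a 0/255 image
    return (m * 255).astype(np.uint8)
```

The output is the binary hand silhouette that the contour and convex hull stages operate on.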
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: carrying out contour detection on the first preprocessed image, and determining a hand contour in the first preprocessed image; carrying out convex hull detection on the hand outline, and determining the number of the protruding fingers and the fingertip position information of each protruding finger; and if the number of the protruding fingers is matched with the set number of the protruding fingers corresponding to the writing gesture, determining that the gesture of the hand in the first hand video image is the writing gesture.
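The convex hull step above can be illustrated with a small, self-contained sketch: compute the hull of the contour points and treat hull vertices that lie far enough from the palm center as protruding fingertips. The hull algorithm and the distance threshold are illustrative choices, not the patent's.

```python
import math

def cross(o, a, b):
    """z-component of (a-o) x (b-o); > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain over (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return half(pts)[:-1] + half(pts[::-1])[:-1]

def count_protruding_fingers(contour, palm_center, min_dist=40):
    """Hull vertices far from the palm center are treated as fingertips."""
    tips = [p for p in convex_hull(contour)
            if math.hypot(p[0] - palm_center[0], p[1] - palm_center[1]) >= min_dist]
    return len(tips), tips
```

With the set number for the writing gesture being one protruding finger, a frame whose contour yields exactly one such vertex would be classified as a writing gesture, and that vertex's coordinates become the fingertip position.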
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: inputting the first preprocessed image into a static gesture recognition model to obtain a probability score of each gesture, wherein the gestures at least comprise: a start writing gesture, a selecting gesture, a deleting gesture, and other gestures; and if the probability score of the writing gesture is the highest, determining that the gesture of the hand in the first hand video image is the writing gesture.
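Turning the model's per-gesture probability scores into a decision is a simple arg-max. A schematic sketch (the real static gesture recognition model, its class set, and its scores belong to the embodiment; the logits below are made up):

```python
import math

GESTURES = ["start_writing", "writing", "select", "delete", "other"]

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decide_gesture(logits):
    """Pick the gesture with the highest probability score."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return GESTURES[best], probs[best]
```

If the writing gesture wins the arg-max, the pipeline proceeds to acquire the fingertip position; otherwise the frame is handled per the matching gesture (select, delete, and so on).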
Optionally, the acquiring fingertip position information of the writing finger includes: the fingertip position information of the protruding finger is determined as the fingertip position information of the writing finger.
Optionally, the method further comprises: acquiring a second hand video image from the video image acquired by the camera; judging whether the hand gesture in the second hand video image is a writing starting gesture; and entering a writing state if the hand gesture in the second hand video image is a writing starting gesture.
Optionally, the method further comprises: and if the hand gesture in the first hand video image is a selection gesture, determining that the writing state is finished.
Optionally, the candidate information includes a plurality of candidate items, and after the presenting step, the method further includes: periodically polling each candidate item of the current page, and acquiring a third hand video image from the video image captured by the camera; and when detecting that the hand gesture in the third hand video image is a start-writing gesture, committing the currently polled candidate item to the screen and entering the writing state again.
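The polling-and-commit flow above can be sketched as a cursor cycling over the candidates of the current page; the class and method names are illustrative, and the polling period is not specified by the patent.

```python
class CandidatePoller:
    """Cycle a highlight through the current page's candidates;
    commit the highlighted one when the trigger gesture is detected."""

    def __init__(self, candidates):
        self.candidates = candidates
        self.index = 0  # currently highlighted candidate

    def tick(self):
        # Called periodically; advance the highlight to the next candidate.
        self.index = (self.index + 1) % len(self.candidates)
        return self.candidates[self.index]

    def commit(self):
        # Trigger gesture detected: put the highlighted candidate on screen.
        return self.candidates[self.index]
```

When `commit` fires, the device would display the returned candidate and re-enter the writing state so the user can continue writing.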
Optionally, the method further comprises: if, after entering the writing state, the gesture in the first hand video image is a delete gesture and fingertip position information of the writing finger has not been acquired, deleting the candidate item most recently committed to the screen.
Optionally, the method further comprises: if, after entering the writing state, the gesture in the first hand video image is a delete gesture and fingertip position information of the writing finger has been acquired, clearing the writing track displayed on the display screen.
Optionally, the recognizing the writing track according to the fingertip position information to obtain corresponding candidate information includes: inputting the fingertip position information into a handwriting engine so that the handwriting engine can identify a writing track according to the fingertip position information to obtain candidate information and return the candidate information; and receiving candidate information returned by the handwriting engine.
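The hand-off to the handwriting engine reduces to passing the recorded fingertip point sequence and receiving candidates back. The engine interface below is hypothetical; the patent does not specify one.

```python
def recognize(track, engine):
    """track: list of (x, y) fingertip positions recorded while writing.
    engine: any object exposing a hypothetical query(points) -> list[str]."""
    if not track:
        return []  # nothing was written; no candidates to return
    return engine.query(track)

class FakeEngine:
    """Stand-in used only to illustrate the call shape."""
    def query(self, points):
        return ["search"] if len(points) > 2 else []
```

A real deployment would replace `FakeEngine` with the device's handwriting engine, which recognizes the track and returns the candidate information to be displayed.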
The embodiment of the invention also discloses an identification device, which specifically comprises: the acquisition module is used for entering a writing state and acquiring a first hand video image from a video image acquired by the camera; the image processing module is used for carrying out image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information; and the recognition module is used for recognizing the writing track according to the fingertip position information after the writing state is finished, and obtaining and displaying corresponding candidate information.
Optionally, the acquiring module is configured to acquire a video image acquired by a camera, and extract an image of a handwriting area in the video image as a first hand video image.
Optionally, the image processing module comprises: the preprocessing submodule is used for preprocessing the first hand video image to obtain a first preprocessed image; the gesture judgment submodule is used for judging whether the gesture of the hand in the first hand video image is a writing gesture or not according to the first preprocessed image; and the position acquisition submodule is used for acquiring fingertip position information of a writing finger if the hand gesture in the first hand video image is a writing gesture.
Optionally, the preprocessing sub-module is configured to perform background subtraction on the first hand video image by using a Gaussian mixture model and extract the foreground to obtain a first image; filter the first image with a skin color model to obtain a second image; perform morphological processing on the second image to obtain a third image; and perform binarization on the third image to obtain the first preprocessed image.
Optionally, the gesture determination sub-module includes: the first judging unit is used for carrying out contour detection on the first preprocessed image and determining the hand contour in the first preprocessed image; carrying out convex hull detection on the hand outline, and determining the number of the protruding fingers and the fingertip position information of each protruding finger; and if the number of the protruding fingers is matched with the set number of the protruding fingers corresponding to the writing gesture, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the gesture determination sub-module includes: a second judging unit, configured to input the first preprocessed image into a static gesture recognition model to obtain a probability score of each gesture, where the gesture at least includes: a start writing gesture, a selecting gesture, a deleting gesture, and other gestures; and if the probability score of the writing gesture is the highest, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the position obtaining sub-module is configured to determine fingertip position information of the protruding finger as fingertip position information of the writing finger.
Optionally, the apparatus further comprises: a writing state start determining module, configured to acquire a second hand video image from the video image captured by the camera; judge whether the gesture of the hand in the second hand video image is a start-writing gesture; and enter the writing state if it is.
Optionally, the apparatus further comprises: and the writing state ending determining module is used for determining that the writing state is ended if the hand gesture in the first hand video image is a selection gesture.
Optionally, the candidate information includes a plurality of candidates, and the apparatus further includes: a polling module, configured to periodically poll each candidate item of the current page and acquire a third hand video image from the video image captured by the camera; and a commit module, configured to commit the currently polled candidate item to the screen when detecting that the hand gesture in the third hand video image is a selection gesture.
Optionally, the apparatus further comprises: a first deleting module, configured to delete the candidate item most recently committed to the screen if, after entering the writing state, the gesture in the first hand video image is a delete gesture and fingertip position information of the writing finger has not been acquired.
Optionally, the apparatus further comprises: a second deleting module, configured to clear the writing track displayed on the display screen if, after entering the writing state, the gesture in the first hand video image is a delete gesture and fingertip position information of the writing finger has been acquired.
Optionally, the recognition module is configured to input the fingertip position information into a handwriting engine, so that the handwriting engine recognizes a writing track according to the fingertip position information, obtains candidate information, and returns the candidate information; and receiving candidate information returned by the handwriting engine.
The embodiment of the invention also discloses a readable storage medium. When instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the identification method according to any one of the embodiments of the invention.
An embodiment of the present invention also discloses an electronic device, including a memory, and one or more programs, where the one or more programs are stored in the memory, and configured to be executed by one or more processors, and the one or more programs include instructions for: entering a writing state, and acquiring a first hand video image from a video image acquired by a camera; performing image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information; and after the writing state is finished, recognizing the writing track according to the fingertip position information to obtain corresponding candidate information and displaying the candidate information.
Optionally, the acquiring a first hand video image from a video image captured by a camera includes: the method comprises the steps of obtaining a video image collected by a camera, and extracting an image of a handwriting area in the video image to serve as a first hand video image.
Optionally, the image processing the first hand video image includes: preprocessing the first hand video image to obtain a first preprocessed image; judging whether the hand gesture in the first hand video image is a writing gesture according to the first preprocessed image; and if the hand gesture in the first hand video image is a writing gesture, executing a step of acquiring fingertip position information of a writing finger.
Optionally, the preprocessing the first hand video image to obtain a first preprocessed image includes: performing background subtraction on the first hand video image by using a Gaussian mixture model and extracting the foreground to obtain a first image; filtering the first image with a skin color model to obtain a second image; performing morphological processing on the second image to obtain a third image; and performing binarization on the third image to obtain the first preprocessed image.
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: carrying out contour detection on the first preprocessed image, and determining a hand contour in the first preprocessed image; carrying out convex hull detection on the hand outline, and determining the number of the protruding fingers and the fingertip position information of each protruding finger; and if the number of the protruding fingers is matched with the set number of the protruding fingers corresponding to the writing gesture, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: inputting the first preprocessed image into a static gesture recognition model to obtain a probability score of each gesture, wherein the gestures at least comprise: a start writing gesture, a selecting gesture, a deleting gesture, and other gestures; and if the probability score of the writing gesture is the highest, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the acquiring fingertip position information of the writing finger includes: the fingertip position information of the protruding finger is determined as the fingertip position information of the writing finger.
Optionally, further comprising instructions for: acquiring a second hand video image from the video image acquired by the camera; judging whether the hand gesture in the second hand video image is a writing starting gesture; and entering a writing state if the hand gesture in the second hand video image is a writing starting gesture.
Optionally, further comprising instructions for: and if the hand gesture in the first hand video image is a selection gesture, determining that the writing state is finished.
Optionally, the candidate information includes a plurality of candidates, and after the presenting step the programs further include instructions for: periodically polling each candidate item of the current page, and acquiring a third hand video image from the video image captured by the camera; and when detecting that the hand gesture in the third hand video image is a start-writing gesture, committing the currently polled candidate item to the screen and entering the writing state again.
Optionally, further comprising instructions for: if, after entering the writing state, the gesture in the first hand video image is a delete gesture and fingertip position information of the writing finger has not been acquired, deleting the candidate item most recently committed to the screen.
Optionally, further comprising instructions for: if, after entering the writing state, the gesture in the first hand video image is a delete gesture and fingertip position information of the writing finger has been acquired, clearing the writing track displayed on the display screen.
Optionally, the recognizing the writing track according to the fingertip position information to obtain corresponding candidate information includes: inputting the fingertip position information into a handwriting engine so that the handwriting engine can identify a writing track according to the fingertip position information to obtain candidate information and return the candidate information; and receiving candidate information returned by the handwriting engine.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, upon entering the writing state, a first hand video image is acquired from the video image captured by the camera; the first hand video image is then subjected to image processing, fingertip position information of the writing finger is acquired, and a writing track is displayed on the display screen according to the fingertip position information; after the writing state ends, the writing track is recognized according to the fingertip position information, and corresponding candidate information is obtained and displayed. Because the writing track is recognized from fingertip position information determined by image processing, no depth information is required, which improves recognition efficiency. Moreover, since the embodiments do not rely on depth information, no depth camera is needed to capture the video images, which reduces the cost of in-air handwriting recognition.
Drawings
FIG. 1 is a flow chart of the steps of an embodiment of an identification method of the present invention;
FIG. 2a is a schematic diagram of a writing trace display interface according to an embodiment of the invention;
FIG. 2b is a schematic diagram of a candidate information presentation interface according to an embodiment of the invention;
FIG. 3a is a schematic diagram of a start writing gesture in accordance with an embodiment of the present invention;
FIG. 3b is a schematic diagram of a writing gesture in accordance with an embodiment of the present invention;
FIG. 3c is a schematic diagram of a selection gesture according to an embodiment of the present invention;
FIG. 3d is a diagram illustrating a delete gesture, in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart of the steps of an alternative embodiment of an identification method of the present invention;
FIG. 5a is a schematic diagram of a handwritten area in a video image in accordance with an embodiment of the invention;
FIG. 5b is a schematic illustration of a fourth image according to an embodiment of the invention;
FIG. 5c is a schematic illustration of a sixth image according to an embodiment of the invention;
FIG. 5d is a diagram of a second preprocessed image according to one embodiment of the present invention;
FIG. 5e is a schematic view of a hand undergoing contour detection and convex hull detection in accordance with an embodiment of the present invention;
FIG. 5f is a flowchart of method steps for determining whether a gesture is a start writing gesture, in accordance with an embodiment of the present invention;
FIG. 5g is a diagram illustrating a polling candidate according to an embodiment of the present invention;
FIG. 5h is a diagram illustrating an alternative to on-screen display according to an embodiment of the present invention;
FIG. 6 is a block diagram of an embodiment of an identification device of the present invention;
FIG. 7 is a block diagram of an alternative embodiment of an identification device of the present invention;
FIG. 8 illustrates a block diagram of an electronic device for identification, according to an example embodiment;
fig. 9 is a schematic structural diagram of an electronic device for identification according to another exemplary embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
One of the core ideas of the embodiments of the invention is that, during in-air handwriting, an ordinary camera can be used to capture a video image; fingertip position information of the writing finger is then obtained by performing image processing on the hand video image, and after the writing state ends, the writing track is recognized from the accumulated fingertip position information. Recognition therefore requires no depth information, which improves recognition efficiency; correspondingly, no depth camera is needed to capture the video images, which reduces the cost of in-air handwriting recognition.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of an identification method of the present invention is shown, which may specifically include the following steps:
and 102, entering a writing state, and acquiring a first hand video image from the video image acquired by the camera.
And 104, performing image processing on the first hand video image, acquiring fingertip position information of a writing finger, and displaying a writing track on a display screen according to the fingertip position information.
And step 106, after the writing state is finished, recognizing the writing track according to the fingertip position information to obtain corresponding candidate information and displaying the candidate information.
In the embodiment of the invention, the in-air handwriting device can use a camera to capture a video image of the user's in-air handwriting and recognize the writing track by performing image processing on the video image. An ordinary camera (such as an RGB camera) rather than a depth camera can be used to capture the video images, which reduces the recognition cost of in-air handwriting. The camera may be built into the in-air handwriting device or be an external camera; the embodiment of the invention does not limit this. The in-air handwriting device can process the captured video images in real time: each time a frame is captured, the hand video image can be extracted from that frame and then processed.
During in-air handwriting, the user can notify the in-air handwriting device that writing is starting by performing a start-writing gesture, write in the air using the writing gesture, and then notify the device that writing is finished by performing an end-writing gesture. After the user performs the start-writing gesture, the device enters the writing state; then, while the user writes in the air with the writing gesture, a hand video image is obtained from each video image captured by the camera. To distinguish the hand video images obtained under different gestures, the hand video images obtained after entering the writing state and before it ends are called first hand video images. Image processing, such as background subtraction, foreground extraction, hand contour detection, and convex hull detection, can then be performed on each first hand video image to obtain fingertip position information of the writing finger. Once the fingertip position information is obtained, it can be stored so that the writing track can later be recognized from the stored positions; in addition, a mapping relation can be looked up with the fingertip position information to determine the corresponding display position on the display screen, that position can be marked and connected with the previous display position, and the corresponding writing track is thereby displayed.
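The mapping-relation lookup described above amounts to scaling camera coordinates to display coordinates and joining each new point to the previous one. A minimal sketch (the linear map and the x-mirroring are assumptions; the patent only says a mapping relation is searched):

```python
def to_screen(tip, cam_wh, screen_wh):
    """Map a fingertip position in camera coordinates to display coordinates."""
    cw, ch = cam_wh
    sw, sh = screen_wh
    x, y = tip
    # Mirror x so the on-screen track follows the hand like a mirror image.
    return ((cw - 1 - x) * (sw - 1) // (cw - 1), y * (sh - 1) // (ch - 1))

def extend_track(track, tip, cam_wh=(640, 480), screen_wh=(1920, 1080)):
    """Append the mapped point; a renderer would draw a segment from
    track[-1] to the new point, building up the displayed writing trace."""
    p = to_screen(tip, cam_wh, screen_wh)
    segment = (track[-1], p) if track else None
    track.append(p)
    return segment
```

Calling `extend_track` once per processed frame yields both the stored point sequence (later fed to recognition) and the segments to draw on the display screen.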
When the user finishes writing, an end-writing gesture can be performed, and the in-air handwriting device determines that the writing state has ended. The writing track can then be recognized from the fingertip position information recorded between entering and ending the writing state, for example by invoking a handwriting engine, and the corresponding candidate information is determined and displayed; the candidate information may include a plurality of candidates. The user can subsequently commit a desired candidate to the screen by performing the corresponding gesture, thereby completing the in-air handwriting input.
In one example of the invention, a user writes the word 'search' in the air as follows. The user performs a start-writing gesture, and the in-air handwriting device enters the writing state. The user then writes 'search' using the writing gesture; correspondingly, the device acquires a first hand video image from the video image captured by the camera, performs image processing on it, acquires fingertip position information of the writing finger, and displays the writing track on the display screen according to that information, as shown in fig. 2a. After writing 'search', the user performs an end-writing gesture; the device determines that the writing state has ended, then recognizes the writing track according to the fingertip position information, and obtains and displays the corresponding candidate information, as shown in fig. 2b.
In summary, in the embodiment of the invention, when it is determined that the writing state has been entered, a first hand video image may be obtained from the video image captured by the camera; the first hand video image is then subjected to image processing, fingertip position information of the writing finger is obtained, and a writing track is displayed on the display screen according to the fingertip position information; after the writing state ends, the writing track is recognized according to the fingertip position information to obtain and display the corresponding candidate information. Because the writing track is recognized from fingertip position information determined by image processing, no depth information is required, which improves recognition efficiency. Moreover, since no depth information is used, no depth camera is needed to capture the video images, which reduces the cost of in-air handwriting recognition.
In an example of the invention, after the mid-air handwriting device displays the candidate information corresponding to the writing track, the user may perform the start-writing gesture to select a suitable candidate item; correspondingly, the device commits the selected candidate item to the screen and re-enters the writing state so that the user can continue writing. Of course, this is just one example of a gesture for selecting a candidate item; in practical applications, a dedicated gesture may be defined for the candidate-selection operation.
In another example of the present invention, the committed candidate item may be wrong because of a user misoperation. After the mid-air handwriting device commits the candidate item and re-enters the writing state, the user may perform the delete gesture, and the device deletes the corresponding committed candidate item; the user can then perform the start-writing gesture again and redo the handwriting input.
In another example of the present invention, the user may make a stroke error while writing a character. The user may then perform the delete gesture, and the mid-air handwriting device deletes the previously input strokes; the user can then perform the start-writing gesture again and redo the handwriting input.
The start-writing gesture is a gesture by which the user tells the mid-air handwriting device to start writing; it may also serve as the gesture for committing a candidate item to the screen. As shown in fig. 3a, the start-writing gesture is the palm facing the camera with all five fingers spread. The writing gesture is the gesture the user writes with; as shown in fig. 3b, it is the palm facing the camera with the index finger extended and the other four fingers bent toward the palm center. The delete gesture is used to delete information displayed on the screen, such as writing tracks and candidate items; as shown in fig. 3c, it is the palm facing the camera with the thumb and index finger extended and the other three fingers bent toward the palm center. The selection gesture is a gesture by which the user tells the device that writing is finished; as shown in fig. 3d, it is the palm facing the camera with all five fingers bent toward the palm center. These gestures may be set according to user needs, which the embodiment of the present invention does not limit. In addition, the above gestures may be referred to as set gestures; any gesture other than a set gesture may be referred to as another gesture and may correspond to another function, which the embodiment of the present invention does not limit either.
For convenience in explaining the order in which gestures are performed during mid-air handwriting, each gesture is numbered as follows:
11. start-writing gesture; 12. writing gesture; 13. selection gesture; 14. delete gesture.
For example, a user may perform gestures in the following order: 11 → 12 → 13 → 11;
as another example: 11 → 12 → 13 → 11 → 14 → 11;
as another example: 11 → 12 → 14 → 11 → 12 → 13 → 11;
as another example: 11 → 12 → 14 → 11 → 12 → 13 → 11 → 14 → 11.
Of course, other gesture combinations are possible while a user writes a character in mid-air; the embodiments of the present invention neither enumerate nor limit them.
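The gesture sequences above amount to a small state machine. A minimal sketch follows, with illustrative state names that are not taken from the specification; the delete transition is simplified to clearing the track and returning to the idle state:

```python
# Minimal sketch of the gesture-driven writing state machine described above.
# Gesture codes follow the numbering in the text: 11 start-writing, 12 writing,
# 13 selection, 14 delete. State names are illustrative, not from the patent.

START_WRITING, WRITING, SELECTION, DELETE = 11, 12, 13, 14

class AirWritingStateMachine:
    def __init__(self):
        self.state = "idle"        # waiting for a start-writing gesture
        self.strokes = []          # fingertip positions captured while writing

    def on_gesture(self, gesture):
        if self.state == "idle" and gesture == START_WRITING:
            self.state = "writing"
        elif self.state == "writing":
            if gesture == WRITING:
                pass                       # keep collecting fingertip positions
            elif gesture == SELECTION:
                self.state = "candidates"  # recognize track, show candidates
            elif gesture == DELETE:
                self.strokes.clear()       # clear the written track
                self.state = "idle"
        elif self.state == "candidates" and gesture == START_WRITING:
            self.state = "writing"         # commit polled candidate, write again
        return self.state

sm = AirWritingStateMachine()
states = [sm.on_gesture(g) for g in (11, 12, 13, 11)]
print(states)  # ['writing', 'writing', 'candidates', 'writing']
```

The sequence 11 → 12 → 13 → 11 from the first example above thus walks the machine from idle, through writing and candidate selection, and back into writing.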
Based on the gesture of the hand in each hand video image, the mid-air handwriting device decides whether to enter the writing state, whether to acquire the fingertip position information of the writing finger, whether to end the writing state, and whether to delete the writing track, commit a candidate item to the screen, or delete a committed candidate item. Therefore, in the embodiment of the present invention, after each frame of hand video image is acquired, it is processed to determine the gesture of the hand in that image, and the corresponding operation is then performed.
Referring to fig. 4, a flowchart illustrating steps of an embodiment of an identification method of the present invention is shown, which may specifically include the following steps:
Step 402: acquire a second hand video image from the video image captured by the camera.
In the embodiment of the invention, a hand video image may be acquired from the video image captured by the camera in order to decide whether to enter the writing state, either after the mid-air handwriting device is started and before the hand gesture in a video image is determined to be the start-writing gesture, or after the hand gesture is determined to be the selection gesture and before it is again determined to be the start-writing gesture. A hand video image acquired in this period may be referred to as a second hand video image. In one example of the invention, the video image captured by the camera is acquired, and the image of the handwriting area is extracted as the second hand video image. In the embodiment of the invention, after the camera of the mid-air handwriting device captures a video image, the video image may be displayed on the display screen with the handwriting area identified, for example the upper-right corner area; as shown in fig. 5a, the boxed area indicated by A is the handwriting area. Having the user write within the handwriting area makes it easier to capture the user's hand and improves recognition accuracy. Therefore, after the video image captured by the camera is obtained, the image corresponding to the handwriting area may be extracted from it, according to the area information of the handwriting area, as the second hand video image.
Step 404: determine whether the gesture of the hand in the second hand video image is the start-writing gesture.
After the second hand video image is acquired, gesture recognition is performed on it to determine whether the gesture of the hand is the start-writing gesture. If it is, it may be determined that the user wants to write; the writing state is entered and step 406 is performed. If it is not, it may be determined that the user does not want to write yet; a second hand video image is acquired from the next frame of video captured by the camera, and step 402 is performed again.
In an example of the present invention, one way to determine whether the gesture of the hand in the second hand video image is the start-writing gesture is to preprocess the second hand video image to obtain a second preprocessed image, and then judge from the second preprocessed image whether the gesture is the start-writing gesture. If it is, the writing state is entered; if not, a second hand video image is acquired from the next frame of video captured by the camera, and step 402 is performed again.
The step of preprocessing the second hand video image to obtain the second preprocessed image may include the following sub-steps:
Sub-step A2: perform background subtraction on the second hand video image using a Gaussian mixture model, extracting the foreground to obtain a fourth image.
Sub-step A4: filter the fourth image using a skin color model to obtain a fifth image.
Sub-step A6: perform morphological processing on the fifth image to obtain a sixth image.
Sub-step A8: perform binarization on the sixth image to obtain the second preprocessed image.
In the embodiment of the present invention, a Gaussian mixture model may be used to subtract the background from the second hand video image, and the foreground is extracted to obtain the fourth image, as shown in fig. 5b. The fourth image is filtered with a pre-trained skin color model, which may be a Bayesian skin color model, to obtain the fifth image. Morphological processing is then applied to the fifth image to restore the hand shape and improve the accuracy of gesture recognition, yielding the sixth image shown in fig. 5c. Finally, the sixth image is binarized to separate the hand region from the background, which facilitates subsequent gesture recognition, yielding the second preprocessed image shown in fig. 5d.
In the embodiment of the present invention, there are multiple ways to judge from the second preprocessed image whether the gesture in the second hand video image is the start-writing gesture. In one example of the present invention, one such way may include the following sub-steps:
Sub-step A10: perform contour detection on the second preprocessed image and determine the hand contour;
Sub-step A12: perform convex hull detection on the hand contour and determine the number of protruding fingers and the position of each protruding fingertip;
Sub-step A14: judge whether the number of protruding fingers matches the set number of protruding fingers corresponding to the start-writing gesture;
Sub-step A16: if they match, determine that the hand gesture in the second hand video image is the start-writing gesture.
The hand contour may be determined while ignoring the background, the texture inside the hand, and noise in the second preprocessed image. Convex hull detection is then performed on the determined hand contour: the outermost points of the contour, which include the positions of the individual fingertips, are identified and connected into a convex polygon, as shown in fig. 5e. Which fingers protrude is determined from the fingertip positions, and the protruding fingers are counted; the position of the top of a protruding finger is its fingertip position. For example, for the start-writing gesture shown in fig. 3a, the set number of protruding fingers may be 5. If the number of protruding fingers is 5, it may be determined that the gesture in the second preprocessed image is the start-writing gesture; if not, it may be determined that it is not the start-writing gesture, and the judgment process for the second hand video image ends.
In another example of the present invention, another way of determining whether the hand gesture in the second hand video image is the start-writing gesture may include the following sub-steps:
Sub-step A18: input the second preprocessed image into a static gesture recognition model to obtain a probability score for each gesture, where the gestures may include the set gestures and other gestures;
Sub-step A20: judge whether the start-writing gesture has the highest probability score;
Sub-step A22: if so, determine that the hand gesture in the second hand video image is the start-writing gesture.
The static gesture recognition model may be trained with the set gestures and other gestures. For example, an image of the start-writing gesture may be input into the model to obtain the probability scores of all gestures; a loss is then computed from the probability score corresponding to the start-writing gesture, and the weights of the model are adjusted using that loss. The other set gestures and other gestures are used to train the model in the same way. The trained static gesture recognition model is then used to recognize the hand gesture in the second preprocessed image; if the start-writing gesture has the highest probability score, the gesture is determined to be the start-writing gesture and sub-step A22 is performed; otherwise the judgment process for the second hand video image ends.
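The specification does not fix an architecture for the static gesture recognition model. The sketch below uses an untrained linear classifier with a softmax head purely to illustrate the per-gesture probability scores of sub-step A18; the class order and input size are assumed:

```python
import numpy as np

# Illustrative stand-in for the static gesture recognition model: a linear
# classifier over flattened binarized images with a softmax output. In
# training, the cross-entropy loss on these scores would be used to adjust
# the weights, as described in the text. Class order is an assumption.

GESTURES = ["start_writing", "writing", "delete", "selection", "other"]

def softmax(z):
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

def gesture_scores(image, weights, bias):
    """Return one probability score per gesture for a binarized image."""
    x = image.astype(np.float32).ravel() / 255.0
    return softmax(weights @ x + bias)

rng = np.random.default_rng(0)
weights = rng.normal(size=(len(GESTURES), 32 * 32))  # untrained demo weights
bias = np.zeros(len(GESTURES))
scores = gesture_scores(rng.integers(0, 256, (32, 32)), weights, bias)
print(GESTURES[int(scores.argmax())])
```

Sub-step A20 then reduces to checking whether `scores.argmax()` is the index of the start-writing gesture.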
In the embodiment of the present invention, after sub-step A8 is completed, either sub-steps A10-A16 or sub-steps A18-A22 may be performed. Of course, the two approaches may also be combined: only when both determine that the hand gesture in the second preprocessed image is the start-writing gesture is it finally determined to be the start-writing gesture. For example, after sub-steps A10-A14, if the number of protruding fingers matches the set number corresponding to the start-writing gesture, sub-steps A18-A22 are performed. Referring to fig. 5f, a flowchart of the steps for determining whether a gesture is the start-writing gesture according to an embodiment of the present invention is shown.
Step 406: acquire a first hand video image from the video image captured by the camera.
If the gesture of the hand in the second hand video image is determined to be the start-writing gesture, the writing state is entered and a first hand video image is acquired from the video image captured by the camera; specifically, the video image captured by the camera may be acquired and the image of the handwriting area extracted from it as the first hand video image.
Step 408: determine whether the gesture of the hand in the first hand video image is the writing gesture.
The first hand video image is preprocessed to obtain a first preprocessed image, and the gesture of the hand in the first hand video image is judged from it. If the gesture is the writing gesture, step 410 is performed. If the gesture is the selection gesture, it is determined that the writing state has ended, and step 414 may be performed. If the gesture is the delete gesture and no fingertip position information of the writing finger has been acquired since entering the writing state, step 422 is performed. If the gesture is the delete gesture and fingertip position information of the writing finger has been acquired since entering the writing state, step 416 is performed.
The step of preprocessing the first hand video image and judging its gesture may include the following sub-steps:
Sub-step B2: perform background subtraction on the first hand video image using a Gaussian mixture model, extracting the foreground to obtain a first image.
Sub-step B4: filter the first image using a skin color model to obtain a second image.
Sub-step B6: perform morphological processing on the second image to obtain a third image.
Sub-step B8: perform binarization on the third image to obtain the first preprocessed image.
Sub-step B10: perform contour detection on the first preprocessed image and determine the hand contour;
Sub-step B12: perform convex hull detection on the hand contour and determine the number of protruding fingers and the position of each protruding fingertip;
Sub-step B14: judge whether the number of protruding fingers matches the set number of protruding fingers corresponding to the writing gesture;
Sub-step B16: if they match, determine that the hand gesture in the first hand video image is the writing gesture.
Sub-step B18: input the first preprocessed image into a static gesture recognition model to obtain a probability score for each gesture, where the gestures may include the set gestures and other gestures;
Sub-step B20: judge whether the writing gesture has the highest probability score;
Sub-step B22: if so, determine that the hand gesture in the first hand video image is the writing gesture.
Sub-steps B2-B22 are similar to sub-steps A2-A22 and are not repeated here.
Step 410: if the hand gesture in the first hand video image is determined to be the writing gesture, determine the fingertip position of the protruding finger as the fingertip position information of the writing finger.
Step 412: display the writing track on the display screen according to the fingertip position information.
In the embodiment of the invention, if the gesture of the hand in the first hand video image is determined to be the writing gesture, it can be concluded that the user is writing in mid-air, and the fingertip position information of the writing finger can be obtained. The writing gesture may be the gesture shown in fig. 3b, with one finger extended and the others bent, so the protruding finger may be taken as the writing finger, and its fingertip position as the fingertip position information of the writing finger. A mapping relation can then be looked up with the fingertip position to determine the corresponding display position on the display screen; that display position is marked and connected to the previous display position, thereby displaying the writing track. When the distance between two adjacent display positions exceeds a distance threshold, interpolation points can be inserted between them and the two positions connected through the interpolation points, making the displayed track smoother. The first hand video image for the next frame may then be obtained from the video captured by the camera, and step 406 is performed again.
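The interpolation step described above can be sketched as follows; the distance threshold and the choice of evenly spaced linear interpolation are illustrative:

```python
import math

# Sketch of the track-smoothing step: when two successive display positions
# are farther apart than a threshold, insert evenly spaced interpolation
# points so the drawn stroke stays continuous. max_gap is illustrative.

def densify_track(points, max_gap=5.0):
    if not points:
        return []
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        if dist > max_gap:
            n = int(dist // max_gap)          # number of points to insert
            for i in range(1, n + 1):
                t = i / (n + 1)
                out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        out.append((x1, y1))
    return out

# Two fingertip positions 12 px apart get two interpolated points between
# them, so no drawn segment exceeds the 5 px threshold.
track = densify_track([(0, 0), (12, 0)], max_gap=5.0)
print(track)
```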
Step 414: if the hand gesture in the first hand video image is the selection gesture, recognize the writing track according to the fingertip position information to obtain and display the corresponding candidate information.
In the embodiment of the invention, if the gesture of the hand in the first hand video image is determined to be the selection gesture, it is determined that the user has finished writing. The fingertip position information can then be input into a handwriting engine, which recognizes the writing track from it and returns the candidate information; the candidate information returned by the engine is received and displayed. The candidate information may include multiple candidate items, and the handwriting engine may return a candidate score for each item along with the candidate information; the candidates may be presented in order of candidate score from high to low. Step 418 may then be performed.
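The score-ordered display can be sketched as follows; the handwriting engine is mocked here as a plain list of (candidate, score) pairs with illustrative values:

```python
# Sketch of the candidate display order described above: the handwriting
# engine returns a score per candidate item, and the candidates are shown
# from highest score to lowest. The pairs below are illustrative only.

returned = [("搜", 0.61), ("搜索", 0.92), ("搜查", 0.34)]
ordered = [cand for cand, score in
           sorted(returned, key=lambda cs: cs[1], reverse=True)]
print(ordered)  # ['搜索', '搜', '搜查']
```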
Step 416: if the hand gesture in the first hand video image is the delete gesture and fingertip position information of the writing finger has been acquired since entering the writing state, clear the writing track displayed on the display screen.
In the embodiment of the invention, the user can delete the input writing track during writing. Therefore, if the gesture of the hand in the first hand video image is determined to be the delete gesture and fingertip position information of the writing finger has been acquired since entering the writing state, it can be determined that the user wants to delete the previously written track; the writing track displayed on the display screen is cleared, and the fingertip position information stored since entering the writing state is deleted. The user may then perform the start-writing gesture again and redo the mid-air handwriting, and the device correspondingly performs step 402.
Step 418: periodically poll the candidate items of the current page, and acquire a third hand video image from the video image captured by the camera.
Step 420: when the hand gesture in the third hand video image is detected to be the start-writing gesture, commit the currently polled candidate item to the screen and enter the writing state again.
In the embodiment of the present invention, the number of candidates displayed per page in the candidate bar may be preset; when the candidate information contains more candidates than fit on one page, it may be displayed across two or more pages of the candidate bar. After the candidate information is displayed, if the user finds the required candidate on the current page, it can be selected from that page: the mid-air handwriting device periodically polls each candidate of the current page, and when the polled candidate is the one the user wants, the user performs the start-writing gesture to select it; until that candidate is polled, the user may keep holding the selection gesture. If the required candidate is not on the current page, the user can perform a page-turn gesture, and correspondingly the device turns to the next page; if the required candidate is on the next page, it can be selected from there, with the device polling the candidates of the next page in the same way.
Therefore, while periodically polling each candidate of the current page, the mid-air handwriting device also acquires hand video images from the video captured by the camera. A hand video image acquired after the candidate information is displayed and before the hand gesture is determined to be the start-writing gesture may be called a third hand video image, and the polling period may be set as needed. The device then judges the hand gesture in the third hand video image. When the gesture is detected to be the start-writing gesture, step 420 is performed: the currently polled candidate is committed to the screen, the writing state is entered again, and step 406 may be performed, so that the user can continue writing after selecting the candidate to input, making the operation smooth and simple. When the gesture is detected to be a page-turn gesture, the page is turned and the candidates of the next page are displayed; execution then continues at step 418. When the gesture is still the selection gesture, step 418 continues, that is, polling continues and the next third hand video image is obtained from the next frame. As shown in fig. 2b, the first candidate is being polled; when the polling period elapses without the start-writing gesture being detected in the third hand video image, polling continues. As shown in fig. 5g, when the sixth candidate is being polled and, before the next polling period arrives, the hand gesture in the third hand video image is detected to be the start-writing gesture, the polled sixth candidate is committed to the screen, as shown in fig. 5h.
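The polling behavior of steps 418-420 can be sketched as follows, with gesture detection mocked as a precomputed sequence (one entry per polling period); the candidate strings and gesture labels are illustrative:

```python
import itertools

# Sketch of candidate polling: candidates on the current page are
# highlighted in turn once per polling period; a start-writing gesture
# commits the currently polled candidate. Gesture detection is mocked as
# a precomputed list, one detected gesture per polling period.

def poll_candidates(candidates, gestures_per_period):
    """Return the committed candidate, or None if the sequence runs out."""
    cycle = itertools.cycle(range(len(candidates)))
    for idx, gesture in zip(cycle, gestures_per_period):
        if gesture == "start_writing":
            return candidates[idx]   # commit the polled candidate
        # any other gesture (e.g. held selection): keep polling
    return None

# Mirrors the fig. 5g example: the user holds the selection gesture for
# five periods, then performs the start-writing gesture while the sixth
# candidate is polled.
page = ["搜", "搜索", "搜寻", "搜查", "搜集", "搜狐"]
picked = poll_candidates(page, ["selection"] * 5 + ["start_writing"])
print(picked)  # the sixth candidate
```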
Step 422: if the gesture in the first hand video image is the delete gesture and no fingertip position information of the writing finger has been acquired since entering the writing state, delete the most recently committed candidate item.
In the embodiment of the present invention, after a candidate item is committed to the screen and the writing state is entered again, the user may perform the delete gesture before performing the writing gesture in order to delete the most recently committed candidate item. Therefore, after entering the writing state and acquiring the first hand video image, if the gesture is the delete gesture and no fingertip position information of the writing finger has been acquired since entering the writing state, it can be determined that the user wants to delete the most recently committed candidate item, which is then deleted. In this way, the user can conveniently delete content already committed to the screen by performing the delete operation.
In summary, in the embodiment of the present invention, when it is determined that the writing state has been entered, a first hand video image may be acquired from the video image captured by the camera; the first hand video image is then processed to obtain the fingertip position information of the writing finger, and the writing track is displayed on the display screen according to that information. After the writing state ends, the writing track is recognized according to the fingertip position information to obtain and display the corresponding candidate information. Because recognition of the writing track relies on fingertip position information determined by image processing rather than depth information, recognition efficiency can be improved. Moreover, since no depth information is needed, no depth camera is needed to capture the video image, which reduces the cost of mid-air handwriting recognition.
Second, background subtraction may be performed on the first hand video image using a Gaussian mixture model, extracting the foreground to obtain a first image; the first image is filtered with a skin color model to obtain a second image; morphological processing is applied to the second image to obtain a third image; and the third image is binarized to obtain the first preprocessed image. Judging whether the hand gesture in the first hand video image is the writing gesture from this first preprocessed image improves the accuracy of the subsequent gesture recognition.
Further, in the embodiment of the present invention, when judging from the first preprocessed image whether the hand gesture in the first hand video image is the writing gesture, gesture recognition may be performed by contour detection and convex hull detection on the first preprocessed image, and the hand gesture in the first preprocessed image may also be recognized with a static gesture recognition model, improving the accuracy of gesture recognition.
Further, in the embodiment of the present invention, after the candidate information is displayed, each candidate item of the current page may be polled periodically while a third hand video image is acquired from the video image captured by the camera; when the hand gesture in the third hand video image is detected to be the start-writing gesture, the polled candidate item is committed to the screen and the writing state is entered again. This both commits the candidate the user selected and re-enters the writing state, so the user can continue handwriting input simply by performing the writing gesture; the operations between committing a candidate and writing the next character are reduced, which is simple and quick and can improve the user experience.
Further, in the embodiment of the present invention, if the gesture in the first hand video image is the delete gesture and no fingertip position information of the writing finger has been acquired since entering the writing state, the most recently committed candidate item is deleted, which makes it convenient for the user to delete committed content; and if the gesture is the delete gesture and fingertip position information of the writing finger has been acquired since entering the writing state, the writing track displayed on the display screen is cleared, which makes it convenient for the user to delete the input track and improves the user experience.
Third, in the embodiment of the present invention, the fingertip position information may be input into a handwriting engine, which recognizes the writing track from it, obtains the candidate information and returns it; the returned candidate information is then received. This improves the accuracy of writing-track recognition.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 6, a block diagram of an embodiment of an identification apparatus according to the present invention is shown, which may specifically include the following modules:
the acquisition module 602 is configured to enter a writing state and acquire a first hand video image from a video image acquired by a camera;
an image processing module 604, configured to perform image processing on the first hand video image, acquire fingertip position information of a writing finger, and display a writing track on a display screen according to the fingertip position information;
and the identifying module 606 is configured to identify the writing track according to the fingertip position information after the writing state is finished, obtain corresponding candidate information, and display the candidate information.
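The cooperation of the acquisition, image-processing, and recognition modules above can be sketched as a small control loop. This is a minimal, hedged illustration: the class name and the three injected helper callables (`capture_frame`, `extract_fingertip`, `recognize_track`) are assumptions for demonstration, not part of the patent's implementation.

```python
# Minimal sketch of the acquire -> process -> display -> recognize flow.
# All helper callables are hypothetical stand-ins supplied by the caller.

class AirWritingRecognizer:
    def __init__(self, capture_frame, extract_fingertip, recognize_track):
        self.capture_frame = capture_frame          # () -> hand video image
        self.extract_fingertip = extract_fingertip  # image -> (x, y) or None
        self.recognize_track = recognize_track      # [(x, y), ...] -> candidates
        self.track = []
        self.writing = False

    def enter_writing_state(self):
        self.writing = True
        self.track = []

    def step(self):
        """Process one frame while in the writing state."""
        if not self.writing:
            return None
        tip = self.extract_fingertip(self.capture_frame())
        if tip is not None:
            self.track.append(tip)  # in a real UI the point is also drawn on screen
        return tip

    def end_writing_state(self):
        """Leave the writing state and recognize the accumulated track."""
        self.writing = False
        return self.recognize_track(self.track)
```

For example, feeding a short sequence of simulated frames accumulates fingertip points into the track, and ending the writing state hands the track to the recognizer.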
Referring to fig. 7, a block diagram of an alternative embodiment of an identification apparatus of the present invention is shown.
In an optional embodiment of the present invention, the obtaining module 602 is configured to obtain a video image acquired by a camera, and extract an image of a handwriting area in the video image as a first hand video image.
In an alternative embodiment of the present invention, the image processing module 604 comprises:
a preprocessing submodule 6042 configured to preprocess the first hand video image to obtain a first preprocessed image;
a gesture determination sub-module 6044, configured to determine, according to the first preprocessed image, whether a gesture of a hand in the first hand video image is a writing gesture;
a position obtaining sub-module 6046, configured to obtain fingertip position information of a writing finger if the hand gesture in the first hand video image is a writing gesture.
In an optional embodiment of the present invention, the preprocessing sub-module 6042 is configured to perform background subtraction on the first hand video image by using a Gaussian mixture model, extracting a foreground from the first hand video image to obtain a first image; filter the first image by using a skin color model to obtain a second image; perform morphological processing on the second image to obtain a third image; and perform binarization processing on the third image to obtain a first preprocessed image.
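The four preprocessing stages can be sketched as below. This is a simplified illustration under stated assumptions: a plain frame-vs-background difference stands in for the mixture-of-Gaussians model, the RGB rule is one common skin-color heuristic, and the 4-neighbor erosion/dilation wraps at the image border — none of these choices are claimed to be the patented implementation.

```python
import numpy as np

def erode(mask):
    # 4-neighbor erosion (border wrap-around is ignored for this sketch).
    out = mask.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        out &= np.roll(mask, shift, axis=(0, 1))
    return out

def dilate(mask):
    # 4-neighbor dilation, the inverse step of the opening.
    out = mask.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        out |= np.roll(mask, shift, axis=(0, 1))
    return out

def preprocess(frame, background, diff_thresh=30):
    # 1) Background subtraction: keep pixels that differ from the background
    #    (a crude stand-in for a Gaussian mixture background model).
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
    foreground = frame * (diff > diff_thresh)[..., None]

    # 2) Skin-color filtering with a simple RGB heuristic.
    r = foreground[..., 0].astype(int)
    g = foreground[..., 1].astype(int)
    b = foreground[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

    # 3) Morphological opening (erosion then dilation) removes speckle noise.
    opened = dilate(erode(skin))

    # 4) Binarization: return a 0/255 mask.
    return opened.astype(np.uint8) * 255
```

Running this on a frame containing a skin-colored region against a static background yields a binary mask in which only the interior of that region survives the opening.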
In an optional embodiment of the present invention, the gesture determination sub-module 6044 includes:
a first judging unit 60442, configured to perform contour detection on the first preprocessed image, and determine a hand contour in the first preprocessed image; carrying out convex hull detection on the hand outline, and determining the number of the protruding fingers and the fingertip position information of each protruding finger; and if the number of the protruding fingers is matched with the set number of the protruding fingers corresponding to the writing gesture, determining that the gesture of the hand in the first hand video image is the writing gesture.
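The contour/convex-hull step described above can be approximated as follows. This is a crude sketch, not the patented method: the hull is computed with Andrew's monotone chain, and hull points lying well above the contour centroid (smaller y, since image y grows downward) are treated as candidate fingertips. Real pipelines typically refine this with convexity-defect analysis, which is omitted here; the `min_rise` threshold is an illustrative assumption.

```python
def cross(o, a, b):
    # Cross product of vectors OA and OB; sign gives the turn direction.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain algorithm.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def fingertip_candidates(contour, min_rise=20):
    # Hull points far above the centroid are treated as protruding fingertips.
    cy = sum(p[1] for p in contour) / len(contour)
    return [p for p in convex_hull(contour) if cy - p[1] > min_rise]
```

On a toy contour of a palm-shaped blob with two finger-like spikes, the two spike tips are the only hull points that rise far enough above the centroid.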
In an optional embodiment of the present invention, the gesture determination sub-module 6044 includes:
a second determining unit 60444, configured to input the first preprocessed image into a static gesture recognition model to obtain a probability score of each gesture, where the gesture at least includes: a start writing gesture, a selecting gesture, a deleting gesture, and other gestures; and if the probability score of the writing gesture is the highest, determining that the gesture of the hand in the first hand video image is the writing gesture.
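The decision step after the static gesture recognition model amounts to picking the gesture with the highest probability score. The sketch below illustrates that argmax step; the gesture names, the score format, and the optional confidence floor are illustrative assumptions, not a specification of the model's actual output.

```python
# Hypothetical gesture labels; the model returns one probability per label.
GESTURES = ("start_writing", "writing", "selecting", "deleting", "other")

def classify(scores, min_confidence=0.0):
    """Pick the gesture with the highest probability score.

    scores: mapping of gesture name -> probability from the static model.
    Falls back to "other" when the best score is below min_confidence.
    """
    best = max(GESTURES, key=lambda g: scores.get(g, 0.0))
    return best if scores.get(best, 0.0) >= min_confidence else "other"
```

For example, a score map dominated by the writing gesture classifies as writing, while a low-confidence result can be rejected as "other".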
In an optional embodiment of the present invention, the position obtaining sub-module 6046 is configured to determine fingertip position information of the protruding finger as fingertip position information of the writing finger.
In an optional embodiment of the present invention, the apparatus further comprises:
a writing state start determining module 608, configured to obtain a second hand video image from the video image acquired by the camera; judging whether the gesture corresponding to the hand in the second hand video image is a writing starting gesture; and if the gesture corresponding to the hand in the second hand video image is a writing starting gesture, entering a writing state.
In an optional embodiment of the present invention, the apparatus further comprises:
a writing state end determining module 610, configured to determine that the writing state is ended if the gesture of the hand in the first hand video image is a selection gesture.
In an optional embodiment of the present invention, the candidate information includes a plurality of candidates, and the apparatus further includes:
the polling module 612 is configured to poll each candidate item of the current page periodically, and acquire a third hand video image from the video image acquired by the camera;
and the screen-entry module 614 is configured to, when it is detected that the gesture of the hand in the third hand video image is a writing starting gesture, enter the currently polled candidate item on the screen and enter the writing state again.
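The polling/selection interaction implemented by modules 612 and 614 can be sketched as a small state object: candidates on the current page are highlighted in turn on a timer, and a start-writing gesture commits the currently highlighted candidate and signals re-entry into the writing state. The class and method names here are illustrative assumptions.

```python
class CandidatePoller:
    def __init__(self, candidates):
        self.candidates = candidates
        self.index = 0          # currently highlighted candidate on the page
        self.committed = []     # text already entered on screen

    def tick(self):
        """Advance the highlight to the next candidate (called periodically)."""
        self.index = (self.index + 1) % len(self.candidates)

    def on_gesture(self, gesture):
        """Commit the highlighted candidate on a start-writing gesture.

        Returns True when the writing state should be re-entered.
        """
        if gesture == "start_writing":
            self.committed.append(self.candidates[self.index])
            return True
        return False
```

A short interaction: two timer ticks move the highlight to the third candidate, and a start-writing gesture commits it and requests the writing state again.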
In an optional embodiment of the present invention, the apparatus further comprises:
a first deleting module 616, configured to delete the candidate item that is last displayed if the gesture in the first hand video image is a deleting gesture and the fingertip position information of the writing finger is not obtained after the first hand video image enters the writing state.
In an optional embodiment of the present invention, the apparatus further comprises:
a second deleting module 618, configured to clear the writing track displayed on the display screen if the gesture in the first hand video image is a deleting gesture and the fingertip position information of the writing finger is acquired after the first hand video image enters the writing state.
In an optional embodiment of the present invention, the recognition module 606 is configured to input the fingertip position information into a handwriting engine, so that the handwriting engine recognizes a writing track according to the fingertip position information, obtains candidate information, and returns the candidate information; and receiving candidate information returned by the handwriting engine.
In the embodiment of the invention, upon entering the writing state, a first hand video image can be acquired from a video image captured by a camera; the first hand video image is then subjected to image processing, the fingertip position information of a writing finger is acquired, and a writing track is displayed on a display screen according to the fingertip position information; after the writing state is finished, the writing track is recognized according to the fingertip position information, and corresponding candidate information is obtained and displayed. Furthermore, since recognition of the writing track is realized through fingertip position information determined by image processing, depth information is not required for recognition, and recognition efficiency can be improved. In addition, because the embodiment of the invention does not need depth information for recognition, a depth camera is not needed to collect video images, which reduces the cost of in-air handwriting recognition.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
Fig. 8 is a block diagram illustrating a structure of an electronic device 800 for identification, according to an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power components 806 provide power to the various components of the electronic device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform an identification method, the method comprising: entering a writing state, and acquiring a first hand video image from a video image acquired by a camera; performing image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information; and after the writing state is finished, recognizing the writing track according to the fingertip position information to obtain corresponding candidate information and displaying the candidate information.
Optionally, the acquiring a first hand video image from a video image captured by a camera includes: the method comprises the steps of obtaining a video image collected by a camera, and extracting an image of a handwriting area in the video image to serve as a first hand video image.
Optionally, the image processing the first hand video image includes: preprocessing the first hand video image to obtain a first preprocessed image; judging whether the hand gesture in the first hand video image is a writing gesture according to the first preprocessed image; and if the hand gesture in the first hand video image is a writing gesture, executing a step of acquiring fingertip position information of a writing finger.
Optionally, the preprocessing the first hand video image to obtain a first preprocessed image includes: performing background subtraction on the first hand video image by adopting a Gaussian mixture model, and extracting a foreground from the first hand video image to obtain a first image; filtering the first image by adopting a skin color model to obtain a second image; performing morphological processing on the second image to obtain a third image; and carrying out binarization processing on the third image to obtain a first preprocessed image.
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: carrying out contour detection on the first preprocessed image, and determining a hand contour in the first preprocessed image; carrying out convex hull detection on the hand outline, and determining the number of the protruding fingers and the fingertip position information of each protruding finger; and if the number of the protruding fingers is matched with the set number of the protruding fingers corresponding to the writing gesture, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: inputting the first preprocessed image into a static gesture recognition model to obtain a probability score of each gesture, wherein the gestures at least comprise: a start writing gesture, a selecting gesture, a deleting gesture, and other gestures; and if the probability score of the writing gesture is the highest, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the acquiring fingertip position information of the writing finger includes: the fingertip position information of the protruding finger is determined as the fingertip position information of the writing finger.
Optionally, the method further comprises: acquiring a second hand video image from the video image acquired by the camera; judging whether the hand gesture in the second hand video image is a writing starting gesture; and entering a writing state if the hand gesture in the second hand video image is a writing starting gesture.
Optionally, the method further comprises: and if the hand gesture in the first hand video image is a selection gesture, determining that the writing state is finished.
Optionally, the candidate information includes a plurality of candidate items, and after the presenting step, the method further includes: periodically polling each candidate item of the current page, and acquiring a third hand video image from the video image acquired by the camera; and when it is detected that the hand gesture in the third hand video image is a writing starting gesture, entering the currently polled candidate item on the screen and entering the writing state again.
Optionally, the method further comprises: and if the gesture in the first hand video image is a deleting gesture and the fingertip position information of the writing finger is not acquired after the first hand video image enters the writing state, deleting the candidate item which is finally displayed on the screen.
Optionally, the method further comprises: and if the gesture in the first hand video image is a deleting gesture and the fingertip position information of the writing finger is acquired after the first hand video image enters the writing state, clearing the writing track displayed on the display screen.
Optionally, the recognizing the writing track according to the fingertip position information to obtain corresponding candidate information includes: inputting the fingertip position information into a handwriting engine so that the handwriting engine can identify a writing track according to the fingertip position information to obtain candidate information and return the candidate information; and receiving candidate information returned by the handwriting engine.
Fig. 9 is a schematic structural diagram of an electronic device 900 for identification according to another exemplary embodiment of the present invention. The electronic device 900 may be a server, which may vary widely depending on configuration or performance, and may include one or more Central Processing Units (CPUs) 922 (e.g., one or more processors) and memory 932, one or more storage media 930 (e.g., one or more mass storage devices) storing applications 942 or data 944. Memory 932 and storage media 930 can be, among other things, transient storage or persistent storage. The program stored on the storage medium 930 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 922 may be arranged to communicate with the storage medium 930 to execute a series of instruction operations in the storage medium 930 on the server.
The server may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input-output interfaces 958, one or more keyboards 956, and/or one or more operating systems 941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for: entering a writing state, and acquiring a first hand video image from a video image acquired by a camera; performing image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information; and after the writing state is finished, recognizing the writing track according to the fingertip position information to obtain corresponding candidate information and displaying the candidate information.
Optionally, the acquiring a first hand video image from a video image captured by a camera includes: the method comprises the steps of obtaining a video image collected by a camera, and extracting an image of a handwriting area in the video image to serve as a first hand video image.
Optionally, the image processing the first hand video image includes: preprocessing the first hand video image to obtain a first preprocessed image; judging whether the hand gesture in the first hand video image is a writing gesture according to the first preprocessed image; and if the hand gesture in the first hand video image is a writing gesture, executing a step of acquiring fingertip position information of a writing finger.
Optionally, the preprocessing the first hand video image to obtain a first preprocessed image includes: performing background subtraction on the first hand video image by adopting a Gaussian mixture model, and extracting a foreground from the first hand video image to obtain a first image; filtering the first image by adopting a skin color model to obtain a second image; performing morphological processing on the second image to obtain a third image; and carrying out binarization processing on the third image to obtain a first preprocessed image.
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: carrying out contour detection on the first preprocessed image, and determining a hand contour in the first preprocessed image; carrying out convex hull detection on the hand outline, and determining the number of the protruding fingers and the fingertip position information of each protruding finger; and if the number of the protruding fingers is matched with the set number of the protruding fingers corresponding to the writing gesture, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the determining, according to the first preprocessed image, whether a hand gesture in the first hand video image is a writing gesture includes: inputting the first preprocessed image into a static gesture recognition model to obtain a probability score of each gesture, wherein the gestures at least comprise: a start writing gesture, a selecting gesture, a deleting gesture, and other gestures; and if the probability score of the writing gesture is the highest, determining that the gesture of the hand in the first hand video image is the writing gesture.
Optionally, the acquiring fingertip position information of the writing finger includes: the fingertip position information of the protruding finger is determined as the fingertip position information of the writing finger.
Optionally, further comprising instructions for: acquiring a second hand video image from the video image acquired by the camera; judging whether the hand gesture in the second hand video image is a writing starting gesture; and entering a writing state if the hand gesture in the second hand video image is a writing starting gesture.
Optionally, further comprising instructions for: and if the hand gesture in the first hand video image is a selection gesture, determining that the writing state is finished.
Optionally, the candidate information includes a plurality of candidates, and after the presenting step, further comprising instructions for: periodically polling each candidate item of the current page, and acquiring a third hand video image from the video image acquired by the camera; and when it is detected that the hand gesture in the third hand video image is a writing starting gesture, entering the currently polled candidate item on the screen and entering the writing state again.
Optionally, further comprising instructions for: and if the gesture in the first hand video image is a deleting gesture and the fingertip position information of the writing finger is not acquired after the first hand video image enters the writing state, deleting the candidate item which is finally displayed on the screen.
Optionally, further comprising instructions for: and if the gesture in the first hand video image is a deleting gesture and the fingertip position information of the writing finger is acquired after the first hand video image enters the writing state, clearing the writing track displayed on the display screen.
Optionally, the recognizing the writing track according to the fingertip position information to obtain corresponding candidate information includes: inputting the fingertip position information into a handwriting engine so that the handwriting engine can identify a writing track according to the fingertip position information to obtain candidate information and return the candidate information; and receiving candidate information returned by the handwriting engine.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The above detailed description of the identification method, the identification device and the electronic device provided by the present invention, and the specific examples are applied herein to explain the principle and the implementation of the present invention, and the above descriptions of the embodiments are only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An identification method, comprising:
entering a writing state, and acquiring a first hand video image from a video image acquired by a camera;
performing image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information;
and after the writing state is finished, recognizing the writing track according to the fingertip position information to obtain corresponding candidate information and displaying the candidate information.
2. The method of claim 1, wherein the obtaining a first hand video image from a video image captured by a camera comprises:
the method comprises the steps of obtaining a video image collected by a camera, and extracting an image of a handwriting area in the video image to serve as a first hand video image.
3. The method of claim 1, wherein said image processing said first hand video image comprises:
preprocessing the first hand video image to obtain a first preprocessed image;
judging whether the hand gesture in the first hand video image is a writing gesture according to the first preprocessed image;
and if the hand gesture in the first hand video image is a writing gesture, executing a step of acquiring fingertip position information of a writing finger.
4. The method of claim 3, wherein the pre-processing the first hand video image to obtain a first pre-processed image comprises:
performing background subtraction on the first hand video image by adopting a Gaussian mixture model, and extracting a foreground from the first hand video image to obtain a first image;
filtering the first image by adopting a skin color model to obtain a second image;
performing morphological processing on the second image to obtain a third image;
and carrying out binarization processing on the third image to obtain a first preprocessed image.
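As a rough illustration of the pipeline in claim 4 (not the patented implementation: the mixture-of-Gaussians background model and the skin color model are replaced here by simple per-pixel thresholds on a grayscale grid, and the example values are made up):

```python
def foreground_mask(frame, background, thresh=25):
    # Stand-in for the mixture-of-Gaussians step: mark a pixel as
    # foreground when it deviates enough from the background estimate.
    return [[1 if abs(f - b) > thresh else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def skin_filter(mask, frame, lo=80, hi=200):
    # Stand-in for the skin color model: keep only foreground pixels whose
    # intensity lies in a plausible skin band (real models use YCrCb/HSV).
    return [[m if lo <= f <= hi else 0 for m, f in zip(mr, fr)]
            for mr, fr in zip(mask, frame)]

def dilate3x3(mask):
    # Morphological dilation: fills small holes so the hand forms one blob.
    h, w = len(mask), len(mask[0])
    return [[max(mask[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

# A 4x5 grayscale frame: a hand-like blob (160) and a bright non-skin
# highlight (250) over a flat background (100).
frame = [[100, 100, 100, 100, 100],
         [100, 160, 160, 100, 100],
         [100, 160, 160, 100, 250],
         [100, 100, 100, 100, 100]]
background = [[100] * 5 for _ in range(4)]

fg = foreground_mask(frame, background)   # both blobs are foreground
hand = skin_filter(fg, frame)             # the 250 highlight is dropped
binary = dilate3x3(hand)                  # mask is already 0/1, i.e. binary
```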
5. The method of claim 3, wherein the determining whether the gesture of the hand in the first hand video image is a writing gesture according to the first preprocessed image comprises:
carrying out contour detection on the first preprocessed image, and determining a hand contour in the first preprocessed image;
carrying out convex hull detection on the hand outline, and determining the number of the protruding fingers and the fingertip position information of each protruding finger;
and if the number of the protruding fingers is matched with the set number of the protruding fingers corresponding to the writing gesture, determining that the gesture of the hand in the first hand video image is the writing gesture.
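The contour/convex-hull test of claim 5 can be illustrated with a pure-Python hull (Andrew's monotone chain). The contour points, palm line, and the one-finger threshold below are made-up example values, not from the patent:

```python
def convex_hull(points):
    # Andrew's monotone chain: returns the hull vertices of a point set.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def protruding_fingertips(contour, palm_y):
    # With image rows growing downward, a protruding fingertip shows up
    # as a hull vertex above the palm line.
    return [p for p in convex_hull(contour) if p[1] < palm_y]

# Toy hand contour: a 10x10 palm box plus one raised finger at (5, 0).
contour = [(0, 10), (10, 10), (10, 20), (0, 20), (5, 0), (4, 5), (6, 5)]
tips = protruding_fingertips(contour, palm_y=10)

# Claim 5's matching rule, assuming the writing gesture is one finger.
is_writing_gesture = (len(tips) == 1)
```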
6. The method of claim 3, wherein the determining whether the gesture of the hand in the first hand video image is a writing gesture according to the first preprocessed image comprises:
inputting the first preprocessed image into a static gesture recognition model to obtain a probability score of each gesture, wherein the gestures at least comprise: a start writing gesture, a selecting gesture, a deleting gesture, and other gestures;
and if the probability score of the writing gesture is the highest, determining that the gesture of the hand in the first hand video image is the writing gesture.
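A sketch of the scoring rule in claim 6. The gesture labels and logit values are hypothetical, and the softmax merely stands in for whatever static gesture recognition model produces the probability scores:

```python
import math

GESTURES = ["start_writing", "select", "delete", "other"]  # hypothetical labels

def gesture_probabilities(logits):
    # Softmax turns raw model scores into a probability per gesture.
    m = max(logits)
    exps = [math.exp(s - m) for s in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    # Claim 6's rule: the gesture with the highest probability score wins.
    probs = gesture_probabilities(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return GESTURES[best], probs[best]

label, confidence = classify([2.5, 0.1, -1.0, 0.3])
```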
7. The method of claim 5, wherein the acquiring fingertip position information of the writing finger comprises:
the fingertip position information of the protruding finger is determined as the fingertip position information of the writing finger.
8. An identification device, comprising:
the acquisition module is used for entering a writing state and acquiring a first hand video image from a video image acquired by the camera;
the image processing module is used for carrying out image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information;
and the recognition module is used for recognizing the writing track according to the fingertip position information after the writing state is finished, and obtaining and displaying corresponding candidate information.
9. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the identification method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
entering a writing state, and acquiring a first hand video image from a video image acquired by a camera;
performing image processing on the first hand video image, acquiring fingertip position information of a writing finger and displaying a writing track on a display screen according to the fingertip position information;
and after the writing state is finished, recognizing the writing track according to the fingertip position information to obtain corresponding candidate information and displaying the candidate information.
Application CN201811615801.3A (filed 2018-12-27, priority 2018-12-27): Identification method and device and electronic equipment. Status: Pending. Published as CN111382598A (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811615801.3A CN111382598A (en) 2018-12-27 2018-12-27 Identification method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111382598A (en) 2020-07-07

Family

ID=71214489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811615801.3A Pending CN111382598A (en) 2018-12-27 2018-12-27 Identification method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111382598A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142084A (en) * 2011-05-06 2011-08-03 Beijing Wangshang Digital Film Cinema Co Ltd Method for gesture recognition
US20120105613A1 (en) * 2010-11-01 2012-05-03 Robert Bosch Gmbh Robust video-based handwriting and gesture recognition for in-car applications
WO2013075466A1 (en) * 2011-11-23 2013-05-30 ZTE Corporation Character input method, device and terminal based on image sensing module
CN103403650A (en) * 2012-10-31 2013-11-20 Huawei Device Co Ltd Drawing control method, apparatus and mobile terminal
CN104036251A (en) * 2014-06-20 2014-09-10 University of Shanghai for Science and Technology Method for recognizing gestures on basis of embedded Linux system
CN104793724A (en) * 2014-01-16 2015-07-22 Beijing Samsung Telecommunications Technology Research Co Ltd Sky-writing processing method and device
CN105320248A (en) * 2014-06-03 2016-02-10 Shenzhen TCL New Technology Co Ltd Mid-air gesture input method and device
CN107831894A (en) * 2017-11-06 2018-03-23 Zhejiang University of Technology Air-gesture blackboard-writing method suitable for mobile terminals


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Xiaojian: "Research on Detection and Tracking of Moving Targets in Complex Environments", Beijing: Beijing University of Posts and Telecommunications Press, pages: 216 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112199015A (en) * 2020-09-15 2021-01-08 安徽鸿程光电有限公司 Intelligent interaction all-in-one machine and writing method and device thereof
CN112199015B (en) * 2020-09-15 2022-07-22 安徽鸿程光电有限公司 Intelligent interaction all-in-one machine and writing method and device thereof
CN112684895A (en) * 2020-12-31 2021-04-20 安徽鸿程光电有限公司 Marking method, device, equipment and computer storage medium

Similar Documents

Publication Publication Date Title
CN105224195B (en) Terminal operation method and device
US11138422B2 (en) Posture detection method, apparatus and device, and storage medium
US11455491B2 (en) Method and device for training image recognition model, and storage medium
CN107239535A (en) Similar pictures search method and device
JP2018503201A (en) Region extraction method, model training method and apparatus
CN105095881A (en) Method, apparatus and terminal for face identification
CN106484138B (en) A kind of input method and device
EP4273742A1 (en) Handwriting recognition method and apparatus, electronic device, and medium
CN115176456A (en) Content operation method, device, terminal and storage medium
CN104735243A (en) Method and device for displaying contact list
WO2019007236A1 (en) Input method, device, and machine-readable medium
CN111382598A (en) Identification method and device and electronic equipment
CN110858291A (en) Character segmentation method and device
CN107422921B (en) Input method, input device, electronic equipment and storage medium
CN105094297A (en) Display content zooming method and display content zooming device
CN108073291B (en) Input method and device and input device
CN110519517B (en) Copy guiding method, electronic device and computer readable storage medium
CN113936697A (en) Voice processing method and device for voice processing
CN109740557B (en) Object detection method and device, electronic equipment and storage medium
WO2020034763A1 (en) Gesture recognition method, and gesture processing method and apparatus
US20230393649A1 (en) Method and device for inputting information
CN109542244B (en) Input method, device and medium
CN110968246A (en) Intelligent Chinese handwriting input recognition method and device
US10198614B2 (en) Method and device for fingerprint recognition
CN113873165A (en) Photographing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination