CN115185381A - Method and device for controlling terminal based on motion trail of head - Google Patents

Method and device for controlling terminal based on motion trail of head

Info

Publication number: CN115185381A
Application number: CN202211118626.3A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 丁军红
Current and original assignee: Beijing Aerospace Aoxiang Ventilation Technology Co ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation as to its accuracy)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: motion, determining, mode, abscissa, head
Application filed by Beijing Aerospace Aoxiang Ventilation Technology Co ltd; priority to CN202211118626.3A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method and a device for controlling a terminal based on the motion trail of the head. The method comprises the following steps: acquiring a plurality of face images captured at consecutive time points, wherein the plurality of face images correspond to the same user; determining the motion trail of the head according to the captured face images; determining a motion mode corresponding to the motion trail; and performing a control operation on the terminal according to the motion mode. Even when the user is in a noisy environment and both hands cannot slide on the terminal, the terminal can still be controlled based on the motion trail of the head.

Description

Method and device for controlling terminal based on motion trail of head
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for controlling a terminal based on a motion trajectory of a head.
Background
With technical and social progress, simple interaction with a terminal through external input devices such as a mouse, a keyboard, a touch pad and keys is no longer enough to meet people's needs. To make interaction between the user and the terminal simpler, interacting with the terminal through means such as voice and gestures has become a mainstream development trend in recent years. Specifically, voice interaction converts the user's speech into text through speech recognition and controls the terminal with the instruction corresponding to that text. Gesture interaction recognizes a sliding track on the terminal and converts it into a terminal instruction that controls the terminal's behavior.
However, the voice interaction technology is only suitable for quiet environments, and the gesture interaction technology is only suitable for situations in which both hands can slide on the terminal. Thus, when the user is in a noisy environment and both hands cannot slide on the terminal, the user cannot interact with the terminal at all, which degrades the user's operating experience.
Disclosure of Invention
In view of this, the present application provides a method and an apparatus for controlling a terminal based on a motion trajectory of a head, so that even when a user is in a noisy environment and both hands cannot slide on the terminal, the terminal can be controlled and operated based on the motion trajectory of the head.
In order to achieve the above purpose, the present application mainly provides the following technical solutions:
in a first aspect, the present application provides a method for controlling a terminal based on a motion trajectory of a head, the method including:
acquiring a plurality of face images captured at consecutive time points, wherein the plurality of face images correspond to the same user;
determining a motion track of the head according to the plurality of face images;
determining a motion mode corresponding to the motion track;
and controlling and operating the terminal according to the motion mode.
In a second aspect, the present application provides an apparatus for controlling a terminal based on a motion trajectory of a head, the apparatus comprising:
an acquisition unit, configured to acquire a plurality of face images captured at consecutive time points, wherein the plurality of face images correspond to the same user;
the first determining unit is used for determining the motion track of the head according to the plurality of face images acquired by the acquiring unit;
the second determining unit is used for determining a motion mode corresponding to the motion track determined by the first determining unit;
and the control unit is used for controlling and operating the terminal according to the motion mode determined by the second determination unit.
In a third aspect, the present application provides an electronic device comprising at least one processor, and at least one memory and a bus connected to the processor; the processor and the memory communicate with each other through the bus; and the processor is configured to call program instructions in the memory to perform the above method of controlling a terminal based on the motion trajectory of the head.
In a fourth aspect, the present application provides a storage medium for storing a computer program, where the computer program controls, when running, an apparatus in which the storage medium is located to execute the method for controlling a terminal based on a head movement trajectory described above.
By means of the above technical solutions, the present application provides a method and a device for controlling a terminal based on the motion trajectory of the head: a plurality of face images captured at consecutive time points are obtained, wherein the plurality of face images correspond to the same user; the motion trajectory of the head is determined according to the face images; the motion mode corresponding to the motion trajectory is determined; and a control operation is performed on the terminal according to the motion mode. It can be seen that even when the user is in a noisy environment and both hands cannot slide on the terminal, the terminal can still be controlled based on the motion trajectory of the head.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that they can be implemented according to the content of the specification, and to make the above and other objects, features and advantages of the present application more comprehensible, a detailed description of the present application follows.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating a method for controlling a terminal based on a head movement trajectory disclosed in the present application;
FIG. 2 is a schematic flow chart of a method of determining a motion trajectory as disclosed herein;
FIG. 3 is a schematic flow chart illustrating a method for determining a head-up mode according to the present disclosure;
FIG. 4 is a schematic flow chart of a method of determining a head-down mode as disclosed herein;
FIG. 5 is a schematic flow chart of a method of determining a left head-swing mode disclosed herein;
FIG. 6 is a flow chart illustrating a method of determining a right head-swing mode disclosed herein;
fig. 7 is a flowchart illustrating a method of controlling a terminal according to the present disclosure;
fig. 8 is a schematic structural diagram of an apparatus for controlling a terminal based on a head movement trace disclosed in the present application;
fig. 9 is a schematic structural diagram of still another apparatus for controlling a terminal based on a head movement trace disclosed in the present application;
fig. 10 is a block diagram of an apparatus disclosed herein.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the application provides a method for controlling a terminal based on the motion trajectory of the head. The execution subject of the method is the terminal currently used by the user; the terminal can recognize the motion trajectory of the user's head and, based on that trajectory, control its own operation. The specific steps are shown in fig. 1 and include:
step 101, obtaining a plurality of face images captured at consecutive time points.
Wherein, a plurality of face images correspond to the same user.
In a specific embodiment of this step, when the head movement of the user is detected, the face image captured at the current time point is obtained, and the face images captured at a preset number of consecutive time points before the current time point are obtained. Therefore, the face images captured at a plurality of continuous time points can be obtained.
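As a minimal sketch of this acquisition step (the OpenCV capture calls are only an assumed frame source, and head_movement_detected is a hypothetical placeholder, since the text does not fix a detection method), a fixed-size buffer can hold the frame at the current time point together with the preset number of frames captured before it:

    from collections import deque

    import cv2  # assumed frame source; any camera or video backend would do

    PRESET_COUNT = 9  # number of preceding frames to keep; illustrative value


    def head_movement_detected(frames):
        """Hypothetical placeholder: the text only states that head
        movement 'is detected', without specifying how."""
        return True


    def face_image_batches(camera_index=0):
        """Yield the frame captured at the current time point together
        with the PRESET_COUNT frames captured at the consecutive time
        points before it (oldest first)."""
        buffer = deque(maxlen=PRESET_COUNT + 1)
        cap = cv2.VideoCapture(camera_index)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                buffer.append(frame)
                if len(buffer) == buffer.maxlen and head_movement_detected(buffer):
                    yield list(buffer)
        finally:
            cap.release()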
Step 102, determining the motion track of the head according to the plurality of face images.
In a specific implementation manner of this step, each face image is identified, a key point in each face image is determined, and a position coordinate of each key point on the corresponding face image is determined, so as to obtain a position coordinate of each key point. And then, sequencing each position coordinate according to the time point corresponding to the face image in which each position coordinate is located to obtain the motion track of the key point, and determining the motion track as the motion track of the head.
In this application, the key point may be the tip of the nose. Specifically, the nose tip in each image is identified, and the position coordinates of the nose tip are obtained. After the position coordinates of the nose tip in each face image are obtained, the position coordinates of the nose tip in each face image are sequenced according to the time sequence of each face image, the motion track of the nose tip is obtained, and the motion track is determined as the motion track of the head.
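The trajectory-building step can be sketched as follows; detect_nose_tip is a hypothetical stand-in for any face-landmark detector, and the face images are assumed to arrive already sorted by capture time, matching the ordering described above:

    from typing import Iterable, List, Tuple

    Point = Tuple[int, int]  # (x, y), origin at the lower-left corner


    def detect_nose_tip(face_image) -> Point:
        """Hypothetical stand-in for a face-landmark detector that
        returns the nose-tip position coordinate on one face image."""
        raise NotImplementedError


    def head_trajectory(face_images: Iterable) -> List[Point]:
        """Collect the nose-tip coordinate of each face image in capture
        order; the resulting sequence is the motion trajectory of the head."""
        return [detect_nose_tip(img) for img in face_images]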
Step 103, determining a motion mode corresponding to the motion track.
The motion modes in the present application include a head-up mode, a head-down mode, a left head-swing mode, a right head-swing mode, a head right-side semicircle mode, a head left-side semicircle mode, a head-forward motion mode, and a head-backward motion mode.
In a specific implementation of this step, some motion trajectories have corresponding motion modes, and the terminal is then controlled according to those modes. Other trajectories may have no corresponding motion mode; in that case no control operation is performed on the terminal. Specifically, whether a motion mode corresponding to the motion trajectory exists is determined; if so, the motion mode is determined and the terminal is controlled according to it; if not, the process terminates.
In addition, in some cases the terminal itself cannot recognize the motion trajectory and the motion mode, for example when the terminal's memory is insufficient, the system base library does not provide support, or the operating system does not support it. In this case, in order to still perform the control operation on the terminal, it may be determined after step 101 whether the terminal supports recognition of the motion trajectory and motion mode; if so, step 102 is executed, and if not, the face images obtained in step 101 are sent to a server, so that the server executes steps 102 and 103 and returns the obtained motion mode to the terminal.
Step 104, performing a control operation on the terminal according to the motion mode.
In the embodiment of the application, a plurality of face images captured at consecutive time points are obtained, where the plurality of face images correspond to the same user; the motion track of the head is determined according to the captured face images; the motion mode corresponding to the motion track is determined; and the terminal is controlled according to the motion mode. It can be seen that even if the user is in a noisy environment and both hands cannot slide on the terminal, a control operation can still be performed on the terminal based on the motion track of the head.
Further, on the basis of the embodiment in fig. 1, the embodiment of the present application explains in detail the step "determining the motion trajectory of the head according to the plurality of face images captured" in fig. 1. As shown in fig. 2, the method includes:
step 201, identifying each face image, determining key points in each face image, and obtaining position coordinates of each key point on the corresponding face image.
It should be noted that, in the embodiment of the present application, for convenience of calculation, a coordinate system may be established on the face image with its origin at the lower-left corner, so that a position coordinate (x, y) denotes the x-th pixel in the y-th row counted from the bottom. For example, a pixel at column A and row B of the image has position coordinates (A, B).
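Because most image libraries index pixels from the top-left corner, a small conversion yields the lower-left-origin coordinates described above; this sketch assumes only that convention:

    def to_lower_left_origin(col, row, image_height):
        """Map a pixel at (column=col, row=row), counted from the
        top-left corner, to the (x, y) system whose origin lies at the
        lower-left corner of the face image."""
        return (col, image_height - 1 - row)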
Step 202, determining a motion track of the key point moving along with time according to the position coordinate and the time point corresponding to each face image.
In a specific embodiment of this step, the position coordinates corresponding to each face image are sorted according to the time point corresponding to each face image, a motion trajectory of the key point moving along with time is obtained, and the motion trajectory is determined as the motion trajectory of the head.
Further, on the basis of the embodiment of fig. 2, the embodiment of the present application explains in detail the step "determining the motion mode corresponding to the motion trajectory" in fig. 1. As shown in fig. 3, the method includes:
step 301, judging whether the motion track is a track from bottom to top.
In a specific embodiment of this step, a chronological order corresponding to each key point is determined according to the time point of each key point from step 202, and according to that order the position coordinates of the key points constituting the motion trajectory are denoted in sequence as (a1, b1), (a2, b2), (a3, b3), …, (an, bn), where n indicates the n-th position coordinate in the trajectory. Whether the ordinates of a preset number of these position coordinates increase over time is then judged; if so, the motion trajectory is determined to be a bottom-to-top trajectory and step 302 is executed. If not, the motion trajectory is determined not to be a bottom-to-top trajectory and step 302 is not executed.
Specifically, whether the motion trajectory is bottom-to-top can be judged by checking whether the ordinates of all the position coordinates increase over time, for example whether the key points constituting the trajectory satisfy bn > … > b3 > b2 > b1. If so, the motion trajectory is determined to be a bottom-to-top trajectory and step 302 is executed; if not, step 302 is not executed.
Step 302, if yes, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion trajectory.
In a specific embodiment of this step, the maximum and minimum abscissas and the maximum and minimum ordinates are determined with preset MAX and MIN functions.
Specifically, the maximum abscissa is MAX(a1, a2, a3, …, an), the minimum abscissa is MIN(a1, a2, a3, …, an), the maximum ordinate is MAX(b1, b2, b3, …, bn), and the minimum ordinate is MIN(b1, b2, b3, …, bn).
Step 303, detecting whether the difference between the maximum abscissa and the minimum abscissa is smaller than a first preset value, and whether the difference between the maximum ordinate and the minimum ordinate is larger than a second preset value.
Wherein the first preset value and the second preset value are real numbers larger than 0 set by a technician according to experience. In practice, since the variation of the ordinate is often larger than that of the abscissa during head-up, the second preset value may be preset to be larger than the first preset value.
Step 304, when the difference between the maximum abscissa and the minimum abscissa is smaller than the first preset value and the difference between the maximum ordinate and the minimum ordinate is larger than the second preset value, determining that the motion mode corresponding to the motion track is the head-up mode.
It should be noted that the order of the two checks may also be reversed: one may first detect whether the difference between the maximum and minimum abscissas is smaller than the first preset value and whether the difference between the maximum and minimum ordinates is larger than the second preset value, and then judge whether the motion trajectory is bottom-to-top; the two checks may also be performed simultaneously. When both are satisfied, the motion mode corresponding to the motion trajectory is determined to be the head-up mode.
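Steps 301 to 304 condense into a single check, sketched below with illustrative threshold values (the text leaves the first and second preset values to the technician). The head-down, left head-swing and right head-swing tests in the following sections are obtained by flipping the monotonicity test and swapping the roles of the abscissa and the ordinate:

    def is_head_up(trajectory, first_preset=20, second_preset=60):
        """Return True when the time-ordered trajectory
        [(a1, b1), ..., (an, bn)] runs bottom-to-top with a small
        horizontal spread and a large vertical spread; both preset
        values are illustrative assumptions."""
        xs = [a for a, _ in trajectory]
        ys = [b for _, b in trajectory]
        # Step 301: bottom-to-top means the ordinate increases over time.
        if not all(ys[i] < ys[i + 1] for i in range(len(ys) - 1)):
            return False
        # Steps 302-304: compare the coordinate spreads with the presets.
        return (max(xs) - min(xs) < first_preset
                and max(ys) - min(ys) > second_preset)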
Further, on the basis of the embodiment of fig. 2, the embodiment of the present application explains in detail the step "determining the motion mode corresponding to the motion trajectory" in fig. 1. As shown in fig. 4, the method includes:
step 401, judging whether the motion track is a track from top to bottom.
In a specific embodiment of this step, a chronological order corresponding to each key point is determined according to the time point of each key point from step 202, and the position coordinates of the key points constituting the motion trajectory are denoted in sequence as (a1, b1), (a2, b2), (a3, b3), …, (an, bn), where n indicates the n-th position coordinate in the trajectory. Whether the ordinates of a preset number of these position coordinates decrease over time is then judged; if so, the motion trajectory is determined to be a top-to-bottom trajectory and step 402 is executed. If not, the motion trajectory is determined not to be a top-to-bottom trajectory and step 402 is not executed.
Specifically, whether the motion trajectory is top-to-bottom can be judged by checking whether the ordinates of all the position coordinates decrease over time, for example whether the key points constituting the trajectory satisfy bn < … < b3 < b2 < b1. If so, the motion trajectory is determined to be a top-to-bottom trajectory and step 402 is executed; if not, step 402 is not executed.
Step 402, if yes, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion trajectory.
In a specific embodiment of this step, the maximum and minimum abscissas and the maximum and minimum ordinates are determined with the preset MAX and MIN functions.
Specifically, the maximum abscissa is MAX(a1, a2, a3, …, an), the minimum abscissa is MIN(a1, a2, a3, …, an), the maximum ordinate is MAX(b1, b2, b3, …, bn), and the minimum ordinate is MIN(b1, b2, b3, …, bn).
Step 403, detecting whether the difference between the maximum abscissa and the minimum abscissa is smaller than a third preset value, and whether the difference between the maximum ordinate and the minimum ordinate is larger than a fourth preset value.
Wherein the third preset value and the fourth preset value are real numbers larger than 0 set by a technician according to experience. In practice, since the ordinate typically varies more than the abscissa when the head is lowered, the fourth preset value may be set larger than the third preset value.
It should be noted that the third preset value may be the same as or different from the first preset value in step 303. Similarly, the fourth predetermined value may be the same as or different from the second predetermined value in step 303.
Step 404, when the difference between the maximum abscissa and the minimum abscissa is smaller than the third preset value and the difference between the maximum ordinate and the minimum ordinate is larger than the fourth preset value, determining that the motion mode corresponding to the motion trajectory is the head-down mode.
It should be noted that the order of the two checks may also be reversed or they may be performed simultaneously: one may first detect whether the difference between the maximum and minimum abscissas is smaller than the third preset value and whether the difference between the maximum and minimum ordinates is larger than the fourth preset value, and then judge whether the motion trajectory is top-to-bottom. When both are satisfied, the motion mode corresponding to the motion trajectory is determined to be the head-down mode.
Further, on the basis of the embodiment of fig. 2, the embodiment of the present application explains step 103 "determining a motion mode corresponding to a motion trajectory" in fig. 1 in detail. As shown in fig. 5, the method includes:
step 501, judging whether the motion track is a track from right to left.
In a specific embodiment of this step, a chronological order corresponding to each key point is determined according to the time point of each key point from step 202, and the position coordinates of the key points constituting the motion trajectory are denoted in sequence as (a1, b1), (a2, b2), (a3, b3), …, (an, bn), where n indicates the n-th position coordinate in the trajectory. Whether the abscissas of a preset number of these position coordinates decrease over time is then judged; if so, the motion trajectory is determined to be a right-to-left trajectory and step 502 is executed. If not, the motion trajectory is determined not to be a right-to-left trajectory and step 502 is not executed.
Specifically, whether the motion trajectory is right-to-left can be judged by checking whether the abscissas of all the position coordinates decrease over time, for example whether the key points constituting the trajectory satisfy an < … < a3 < a2 < a1. If so, the motion trajectory is determined to be a right-to-left trajectory and step 502 is executed; if not, step 502 is not executed.
Step 502, if yes, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion trajectory.
Step 503, detecting whether the difference between the maximum abscissa and the minimum abscissa is greater than a fifth preset value, and whether the difference between the maximum ordinate and the minimum ordinate is less than a sixth preset value.
Wherein the fifth preset value and the sixth preset value are real numbers larger than 0 set by a technician according to experience. In practice, since the variation of the abscissa is often larger than that of the ordinate when the head is swung left, the fifth preset value may be preset to be larger than the sixth preset value.
Step 504, when the difference between the maximum abscissa and the minimum abscissa is greater than the fifth preset value and the difference between the maximum ordinate and the minimum ordinate is less than the sixth preset value, determining that the motion mode corresponding to the motion trajectory is the left head-swing mode.
It should be noted that the order of the two checks may also be reversed or they may be performed simultaneously: one may first detect whether the difference between the maximum and minimum abscissas is greater than the fifth preset value and whether the difference between the maximum and minimum ordinates is smaller than the sixth preset value, and then judge whether the motion trajectory is right-to-left. When both are satisfied, the motion mode corresponding to the motion trajectory is determined to be the left head-swing mode.
Further, on the basis of the embodiment of fig. 2, the embodiment of the present application explains step 103 "determining a motion mode corresponding to a motion trajectory" in fig. 1 in detail. As shown in fig. 6, the method includes:
step 601, judging whether the motion track is a track from left to right.
In a specific embodiment of this step, a chronological order corresponding to each key point is determined according to the time point of each key point from step 202, and the position coordinates of the key points constituting the motion trajectory are denoted in sequence as (a1, b1), (a2, b2), (a3, b3), …, (an, bn), where n indicates the n-th position coordinate in the trajectory. Whether the abscissas of a preset number of these position coordinates increase over time is then judged; if so, the motion trajectory is determined to be a left-to-right trajectory and step 602 is executed. If not, the motion trajectory is determined not to be a left-to-right trajectory and step 602 is not executed.
Specifically, whether the motion trajectory is left-to-right can be judged by checking whether the abscissas of all the position coordinates increase over time, for example whether the key points constituting the trajectory satisfy an > … > a3 > a2 > a1. If so, the motion trajectory is determined to be a left-to-right trajectory and step 602 is executed; if not, step 602 is not executed.
Step 602, if yes, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion trajectory.
Step 603, detecting whether the difference between the maximum abscissa and the minimum abscissa is larger than a seventh preset value, and whether the difference between the maximum ordinate and the minimum ordinate is smaller than an eighth preset value.
Step 604, when the difference between the maximum abscissa and the minimum abscissa is greater than the seventh preset value and the difference between the maximum ordinate and the minimum ordinate is less than the eighth preset value, determining that the motion mode corresponding to the motion trajectory is the right head-swing mode.
It should be noted that the order of the two checks may also be reversed or they may be performed simultaneously: one may first detect whether the difference between the maximum and minimum abscissas is greater than the seventh preset value and whether the difference between the maximum and minimum ordinates is smaller than the eighth preset value, and then judge whether the motion trajectory is left-to-right. When both are satisfied, the motion mode corresponding to the motion trajectory is determined to be the right head-swing mode.
In addition, the method further includes a head left-side semicircle mode. When the motion trajectory satisfies the following two conditions, the motion mode corresponding to the trajectory is determined to be the head left-side semicircle mode. (1) The position coordinates of the first and last key points of the trajectory are determined, the difference between their abscissas is smaller than a first value, and the difference between their ordinates is larger than a second value. (2) The maximum displacement of the trajectory in the left horizontal direction is larger than half of its maximum vertical displacement, in which case the shape of the trajectory is determined to be a left semicircle.
Specifically, the position coordinates of the key points constituting the motion trajectory are denoted in sequence as (a1, b1), (a2, b2), (a3, b3), …, (an, bn). When an - a1 < P, bn - b1 > N, and (a1 + an)/2 - MIN(a1, a2, a3, …, an) > (bn - b1)/2, the motion mode corresponding to the motion trajectory is determined to be the head left-side semicircle mode, where P denotes the first value and N denotes the second value.
It should be noted that when the position coordinates of the key points constituting the motion trajectory satisfy (a1 + an)/2 - MIN(a1, a2, a3, …, an) > (bn - b1)/2, the maximum displacement of the trajectory in the left horizontal direction is determined to be greater than half of its maximum vertical displacement.
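The two left-semicircle conditions translate directly into code; P and N stand for the first and second values, and the defaults are illustrative:

    def is_left_semicircle(trajectory, P=20, N=60):
        """Left-semicircle test: the first and last key points nearly
        share an abscissa (an - a1 < P) but differ strongly in ordinate
        (bn - b1 > N), and the leftward horizontal excursion exceeds
        half of the vertical span."""
        a1, b1 = trajectory[0]
        an, bn = trajectory[-1]
        xs = [a for a, _ in trajectory]
        return (an - a1 < P
                and bn - b1 > N
                and (a1 + an) / 2 - min(xs) > (bn - b1) / 2)

The right-semicircle test described next differs only in measuring the excursion as MAX(a1, a2, a3, …, an) - (a1 + an)/2.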
The method further includes a head right-side semicircle mode. When the motion trajectory satisfies the following two conditions, the motion mode corresponding to the trajectory is determined to be the head right-side semicircle mode. (1) The coordinates of the first and last key points of the trajectory are determined, the difference between their abscissas is smaller than the first value, and the difference between their ordinates is larger than the second value. (2) The maximum displacement of the trajectory in the right horizontal direction is larger than half of its maximum vertical displacement, in which case the shape of the trajectory is determined to be a right semicircle.
Specifically, the position coordinates of the key points constituting the motion trajectory are denoted in sequence as (a1, b1), (a2, b2), (a3, b3), …, (an, bn). When an - a1 < P, bn - b1 > N, and MAX(a1, a2, a3, …, an) - (a1 + an)/2 > (bn - b1)/2, the motion mode corresponding to the motion trajectory is determined to be the head right-side semicircle mode.
It should be noted that when the coordinates of the key points constituting the motion trajectory satisfy MAX(a1, a2, a3, …, an) - (a1 + an)/2 > (bn - b1)/2, the maximum displacement of the trajectory in the right horizontal direction is determined to be greater than half of its maximum vertical displacement.
In addition, the method further includes a head-forward motion mode. When identifying this mode, the key points on the face image are the left eye and the right eye, and the position coordinates corresponding to each are obtained. After the position coordinates of the left and right eyes in every face image are obtained, a first distance between the eyes in the first face image, a second distance between the eyes in the last face image, and the maximum vertical offset of the head are computed. When the second distance is greater than the first distance, the difference between them is greater than a third value, and the maximum offset is less than a fourth value, the motion mode of the motion trajectory is determined to be the head-forward motion mode.
The maximum vertical offset of the head can be represented by the maximum vertical offset of either the left eye or the right eye.
Specifically, the fourth value in the present application may be the same as or different from the first value. Taking the case where the fourth value equals the first value as an example: each frame of the head-motion video is read in turn and converted into an image, yielding the plurality of face images. Image recognition removes the background of each face image, keeping only the face information; face recognition then locates the facial features and keeps the two eye regions, yielding a series of coordinates for both eyes. The left-eye coordinates are, in order, [(aL1, bL1), (aL2, bL2), (aL3, bL3), …, (aLn, bLn)], and the right-eye coordinates are, in order, [(aR1, bR1), (aR2, bR2), (aR3, bR3), …, (aRn, bRn)]. When (aRn - aLn) - (aR1 - aL1) > K and MAX(bL1, bL2, bL3, …, bLn) - MIN(bL1, bL2, bL3, …, bLn) < P, the motion mode of the motion trajectory is determined to be the head-forward motion mode, where K denotes the third value and P denotes the fourth value.
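A sketch of the head-forward test just described; K and P stand for the third and fourth values, and the defaults are illustrative:

    def is_head_forward(left_eye, right_eye, K=10, P=20):
        """Head-forward test: the interocular distance grows by more
        than K between the first and last face images, while the
        vertical drift of the left eye (standing in for the head)
        stays below P. left_eye and right_eye are the time-ordered
        coordinate lists [(aL1, bL1), ...] and [(aR1, bR1), ...]."""
        first_distance = right_eye[0][0] - left_eye[0][0]     # aR1 - aL1
        second_distance = right_eye[-1][0] - left_eye[-1][0]  # aRn - aLn
        vertical = [b for _, b in left_eye]
        return (second_distance - first_distance > K
                and max(vertical) - min(vertical) < P)

The head-backward test described next simply swaps the roles of the first and second distances.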
In addition, the method further includes a head-backward motion mode. When identifying this mode, the key points on the face image are likewise the left eye and the right eye, and the coordinates corresponding to each are obtained. After the coordinates of the left and right eyes in every face image are obtained, the first distance between the eyes in the first face image, the second distance between the eyes in the last face image, and the maximum vertical offset of the head are computed. When the second distance is smaller than the first distance, the difference between them is larger than the third value, and the maximum offset is smaller than the fourth value, the motion mode of the motion trajectory is determined to be the head-backward motion mode.
As before, the maximum vertical offset of the head can be represented by the maximum vertical offset of either the left eye or the right eye.
Specifically, the fourth value may again be the same as or different from the first value. Taking the case where they are equal as an example: each frame of the head-motion video is read in turn and converted into an image, yielding the plurality of face images. Image recognition removes the background of each face image, keeping only the face information; face recognition then locates the facial features and keeps the two eye regions, yielding the left-eye coordinates [(aL1, bL1), (aL2, bL2), (aL3, bL3), …, (aLn, bLn)] and the right-eye coordinates [(aR1, bR1), (aR2, bR2), (aR3, bR3), …, (aRn, bRn)]. When (aR1 - aL1) - (aRn - aLn) > K and MAX(bL1, bL2, bL3, …, bLn) - MIN(bL1, bL2, bL3, …, bLn) < P, the motion mode of the motion trajectory is determined to be the head-backward motion mode, where K denotes the third value and P denotes the fourth value.
Further, on the basis of the embodiment of fig. 1, the embodiment of the present application explains step 104 "control operation of the terminal according to the movement mode" in fig. 1 in detail. As shown in fig. 7, the method includes:
step 701, determining a control instruction for controlling the terminal according to the motion mode and the application program currently in use.
In a specific embodiment of this step, in order to enrich the usage modes of the motion modes, this step may be combined with an application program currently used by the user, so that different motion modes correspond to different control instructions in different application programs.
Specifically, when the application currently in use is novel-reading software, if the motion mode is the left head-swing mode, the control instruction is an instruction to turn from the current page to the next page; if it is the right head-swing mode, the control instruction is an instruction to turn from the current page to the previous page. When the application currently in use is information software, if the motion mode is the head-up mode, the control instruction is an instruction to slide the page up; if it is the head-down mode, the control instruction is an instruction to slide the page down. When the application currently in use is music software, if the motion mode is the left head-swing mode, the control instruction is an instruction to turn the volume down; if it is the right head-swing mode, the control instruction is an instruction to turn the volume up. When the application currently in use is news software, if the motion mode is the head-up mode, the control instruction is an instruction to decrease the font size; if it is the head-down mode, the control instruction is an instruction to increase the font size.
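The per-application mapping of step 701 can be kept in a simple lookup table mirroring the examples above; every application and mode identifier here is an illustrative name, not something fixed by the text:

    # (application, motion mode) -> control instruction; all names
    # below are illustrative stand-ins for the examples in the text.
    CONTROL_TABLE = {
        ("novel_reader", "left_head_swing"):  "next_page",
        ("novel_reader", "right_head_swing"): "previous_page",
        ("information",  "head_up"):          "scroll_page_up",
        ("information",  "head_down"):        "scroll_page_down",
        ("music",        "left_head_swing"):  "volume_down",
        ("music",        "right_head_swing"): "volume_up",
        ("news",         "head_up"):          "decrease_font_size",
        ("news",         "head_down"):        "increase_font_size",
    }


    def control_instruction(app, mode):
        """Step 701: look up the control instruction for the current
        application and the recognized motion mode; None means that no
        control operation is performed."""
        return CONTROL_TABLE.get((app, mode))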
Step 702, performing a control operation on the terminal based on the control instruction.
In the application, whether a controlled object exists on a display page in an application currently in use can be further detected, and when the controlled object exists on the display page, the controlled object is controlled and operated based on a control instruction. When the controlled object does not exist on the display page, the control operation is not performed.
Further, as an implementation of the method embodiments shown in fig. 1 to 7, the embodiment of the present application provides an apparatus for controlling a terminal based on a motion trajectory of a head, where the apparatus may perform a control operation on the terminal based on the motion trajectory of the head. The embodiment of the apparatus corresponds to the foregoing method embodiment, and details in the foregoing method embodiment are not described in detail again in this embodiment for convenience of reading, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents in the foregoing method embodiment. As shown in fig. 8 in detail, the apparatus includes:
the acquiring unit 801 is configured to acquire a plurality of face images captured at consecutive time points, where the plurality of face images correspond to the same user;
a first determining unit 802, configured to determine a motion trajectory of the head according to the multiple face images acquired by the acquiring unit 801;
a second determining unit 803, configured to determine a motion mode corresponding to the motion trajectory determined by the first determining unit 802;
a control unit 804, configured to perform a control operation on the terminal according to the motion mode determined by the second determining unit 803.
Further, as shown in fig. 9, the first determining unit 802 includes:
the recognition module 8021 is configured to recognize each face image, determine a key point in each face image, and obtain a position coordinate of each key point on the corresponding face image;
the first determining module 8022 is configured to determine, according to the position coordinate and the time point corresponding to each face image, a motion trajectory of the key point moving along with time.
Further, as shown in fig. 9, the second determining unit 803 includes:
a first judging module 8031, configured to judge whether the motion trajectory is a bottom-to-top trajectory;
a first determination result module 8032, configured to determine, if the result of the first judging module 8031 is yes, a maximum abscissa, a maximum ordinate, a minimum abscissa, and a minimum ordinate in the motion trajectory, and when the difference between the maximum abscissa and the minimum abscissa is smaller than the first preset value and the difference between the maximum ordinate and the minimum ordinate is greater than the second preset value, determine that the motion mode corresponding to the motion trajectory is the head-up mode.
Further, as shown in fig. 9, the second determining unit 803 includes:
a second judging module 8033, configured to judge whether the motion trajectory is a top-to-bottom trajectory;
a second determination result module 8034, configured to determine, if the result of the second judging module 8033 is yes, a maximum abscissa, a maximum ordinate, a minimum abscissa, and a minimum ordinate in the motion trajectory, and when the difference between the maximum abscissa and the minimum abscissa is smaller than the third preset value and the difference between the maximum ordinate and the minimum ordinate is larger than the fourth preset value, determine that the motion mode corresponding to the motion trajectory is the head-down mode.
Further, as shown in fig. 9, the second determining unit 803 includes:
a third judging module 8035, configured to judge whether the motion trajectory is a right-to-left trajectory;
a third determination result module 8036, configured to determine, if the result of the third judging module 8035 is yes, a maximum abscissa, a maximum ordinate, a minimum abscissa, and a minimum ordinate in the motion trajectory, and when the difference between the maximum abscissa and the minimum abscissa is greater than the fifth preset value and the difference between the maximum ordinate and the minimum ordinate is smaller than the sixth preset value, determine that the motion mode corresponding to the motion trajectory is the left head-swing mode.
Further, as shown in fig. 9, the second determining unit 803 includes:
a fourth judging module 8037, configured to judge whether the motion trajectory is a left-to-right trajectory;
a fourth determination result module 8038, configured to determine, if the result of the fourth judging module 8037 is yes, a maximum abscissa, a maximum ordinate, a minimum abscissa, and a minimum ordinate in the motion trajectory, and when the difference between the maximum abscissa and the minimum abscissa is greater than the seventh preset value and the difference between the maximum ordinate and the minimum ordinate is smaller than the eighth preset value, determine that the motion mode corresponding to the motion trajectory is the right head-swing mode.
Further, as shown in fig. 9, the control unit 804 includes:
a second determining module 8041, configured to determine a control instruction for controlling the terminal according to the motion mode and the application program currently being used;
a control module 8042, configured to perform a control operation on the terminal based on the control instruction determined by the second determining module 8041.
Further, an embodiment of the present application provides an electronic device. The electronic device includes at least one processor, and at least one memory and a bus connected to the processor; the processor and the memory communicate with each other through the bus; and the processor is configured to call program instructions in the memory to perform the method for controlling a terminal based on the head motion trajectory described above with reference to figs. 1 to 7.
Further, an embodiment of the present application also provides a storage medium for storing a computer program, where the computer program when running controls a device on which the storage medium is located to execute the method for controlling a terminal based on a head movement trajectory described in fig. 1 to 7 above.
Fig. 10 is a block diagram of an apparatus 100 provided in an embodiment of the present application. The apparatus 100 comprises at least one processor 1001, and at least one memory 1002, a bus 1003 coupled to the processor 1001; the processor 1001 and the memory 1002 communicate with each other via a bus 1003. The processor 1001 is used to call program instructions in the memory 1002 to perform the method of controlling the terminal based on the head movement trajectory described above. The device in the present disclosure may be a server (e.g., a local server or a cloud server), a smart phone, a tablet computer, a PDA, a portable computer, or a fixed terminal such as a desktop computer.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the method and apparatus described above are referred to one another. In addition, "first", "second", and the like in the above embodiments are for distinguishing the embodiments, and do not represent merits of the embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system is apparent from the description above. In addition, this application is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best mode of use of the present application.
In addition, the memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory such as Read-Only Memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for controlling a terminal based on a motion trail of a head, the method comprising:
acquiring a plurality of face images captured at consecutive time points, wherein the plurality of face images correspond to the same user;
determining a motion track of the head according to the plurality of face images;
determining a motion mode corresponding to the motion track;
and controlling and operating the terminal according to the motion mode.
2. The method according to claim 1, wherein the determining a motion track of the head according to the plurality of face images comprises:
identifying each face image, determining key points in each face image, and obtaining the position coordinates of each key point on the corresponding face image;
and determining the motion track along which the key points move over time according to the position coordinates and the time point corresponding to each face image.
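A minimal sketch of claim 2, assuming a hypothetical detect_keypoint() landmark detector; the patent does not name a face-landmark library, so any detector that returns an (x, y) pixel coordinate for a chosen key point (e.g. the nose tip) would fit.

    from typing import List, Tuple

    Point = Tuple[float, float]    # (abscissa, ordinate) on the face image
    Sample = Tuple[float, Point]   # (time point, key-point position)

    def detect_keypoint(image) -> Point:
        # Hypothetical stand-in for a face landmark detector.
        raise NotImplementedError

    def determine_motion_track(images: List, timestamps: List[float]) -> List[Sample]:
        # One (time, position) sample per consecutively captured face image;
        # the resulting sequence is the key point's motion track over time.
        return [(t, detect_keypoint(img)) for t, img in zip(timestamps, images)]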
3. The method of claim 1, wherein the determining the motion pattern corresponding to the motion trajectory comprises:
judging whether the motion track is a track from bottom to top;
if so, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion track, and determining that the motion mode corresponding to the motion track is a head-up mode when the difference between the maximum abscissa and the minimum abscissa is smaller than a first preset value and the difference between the maximum ordinate and the minimum ordinate is larger than a second preset value.
4. The method of claim 1, wherein the determining the motion pattern corresponding to the motion trajectory comprises:
judging whether the motion track is a track from top to bottom;
if so, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion track, and determining that the motion mode corresponding to the motion track is a head-down mode when the difference between the maximum abscissa and the minimum abscissa is smaller than a third preset value and the difference between the maximum ordinate and the minimum ordinate is larger than a fourth preset value.
5. The method of claim 1, wherein the determining the motion pattern corresponding to the motion trajectory comprises:
judging whether the motion track is a track from right to left;
if so, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion track, and determining that the motion mode corresponding to the motion track is a left-swing mode when the difference between the maximum abscissa and the minimum abscissa is larger than a fifth preset value and the difference between the maximum ordinate and the minimum ordinate is smaller than a sixth preset value.
6. The method of claim 1, wherein the determining the motion pattern corresponding to the motion trajectory comprises:
judging whether the motion track is a track from left to right;
if so, determining a maximum abscissa, a maximum ordinate, a minimum abscissa and a minimum ordinate in the motion track, and determining that the motion mode corresponding to the motion track is a right-swing mode when the difference between the maximum abscissa and the minimum abscissa is larger than a seventh preset value and the difference between the maximum ordinate and the minimum ordinate is smaller than an eighth preset value.
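Claims 3 through 6 share one decision rule: compare the horizontal and vertical extents of the track against preset values, then read the direction of travel. The sketch below folds the four cases into one function; the threshold values, the use of a single pair of thresholds instead of the per-mode first through eighth preset values, and the image-coordinate convention (ordinate growing downward) are all assumptions of this sketch.

    from typing import List, Optional, Tuple

    def determine_motion_mode(track: List[Tuple[float, float]],
                              x_limit: float = 20.0,  # assumed preset value, abscissa span
                              y_limit: float = 40.0   # assumed preset value, ordinate span
                              ) -> Optional[str]:
        if len(track) < 2:
            return None
        xs = [p[0] for p in track]
        ys = [p[1] for p in track]
        x_span = max(xs) - min(xs)   # maximum abscissa minus minimum abscissa
        y_span = max(ys) - min(ys)   # maximum ordinate minus minimum ordinate

        # Claims 3 and 4: a narrow, tall track is a vertical head motion.
        if x_span < x_limit and y_span > y_limit:
            # With the ordinate growing downward, a shrinking ordinate means
            # the head moved bottom-to-top: the head-up mode.
            return "head_up" if ys[-1] < ys[0] else "head_down"

        # Claims 5 and 6: a wide, flat track is a horizontal head swing.
        if x_span > x_limit and y_span < y_limit:
            # A shrinking abscissa means the head moved right-to-left.
            return "swing_left" if xs[-1] < xs[0] else "swing_right"

        return None   # no recognised motion mode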
7. The method according to claim 1, wherein performing a control operation on the terminal according to the motion mode comprises:
determining a control instruction for controlling the terminal according to the motion mode and the application program currently used;
and performing a control operation on the terminal based on the control instruction.
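A minimal sketch of claim 7: the same head motion maps to different instructions depending on which application is in the foreground. Every application name and instruction string below is an illustrative assumption, not something named in the patent.

    from typing import Optional

    # Illustrative per-application instruction table (assumed entries).
    CONTROL_TABLE = {
        ("reader",  "head_up"):     "scroll_up",
        ("reader",  "head_down"):   "scroll_down",
        ("gallery", "swing_left"):  "previous_image",
        ("gallery", "swing_right"): "next_image",
    }

    def control_instruction(current_app: str, mode: str) -> Optional[str]:
        # Determine the control instruction from the motion mode and the
        # application program currently in use.
        return CONTROL_TABLE.get((current_app, mode))

    # Example: control_instruction("reader", "head_up") -> "scroll_up"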
8. An apparatus for controlling a terminal based on a motion trajectory of a head, the apparatus comprising:
an acquisition unit, configured to acquire a plurality of face images captured at consecutive time points, wherein the plurality of face images correspond to the same user;
a first determining unit, configured to determine the motion track of the head according to the plurality of face images acquired by the acquisition unit;
a second determining unit, configured to determine a motion mode corresponding to the motion track determined by the first determining unit;
and a control unit, configured to perform a control operation on the terminal according to the motion mode determined by the second determining unit.
9. An electronic device, comprising at least one processor, at least one memory connected to the processor, and a bus, wherein the processor and the memory communicate with each other through the bus, and the processor is configured to call program instructions in the memory to perform the method for controlling a terminal based on a motion trail of a head according to any one of claims 1 to 7.
10. A storage medium, configured to store a computer program, wherein, when executed, the computer program controls a device in which the storage medium is located to perform the method for controlling a terminal based on a motion trail of a head according to any one of claims 1 to 7.
CN202211118626.3A 2022-09-15 2022-09-15 Method and device for controlling terminal based on motion trail of head Pending CN115185381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211118626.3A CN115185381A (en) 2022-09-15 2022-09-15 Method and device for controlling terminal based on motion trail of head

Publications (1)

Publication Number Publication Date
CN115185381A 2022-10-14

Family

ID=83524698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211118626.3A Pending CN115185381A (en) 2022-09-15 2022-09-15 Method and device for controlling terminal based on motion trail of head

Country Status (1)

Country Link
CN (1) CN115185381A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104407694A (en) * 2014-10-29 2015-03-11 山东大学 Man-machine interaction method and device combining human face and gesture control
CN107333025A (en) * 2017-06-30 2017-11-07 北京金山安全软件有限公司 Image data processing method and device, electronic equipment and storage medium
CN109224437A (en) * 2018-08-28 2019-01-18 腾讯科技(深圳)有限公司 The exchange method and terminal and storage medium of a kind of application scenarios
CN113791411A (en) * 2021-09-07 2021-12-14 北京航空航天大学杭州创新研究院 Millimeter wave radar gesture recognition method and device based on trajectory judgment
CN114022514A (en) * 2021-11-02 2022-02-08 辽宁大学 Real-time sight line inference method integrating head posture and eyeball tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Teng et al.: "Head Behavior Recognition Based on Motion Trajectory Analysis", Computer Engineering *

Similar Documents

Publication Publication Date Title
CN109948542B (en) Gesture recognition method and device, electronic equipment and storage medium
CN106293074B (en) Emotion recognition method and mobile terminal
US10373359B2 (en) Method and device for erasing a writing path on an infrared electronic white board, and a system for writing on an infrared electronic white board
CN107102723B (en) Methods, apparatuses, devices, and non-transitory computer-readable media for gesture-based mobile interaction
EP3279866A1 (en) Method and apparatus for generating synthetic picture
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN110852257B (en) Method and device for detecting key points of human face and storage medium
WO2014127697A1 (en) Method and terminal for triggering application programs and application program functions
CN107273032A (en) Information composition method, device, equipment and computer-readable storage medium
CN114365075B (en) Method for selecting a graphical object and corresponding device
CN110796701B (en) Identification method, device and equipment of mark points and storage medium
CN113313083B (en) Text detection method and device
CN107450717B (en) Information processing method and wearable device
CN114391132A (en) Electronic equipment and screen capturing method thereof
CN110909596B (en) Side face recognition method, device, equipment and storage medium
CN108369486A (en) General inking is supported
CN115185381A (en) Method and device for controlling terminal based on motion trail of head
CN110069126B (en) Virtual object control method and device
CN111625297A (en) Application program display method, terminal and computer readable storage medium
CN107544743B (en) Method and device for adjusting characters and electronic equipment
JP6924544B2 (en) Cartoon data display system, method and program
CN113850238B (en) Document detection method and device, electronic equipment and storage medium
CN115661927A (en) Sign language recognition method and device, electronic equipment and storage medium
CN107977147A (en) Sliding trace display methods and device
CN108989681A (en) Panorama image generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221014