CN111382691A - Screen content page turning method and mobile terminal - Google Patents


Info

Publication number
CN111382691A
CN111382691A (application number CN202010148558.XA)
Authority
CN
China
Prior art keywords
user
page turning
screen
wrist
eyeball
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010148558.XA
Other languages
Chinese (zh)
Inventor
刘天象
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhenshi Information Technology Shanghai Co ltd
Original Assignee
Zhenshi Information Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhenshi Information Technology Shanghai Co ltd
Priority to CN202010148558.XA
Publication of CN111382691A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G04 HOROLOGY
    • G04G ELECTRONIC TIME-PIECES
    • G04G 21/00 Input or output devices integrated in time-pieces
    • G04G 21/02 Detectors of external physical values, e.g. temperature
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The method comprises: acquiring a current user environment image in front of the screen of the mobile terminal and wrist action behavior data of the user using the mobile terminal; performing user eyeball recognition and calculation on the current user environment image to obtain an eyeball position recognition result of the user, and performing action recognition on the wrist action behavior data to obtain the wrist action of the user; and, when the eyeball position recognition result of the user matches the wrist action, executing the corresponding page turning operation on the content displayed on the screen. In other words, whether a page turning operation is executed, and which operation, is decided by recognizing the wrist action of the user in combination with the eyeball position recognition result. The corresponding page turning operation is thus executed on the content displayed on the screen of the mobile terminal more accurately in a variety of use environments, misjudgment of page turning operations is avoided, and the accuracy and practicability of page turning operation recognition are improved, thereby improving the user experience.

Description

Screen content page turning method and mobile terminal
Technical Field
The application relates to the field of computers, and in particular to a screen content page turning method and a mobile terminal.
Background
The method of the present application is suitable for smart watches and similar small mobile terminals that display content on a screen. With the development of science and technology, smart watches have become increasingly popular. A smart watch usually has a display screen on which content is displayed, and page turning of the screen content is controlled through keys on the watch or through the touch screen. On the one hand, since the smart watch is worn on one hand, it generally has to be operated with the other hand; on the other hand, the display screen of a smart watch is usually small and the watch is often used in a mobile environment, so controlling page turning through touch-slide gestures is not very convenient for the user.
Methods for controlling page turning based on other sensors have therefore been proposed. In one existing technical scheme, gravity acceleration values along three axes of the watch are obtained through a gravity acceleration sensor, and a downward or upward wrist-turning action is recognized from calculations on these values to trigger page turning. However, this scheme is prone to erroneous operation: a smart watch worn on the wrist is often used while the wearer is moving, so the watch shakes and the gravity acceleration changes, and judging the action from the gravity acceleration value alone easily produces misjudgments, executing a page turning operation when the user does not actually want to turn the page. Such a method can therefore be recognized accurately only under static conditions, which is not practical for users in actual use.
Disclosure of Invention
An object of the present application is to provide a screen content page turning method and a mobile terminal, so as to solve the problem in the prior art that the page turning action on the content displayed by a mobile terminal screen is not accurately recognized and is prone to misjudgment.
According to an aspect of the present application, there is provided a method of turning a page of screen contents, including:
acquiring a current user environment image in front of a screen of a mobile terminal and wrist action behavior data of a user using the mobile terminal;
carrying out user eyeball identification and calculation on the current user environment image to obtain an eyeball position identification result of the user, and carrying out action identification on the wrist action behavior data to obtain the wrist action of the user;
and when the eyeball position recognition result of the user is matched with the wrist action, performing corresponding page turning operation on the content displayed on the screen.
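As a minimal sketch of the three claimed steps, assuming simple string labels for the recognition results (the two stubs below are hypothetical stand-ins for the camera- and sensor-based recognizers, which the claims leave unspecified):

```python
def recognize_eye_region(environment_image: dict) -> str:
    # Stub: a real implementation would locate the focal point of the
    # user's eyeballs on the screen from the front-camera image.
    return environment_image["gaze_region"]

def recognize_wrist_action(wrist_data: dict) -> str:
    # Stub: a real implementation would classify the motion-sensor trace.
    return wrist_data["action"]

def page_turn_method(environment_image: dict, wrist_data: dict):
    region = recognize_eye_region(environment_image)  # eyeball recognition
    action = recognize_wrist_action(wrist_data)       # wrist action recognition
    # Execute a page turn only when the two recognition results match.
    if (region, action) in {("up", "rotate_up"), ("down", "rotate_down")}:
        return f"page_{region}"
    return None  # mismatch: no page turning operation is executed
```

The key point of the claim is the final conjunction: neither the gaze result nor the wrist action alone triggers a page turn.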
Further, in the method for turning pages of screen content, identifying and calculating eyeballs of the user on the current user environment image to obtain an eyeball position identification result of the user, and performing motion identification on the wrist motion behavior data to obtain the wrist motion of the user includes:
identifying and calculating eyeballs of the user on the current user environment image to obtain the focal position of the focal point of the eyeballs of the user on the screen;
judging whether the focus position is in a preset page turning area of the screen;
if yes, determining an eyeball position identification result for indicating that the focus position is in the preset page turning area, and starting accumulating the focusing time of the page turning area;
and when the focusing time of the page turning area is greater than or equal to a preset page turning time threshold, entering a wrist action recognition state, and performing action recognition on the wrist action behavior data to obtain the wrist action of the user.
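The dwell-time rule above can be sketched as a small accumulator; the threshold value and the class interface are illustrative assumptions, not values from the patent:

```python
class GazeDwellTimer:
    """Accumulates the time the gaze focus stays inside the preset
    page-turning region; when the accumulated time reaches the page-turn
    threshold, the wrist action recognition state is entered."""

    def __init__(self, threshold_s: float = 1.0):  # assumed threshold
        self.threshold_s = threshold_s
        self.focus_time = 0.0       # accumulated focusing time
        self.wrist_state = False    # wrist action recognition state flag

    def update(self, in_region: bool, dt: float) -> bool:
        if in_region:
            self.focus_time += dt
            if self.focus_time >= self.threshold_s:
                self.wrist_state = True   # enter wrist action recognition
        else:
            # Focus left the region before the threshold was reached:
            # stop accumulating and clear the focusing time.
            self.focus_time = 0.0
        return self.wrist_state
```

Called once per camera frame with the elapsed time `dt`, this implements both the threshold rule here and the stop-and-clear rule described later.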
Further, in the method for turning pages of screen content, the preset page turning region includes a preset page up turning region and a preset page down turning region, where when the eyeball position recognition result of the user matches the wrist action, a corresponding page turning operation is performed on the content displayed on the screen, including:
when the eyeball position recognition result of the user indicates that the focus position is in a preset page-up area of the screen and the wrist action indicates that the wrist rotates upwards, executing corresponding page-up operation on the content displayed on the screen;
and when the eyeball position recognition result of the user indicates that the focus position is in a preset page turning down area of the screen and the wrist action indicates that the wrist rotates downwards, executing corresponding page turning down operation on the content displayed on the screen.
Further, the method for turning pages of screen content further includes:
and when the accumulated page turning area focusing time is smaller than the preset page turning time threshold and the focus of the user eyeballs is separated from the preset page turning area, stopping accumulating and clearing the page turning area focusing time, and returning to the step of carrying out user eyeball identification and calculation on the current user environment image.
Further, the method for turning pages of screen content further includes:
in the wrist action recognition state, continuing to perform user eyeball identification and calculation on the current user environment image;
if the focus position of the eyeball of the user, which is continuously identified, on the screen is separated from the preset page turning area, accumulating the separation time of the page turning area;
and when the page turning area separation time is greater than or equal to a preset separation time threshold value, returning to the step of carrying out user eyeball identification and calculation on the current user environment image.
Further, the method for turning pages of screen content further includes:
and when the accumulated page turning area disengagement time is smaller than the preset disengagement time threshold value and the focus position of the focus of the eyeball of the user on the screen is identified to return to the preset page turning area, continuing to maintain the wrist action identification state, and simultaneously stopping accumulation and clearing the page turning area disengagement time.
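The disengagement rule in the two passages above can be sketched the same way; the leave threshold is an assumed illustrative value:

```python
class DisengagementTimer:
    """While in the wrist-action recognition state, the gaze is still
    tracked. If the focus leaves the preset region, leave time accumulates;
    at or above the leave threshold, the method drops back to plain
    eyeball recognition. Returning to the region before the threshold
    clears the leave time and keeps the wrist state."""

    def __init__(self, leave_threshold_s: float = 0.8):  # assumed value
        self.leave_threshold_s = leave_threshold_s
        self.leave_time = 0.0
        self.wrist_state = True  # the wrist state was already entered

    def update(self, in_region: bool, dt: float) -> bool:
        if not in_region:
            self.leave_time += dt
            if self.leave_time >= self.leave_threshold_s:
                self.wrist_state = False  # return to eyeball recognition
        else:
            self.leave_time = 0.0         # back in region: keep wrist state
        return self.wrist_state
```

This tolerates brief glances away from the page-turning region without abandoning the pending wrist recognition.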
Further, in the method for turning pages of screen content, the acquiring wrist action behavior data of the user using the mobile terminal includes:
acquiring wrist action behavior data of a user using the mobile terminal through an action acquisition sensor;
the performing motion recognition on the wrist motion behavior data to obtain the wrist motion of the user includes:
and performing gravity acceleration calculation and motion recognition on the wrist motion behavior data to obtain the wrist motion of the user.
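One plausible, simplified way to obtain a wrist action from gravity acceleration values, assuming a single watch axis and an arbitrary threshold (the patent does not specify the calculation):

```python
def classify_wrist_action(gz_before: float, gz_after: float,
                          threshold: float = 3.0) -> str:
    """gz_* are gravity-acceleration components (m/s^2) along one assumed
    watch axis, sampled before and after the candidate motion; the 3.0
    threshold is an illustrative value chosen to reject small shakes."""
    delta = gz_after - gz_before
    if delta >= threshold:
        return "rotate_up"
    if delta <= -threshold:
        return "rotate_down"
    return "none"  # change below threshold: treated as shaking, ignored
```

In the claimed method this classifier only runs once the gaze dwell has put the terminal into the wrist action recognition state, which is what suppresses the misjudgments of the gravity-only prior art.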
According to another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, which, when executed by a processor, cause the processor to implement the method of any one of the above.
According to another aspect of the present application, there is also provided a mobile terminal for turning pages of screen contents, the mobile terminal including:
one or more processors;
a computer-readable medium for storing one or more computer-readable instructions,
when executed by the one or more processors, cause the one or more processors to implement a method as in any one of the above.
Compared with the prior art, the present application acquires the current user environment image in front of the screen of the mobile terminal and the wrist action behavior data of the user using the mobile terminal; performs user eyeball recognition and calculation on the current user environment image to obtain the eyeball position recognition result of the user, and performs action recognition on the wrist action behavior data to obtain the wrist action of the user; and, when the eyeball position recognition result matches the wrist action, executes the corresponding page turning operation on the content displayed on the screen. That is, whether a page turning operation is executed, and which operation, is decided by recognizing the wrist action of the user in combination with the eyeball position recognition result, so that the corresponding page turning operation is executed on the content displayed on the screen more accurately in a variety of use environments, misjudgment of page turning operations is avoided, the accuracy and practicability of the page turning operation recognition of the mobile terminal are improved, and the user experience is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method of turning pages of screen content in accordance with an aspect of the subject application;
FIG. 2 illustrates a schematic diagram of a preset page turning region of a method of turning pages of screen content according to an aspect of the present application;
FIG. 3 is a flow diagram illustrating one practical application scenario of a method for turning pages of screen content according to one aspect of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, random access memory (RAM), and/or nonvolatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change RAM (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media does not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
Fig. 1 shows a flow diagram of a method for turning a page of screen content according to an aspect of the present application. The method is applicable to a mobile terminal, including but not limited to a mobile phone, an iPad, a computer, a smart watch, a smart bracelet, an e-book reader, and the like. The method comprises step S11, step S12 and step S13, as follows:
Step S11: acquire the current user environment image in front of the screen of the mobile terminal and the wrist action behavior data of the user using the mobile terminal. Here, the front-facing camera of the mobile terminal acquires in real time the current user environment image in front of the screen (for example, directly in front of it), and the environment image includes the face image of the user. Real-time monitoring of the user is thus achieved, and whether the user intends to perform a page turning operation on the mobile terminal is recognized more quickly, so that the method proceeds quickly to step S12 and the speed of executing the page turning operation is increased.
Step S12: perform user eyeball recognition and calculation on the current user environment image to obtain the eyeball position recognition result of the user, and perform action recognition on the wrist action behavior data to obtain the wrist action of the user.
Step S13: when the eyeball position recognition result of the user matches the wrist action, execute the corresponding page turning operation on the content displayed on the screen.
Through the above steps S11 to S13, the current user environment image in front of the screen of the mobile terminal and the wrist action behavior data of the user are acquired; user eyeball recognition and calculation are performed on the current user environment image to obtain the eyeball position recognition result, and action recognition is performed on the wrist action behavior data to obtain the wrist action of the user; and, when the eyeball position recognition result matches the wrist action, the corresponding page turning operation is executed on the content displayed on the screen. That is, whether the user needs a page turning operation on the displayed content, and which operation, is determined by recognizing the wrist action of the user in combination with the eyeball position recognition result. The corresponding page turning operation is thus executed more accurately in a variety of use environments, the misjudgment that arises in the prior art when the page turning operation is determined from the wrist action alone is avoided, the accuracy and practicability of the page turning operation recognition of the mobile terminal are improved, and the user experience is improved.
In a preferred embodiment of the present application, the screen content page turning method may be applied to a sports bracelet worn on the left wrist of the user. First, the current user environment image P (including the face image of the user) in front of the screen of the sports bracelet and the wrist action behavior data of the user (i.e., wrist action behavior data of the right hand) are acquired. Then, user eyeball recognition and calculation are performed on the current user environment image P to obtain the eyeball position recognition result of the user, and action recognition is performed on the wrist action behavior data to obtain the wrist action of the user. Finally, when the eyeball position recognition result and the wrist action match a page-up, page-down, page-left or page-right operation, the corresponding operation is executed on the content displayed on the screen. The page turning operation to be executed on the displayed content is thus determined by combining the eyeball position recognition result of the user with the wrist action, which further improves the accuracy of the page turning operation and thereby the user experience.
Next to the above embodiment of the present application, in step S12, performing user eyeball identification and calculation on the current user environment image to obtain an eyeball position identification result of the user, and performing motion identification on the wrist motion behavior data to obtain the wrist motion of the user, includes:
and identifying and calculating eyeballs of the user on the current user environment image to obtain the focal position of the focal point of the eyeballs of the user on the screen so as to judge and match the page turning requirement of the user in the following.
Judging whether the focus position is in a preset page turning area of the screen; here, the preset page turning area may be set by a programmer of the mobile terminal, may be set by a user using the mobile terminal according to a use condition of the user, or may be a factory default setting of the mobile terminal; the preset page turning region may be a partial region above or below the screen, or a partial region on the left or right side of the screen, and is not limited to a specific shape or size.
If yes, determining an eyeball position identification result for indicating that the focus position is in the preset page turning area, and starting accumulating the focusing time of the page turning area;
When the accumulated page turning region focusing time is greater than or equal to the preset page turning time threshold, the wrist action recognition state is entered, and action recognition is performed on the wrist action behavior data to obtain the wrist action of the user. That is, the wrist action recognition state is entered only when the eyeball focus position of the user has stayed in the preset page turning region for the preset page turning time threshold, which avoids mistakenly entering wrist action recognition, and thus producing a page turning misjudgment, when the user has no page turning requirement while using the mobile terminal (especially while in motion).
For example, user eyeball recognition and calculation are performed on the current user environment image P to obtain the focal position F of the focal point of the user's eyeballs on the screen. If the focal position F is in the preset page turning region of the screen, an eyeball position recognition result indicating this is determined, and the page turning region focusing time t1 starts to be accumulated; when t1 is greater than or equal to the preset page turning time threshold T1, the wrist action recognition state is entered and action recognition is performed on the wrist action behavior data to obtain the wrist action of the user. This avoids mistakenly entering wrist action recognition, and thus producing a page turning misjudgment, when the user has no page turning requirement (especially while in motion).
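The preset page turning regions used in such examples can be modelled as screen rectangles; the coordinates below are illustrative for an assumed 240x280 watch screen, not values from the patent:

```python
def in_rect(x: float, y: float, rect) -> bool:
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

# Illustrative regions: top and bottom strips of a 240x280 screen.
PAGE_UP_REGION = (0, 0, 240, 56)
PAGE_DOWN_REGION = (0, 224, 240, 280)

def region_of(focus_x: float, focus_y: float):
    """Map the gaze focal position F to a preset page-turning region."""
    if in_rect(focus_x, focus_y, PAGE_UP_REGION):
        return "up"
    if in_rect(focus_x, focus_y, PAGE_DOWN_REGION):
        return "down"
    return None  # focus outside any preset page-turning region
```

As the description notes, the regions could equally be left/right strips or any other shape; only the point-in-region test matters to the method.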
The present application further provides a preferred embodiment, where the preset page turning region includes a preset page up turning region and a preset page down turning region, where when the eyeball position recognition result of the user matches the wrist action, a corresponding page turning operation is performed on the content displayed on the screen, including:
when the eyeball position recognition result of the user indicates that the focus position is in a preset page-up area of the screen and the wrist action indicates that the wrist rotates upwards, executing corresponding page-up operation on the content displayed on the screen;
and when the eyeball position recognition result of the user indicates that the focus position is in a preset page turning down area of the screen and the wrist action indicates that the wrist rotates downwards, executing corresponding page turning down operation on the content displayed on the screen.
Here, the corresponding page turning operation is executed on the content displayed on the screen only when the eyeball position recognition result matches the wrist action. Misjudgment of page turning operations is thereby avoided, the corresponding page turning operation is executed more accurately in different use environments, the accuracy and practicability of the page turning operation recognition of the mobile terminal are improved, and the user experience is improved.
For example, as shown in Fig. 2, the preset page turning regions include a preset page-up region M1 and a preset page-down region M2. When the eyeball position recognition result of the user indicates that the focal position F is in the preset page-up region M1 of the screen and the recognized wrist action is an upward wrist rotation, the corresponding page-up operation is executed on the content displayed on the screen. When the focal position F is in the preset page-up region M1 but some other wrist action is recognized, that action is regarded as invalid: the focal position and the wrist action do not match, so no page turning operation is executed. Misjudgment of page turning operations is thereby avoided, the accuracy and practicability of the page turning operation recognition of the mobile terminal are improved, and the user experience is improved.
For another example, when the eyeball position recognition result of the user indicates that the focal position F is in the preset page-down region M2 of the screen and the recognized wrist action is a downward wrist rotation, the corresponding page-down operation is executed on the content displayed on the screen. When the focal position F is in the preset page-down region M2 but some other wrist action is recognized, that action is regarded as invalid and no page turning operation is executed, avoiding misjudgment and improving the accuracy and practicability of page turning recognition and the user experience.
For another example, the preset page turning regions include a preset page-left region N1 and a preset page-right region N2. When the eyeball position recognition result of the user indicates that the focal position F is in the preset page-left region N1 of the screen and the recognized wrist action is a leftward wrist rotation, the corresponding page-left operation is executed on the content displayed on the screen. When the focal position F is in the preset page-left region N1 but some other wrist action is recognized, that action is regarded as invalid and no page turning operation is executed.
Likewise, when the eyeball position recognition result of the user indicates that the focal position F is in the preset page-right region N2 of the screen and the recognized wrist action is a rightward wrist rotation, the corresponding page-right operation is executed on the content displayed on the screen. When the focal position F is in the preset page-right region N2 but some other wrist action is recognized, that action is regarded as invalid and no page turning operation is executed, avoiding misjudgment and improving the accuracy and practicability of the page turning operation recognition of the mobile terminal and the user experience.
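The four region/action pairs in the examples above can be collected into a single matching table; the label names are illustrative, and any combination not in the table is treated as an invalid action:

```python
# Region/action pairs that trigger a page turn; everything else is invalid.
FOUR_WAY_RULES = {
    ("M1_up", "rotate_up"): "page_up",
    ("M2_down", "rotate_down"): "page_down",
    ("N1_left", "rotate_left"): "page_left",
    ("N2_right", "rotate_right"): "page_right",
}

def four_way_dispatch(region: str, action: str) -> str:
    # An unmatched pair means the wrist action is invalid for the gazed
    # region, so no page turning operation is executed.
    return FOUR_WAY_RULES.get((region, action), "no_op")
```

Keeping the matching rules in a table also makes it straightforward for a manufacturer or user to configure different region shapes or gestures, as the description allows.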
The present application further provides a preferred embodiment, the method further comprising:
when the accumulated page-turning-region focusing time is smaller than the preset page turning time threshold and the focus of the user's eyeball leaves the preset page turning region, stopping accumulation of the page-turning-region focusing time, clearing it, and returning to the step of performing user eyeball recognition and calculation on the current user environment image.
For example, when the focus position of the user's eyeball is in the preset page turning region, the time the focus position stays in that region starts to be accumulated. If the accumulated page-turning-region focusing time t1 is still smaller than the preset page turning time threshold T1 when the focus position of the user's eyeball on the screen leaves the preset page turning region, accumulation of t1 is stopped, all accumulated records of the time spent in the region are cleared, and the flow returns to step S12 to perform user eyeball recognition and calculation on the current user environment image. In other words, before t1 reaches T1 the user may wish to cancel or change the page turning operation, or may simply be in motion without intending to turn a page; stopping and clearing t1 in that case avoids the resource consumption of continued accumulation, exits the recognition of an erroneous page turning behavior as early as possible, and returns the terminal to an environment recognition state that matches the user's current page turning needs.
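The dwell-and-cancel behavior above can be sketched as a small timer. The threshold value is a placeholder, since the patent does not fix a concrete T1:

```python
class FocusDwellTimer:
    """Accumulates how long the gaze focus stays inside a page-turn region.

    The threshold plays the role of T1 in the description; its value here
    is an assumption for illustration.
    """

    def __init__(self, threshold_s=1.0):
        self.threshold_s = threshold_s
        self.start = None  # None means no accumulation (t1) in progress

    def update(self, focus_in_region, now):
        """Feed one gaze sample; return True once the dwell threshold is met."""
        if focus_in_region:
            if self.start is None:
                self.start = now            # begin accumulating t1
            return (now - self.start) >= self.threshold_s
        # Focus left the region before reaching T1: stop and clear t1, so
        # the flow can return to plain environment-image recognition.
        self.start = None
        return False
```

A caller would feed it one sample per camera frame and enter the wrist action recognition state when it returns True.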
Following the above preferred embodiment of the present application, the method further comprises:
when in the wrist action recognition state, continuing to perform user eyeball recognition and calculation on the current user environment image;
if the continuously recognized focus position of the user's eyeball on the screen leaves the preset page turning region, accumulating the page-turning-region departure time;
and when the page-turning-region departure time is greater than or equal to a preset departure time threshold, returning to the step of performing user eyeball recognition and calculation on the current user environment image.
For example, when the mobile terminal is in the wrist action recognition state, user eyeball recognition and calculation continue to be performed on the current user environment image. If the continuously recognized focus position of the user's eyeball on the screen leaves the preset page-up region M1, the terminal starts to accumulate the time the focus spends outside the region, i.e. the page-turning-region departure time t2. When t2 reaches or exceeds the preset departure time threshold T2, the user is taken to have no page-up demand, and the flow returns to step S12 to perform user eyeball recognition and calculation on the current user environment image. This avoids misoperation caused by lingering in the wrist action recognition state, lets the mobile terminal exit an erroneous recognition state as early as possible, and re-adjusts it, by recognizing the current user environment image, to an environment recognition state that matches the user's current page turning needs.
For another example, in the same wrist action recognition state, if the continuously recognized focus position of the user's eyeball on the screen leaves the preset page-down region M2, the terminal likewise starts to accumulate the departure time t2; when t2 reaches or exceeds T2, the user is taken to have no page-down demand, and the flow returns to step S12 in the same way, with the same benefits.
Following the above preferred embodiment of the present application, the method further comprises:
when the accumulated page-turning-region departure time is smaller than the preset departure time threshold and the focus position of the user's eyeball on the screen is recognized to return to the preset page turning region, maintaining the wrist action recognition state while stopping accumulation of the page-turning-region departure time and clearing it.
For example, when the mobile terminal is in the wrist action recognition state, user eyeball recognition and calculation continue to be performed on the current user environment image. If the continuously recognized focus position of the user's eyeball on the screen leaves the preset page-up region M1, the terminal starts to accumulate the page-turning-region departure time t2. If, while t2 is still smaller than the preset departure time threshold T2, the focus position F of the user's eyeball on the screen is recognized to return to the preset page-up region M1, the user is considered to still have a page-up demand: the wrist action recognition state is maintained, and accumulation of t2 is stopped and cleared. This avoids the resource consumption of continuing to accumulate t2, and returns immediately to wrist action recognition so that a page turning operation matching the user's current demand can be matched as soon as possible.
For another example, if in the wrist action recognition state the continuously recognized focus position leaves the preset page-down region M2 and, before t2 reaches T2, the focus position F is recognized to return to the preset page-down region M2, the user is considered to still have a page-down demand; the wrist action recognition state is maintained and t2 is stopped and cleared in the same way, with the same benefits.
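Both departure-time behaviors, exiting the wrist recognition state once T2 is reached and clearing t2 if the focus returns earlier, can be sketched together. The threshold value is again a placeholder for the unspecified T2:

```python
class DepartureTimer:
    """Accumulates how long the gaze focus has been outside the page-turn
    region while in the wrist action recognition state.

    The threshold plays the role of T2 in the description; its value here
    is an assumption for illustration.
    """

    def __init__(self, threshold_s=2.0):
        self.threshold_s = threshold_s
        self.left_at = None  # None means the focus is (back) in the region

    def update(self, focus_in_region, now):
        """Feed one gaze sample; return True when the wrist-recognition
        state should be exited (no page-turn demand)."""
        if focus_in_region:
            self.left_at = None        # focus returned: stop and clear t2
            return False
        if self.left_at is None:
            self.left_at = now         # focus just left: start accumulating t2
        return (now - self.left_at) >= self.threshold_s
```

When `update` returns True the caller would fall back to the initial environment-recognition step; as long as it returns False the wrist action recognition state is kept.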
Following the foregoing embodiment of the present application, acquiring the wrist action behavior data of the user using the mobile terminal in step S11 specifically comprises:
acquiring the wrist action behavior data of the user through a motion acquisition sensor, where the motion acquisition sensor includes, but is not limited to, a gravity sensor, an acceleration sensor, an inertial sensor, and the like.
Performing action recognition on the wrist action behavior data in step S12 to obtain the user's wrist action specifically comprises:
performing gravity acceleration calculation and action recognition on the wrist action behavior data to obtain the user's wrist action.
For example, in a preferred embodiment of the present application, if the motion acquisition sensor consists of a gravity sensor and an acceleration sensor, these sensors collect the gravity acceleration values along the three axes of a sports bracelet, i.e. the wrist action behavior data of the user while using the mobile terminal. A downward or upward wrist rotation is then identified by calculation over these gravity acceleration values, yielding the user's wrist action, so that the user can operate the mobile terminal through wrist motions.
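The patent only states that an upward or downward wrist rotation is identified by calculating over the three-axis gravity acceleration values. One plausible sketch compares the tilt of the gravity vector at the start and end of a short sample window; the axis convention and the 30-degree threshold are assumptions for illustration, not values from the patent:

```python
import math

def classify_wrist_action(samples, angle_threshold_deg=30.0):
    """Classify a wrist roll from a window of 3-axis gravity samples.

    `samples` is a list of (gx, gy, gz) gravity-acceleration readings
    (m/s^2). Axis convention and threshold are illustrative assumptions.
    """
    def pitch_deg(g):
        gx, gy, gz = g
        # Angle of the gravity vector relative to the wrist's resting pose.
        return math.degrees(math.atan2(gx, math.sqrt(gy * gy + gz * gz)))

    delta = pitch_deg(samples[-1]) - pitch_deg(samples[0])
    if delta >= angle_threshold_deg:
        return "rotate_up"
    if delta <= -angle_threshold_deg:
        return "rotate_down"
    return "invalid"   # all other motions are treated as invalid actions
```

A real implementation would also low-pass filter the raw accelerometer signal to separate gravity from hand jitter before classifying.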
As shown in fig. 3, in a practical application scenario of the screen content page turning method, the front camera of the mobile terminal is started, the current user environment image is collected in real time, and the initial state D1 is entered.
The current user environment image acquired by the camera is processed to recognize the user's eyeball, the eyeball's movement is calculated to obtain the focus position F of the eyeball's focus on the screen, and the terminal judges whether F falls in the preset page-up region M1 or the preset page-down region M2 of the screen.
If the eyeball focus falls in the preset page-up region M1 of the screen, counting action G1 starts accumulating the time the focus position F stays in M1 (the page-turning-region focusing time). If the accumulated focusing time exceeds T1, the user is judged to have a page-up demand, and the wrist action recognition state D21 is entered. If the focus position F leaves M1 before the accumulated focusing time t1 reaches T1, accumulation of t1 is stopped, the initial state D1 is exited, and the focusing-time accumulation count of action G1 is cleared.
If the eyeball focus falls in the preset page-down region M2 of the screen, counting action G2 starts accumulating the time the focus position F stays in M2. If the accumulated focusing time exceeds T1, the user is judged to have a page-down demand, and the wrist action recognition state D22 is entered. If the focus position F leaves M2 before t1 reaches T1, accumulation of t1 is stopped, the initial state D1 is exited, and the focusing-time accumulation count of action G2 is cleared.
After entering wrist action recognition state D21 or D22, the mobile terminal's gravity acceleration values along its three axes are acquired and calculated to recognize the wrist action behavior data.
If, in state D21, an upward wrist rotation is recognized, the screen content is turned up; all other wrist motions are treated as invalid actions and produce no page turn. If, in state D22, a downward wrist rotation is recognized, the screen content is turned down; likewise, all other wrist motions are invalid actions and produce no page turn.
Meanwhile, in the wrist action recognition state D21, recognition and calculation of the eyeball focus continue. If the focus position F leaves the preset page-up region M1, counting action G3 starts accumulating the time F spends outside M1 (the page-turning-region departure time). If the accumulated departure time reaches the preset departure time threshold T2, the user is considered to have no page-up demand and the flow returns to the initial state D1; if T2 is not reached and the eyeball focus position F is recognized to return to M1, the wrist action recognition state D21 is maintained and the departure-time accumulation count of action G3 is cleared.
Correspondingly, in the wrist action recognition state D22, recognition and calculation of the eyeball focus likewise continue. If the focus position F leaves the preset page-down region M2, counting action G4 starts accumulating the departure time. If the accumulated departure time reaches T2, the user is considered to have no page-down demand and the flow returns to the initial state D1; if T2 is not reached and F is recognized to return to M2, the wrist action recognition state D22 is maintained and the departure-time accumulation count of action G4 is cleared. In this way, the corresponding page turning operation can be executed accurately on the content displayed on the mobile terminal's screen in a variety of usage environments, misjudged page turns are avoided, the accuracy and practicability of the terminal's page turning recognition are improved, and the user experience is improved accordingly.
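The walkthrough above describes a small state machine. A condensed sketch, reusing the state, region, and counter labels from the description (D1, D21/D22, M1/M2, G1 through G4) but with placeholder threshold values, since the patent fixes neither T1 nor T2:

```python
class PageTurnStateMachine:
    """Condensed sketch of the D1 / D21 / D22 flow described above.

    Per tick it receives the gaze region (or None), the recognised wrist
    action (or None), and the current time in seconds. T1/T2 values are
    illustrative assumptions.
    """

    def __init__(self, t1=1.0, t2=2.0):
        self.t1, self.t2 = t1, t2
        self.state = "D1"
        self.dwell_start = None    # G1/G2: (region, start time) or None
        self.depart_start = None   # G3/G4: start time or None

    def tick(self, focus_region, wrist_action, now):
        """Return 'page_up', 'page_down', or None."""
        if self.state == "D1":
            if focus_region in ("M1", "M2"):
                if self.dwell_start is None or self.dwell_start[0] != focus_region:
                    self.dwell_start = (focus_region, now)   # start G1/G2
                elif now - self.dwell_start[1] >= self.t1:
                    # Dwell reached T1: enter the wrist recognition state.
                    self.state = "D21" if focus_region == "M1" else "D22"
                    self.depart_start = None
            else:
                self.dwell_start = None   # focus left before T1: clear G1/G2
            return None

        # Wrist recognition states D21 / D22.
        wanted = "M1" if self.state == "D21" else "M2"
        if focus_region != wanted:
            if self.depart_start is None:
                self.depart_start = now   # start G3/G4
            elif now - self.depart_start >= self.t2:
                self.state, self.dwell_start = "D1", None   # back to D1
                return None
        else:
            self.depart_start = None      # focus returned: clear G3/G4

        if self.state == "D21" and wrist_action == "rotate_up":
            return "page_up"
        if self.state == "D22" and wrist_action == "rotate_down":
            return "page_down"
        return None                        # all other motions are invalid
```

Driven once per camera frame, this reproduces the flow of fig. 3: dwell in M1 for T1 seconds, roll the wrist up, and a page-up fires; look away for T2 seconds and the machine drops back to D1.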
According to another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon which, when executed by a processor, cause the processor to implement the screen content page turning method described above.
According to another aspect of the present application, there is also provided a mobile terminal for turning pages of screen contents, comprising:
one or more processors;
a computer-readable medium for storing one or more computer-readable instructions,
wherein the computer-readable instructions, when executed by the one or more processors, cause the one or more processors to implement the screen content page turning method described above.
Here, for details of each embodiment of the mobile terminal, reference may be made to the corresponding parts of the method embodiments above, which are not repeated here.
In summary, the present application acquires the current user environment image in front of the mobile terminal's screen and the wrist action behavior data of the user using the terminal; performs user eyeball recognition and calculation on the environment image to obtain the user's eyeball position recognition result, and performs action recognition on the wrist action behavior data to obtain the user's wrist action; and, when the eyeball position recognition result matches the wrist action, executes the corresponding page turning operation on the content displayed on the screen. Whether a page turn is executed, and which one, is thus decided by combining the recognized wrist action with the eyeball position recognition result, so that the corresponding page turning operation is executed more accurately in a variety of usage environments, misjudged page turns are avoided, the accuracy and practicability of the mobile terminal's page turning recognition are improved, and the user experience is improved.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (9)

1. A method for turning pages of screen contents is applied to a mobile terminal, and is characterized in that the method comprises the following steps:
acquiring a current user environment image in front of a screen of a mobile terminal and wrist action behavior data of a user using the mobile terminal;
carrying out user eyeball identification and calculation on the current user environment image to obtain an eyeball position identification result of the user, and carrying out action identification on the wrist action behavior data to obtain the wrist action of the user;
and when the eyeball position recognition result of the user is matched with the wrist action, performing corresponding page turning operation on the content displayed on the screen.
2. The method according to claim 1, wherein the performing user eyeball identification and calculation on the current user environment image to obtain an eyeball position identification result of the user, and performing motion identification on the wrist motion behavior data to obtain the wrist motion of the user comprises:
identifying and calculating eyeballs of the user on the current user environment image to obtain the focal position of the focal point of the eyeballs of the user on the screen;
judging whether the focus position is in a preset page turning area of the screen;
if yes, determining an eyeball position identification result for indicating that the focus position is in the preset page turning area, and starting accumulating the focusing time of the page turning area;
and when the focusing time of the page turning area is greater than or equal to a preset page turning time threshold, entering a wrist action recognition state, and performing action recognition on the wrist action behavior data to obtain the wrist action of the user.
3. The method according to claim 2, wherein the preset page turning region comprises a preset page up turning region and a preset page down turning region, and wherein when the eyeball position recognition result of the user matches the wrist action, performing a corresponding page turning operation on the content displayed on the screen comprises:
when the eyeball position recognition result of the user indicates that the focus position is in a preset page-up area of the screen and the wrist action indicates that the wrist rotates upwards, executing corresponding page-up operation on the content displayed on the screen;
and when the eyeball position recognition result of the user indicates that the focus position is in a preset page turning down area of the screen and the wrist action indicates that the wrist rotates downwards, executing corresponding page turning down operation on the content displayed on the screen.
4. The method of claim 2 or 3, wherein the method further comprises:
when the accumulated page-turning-region focusing time is smaller than the preset page turning time threshold and the focus of the user's eyeball leaves the preset page turning region, stopping accumulation of the page-turning-region focusing time, clearing it, and returning to the step of performing user eyeball recognition and calculation on the current user environment image.
5. The method of claim 2 or 3, wherein the method further comprises:
when in the wrist action recognition state, continuing to perform user eyeball recognition and calculation on the current user environment image;
if the continuously recognized focus position of the user's eyeball on the screen leaves the preset page turning region, accumulating the page-turning-region departure time;
and when the page-turning-region departure time is greater than or equal to a preset departure time threshold, returning to the step of performing user eyeball recognition and calculation on the current user environment image.
6. The method of claim 5, wherein the method further comprises:
when the accumulated page-turning-region departure time is smaller than the preset departure time threshold and the focus position of the user's eyeball on the screen is recognized to return to the preset page turning region, maintaining the wrist action recognition state while stopping accumulation of the page-turning-region departure time and clearing it.
7. The method according to any one of claims 1 to 6, wherein the obtaining wrist action behavior data of the user using the mobile terminal comprises:
acquiring wrist action behavior data of a user using the mobile terminal through an action acquisition sensor;
performing motion recognition on the wrist motion behavior data to obtain the wrist motion of the user, including:
and performing gravity acceleration calculation and motion recognition on the wrist motion behavior data to obtain the wrist motion of the user.
8. A computer readable medium having computer readable instructions stored thereon, which, when executed by a processor, cause the processor to implement the method of any one of claims 1 to 7.
9. A mobile terminal for turning pages of screen contents, the mobile terminal comprising:
one or more processors;
a computer-readable medium storing one or more computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
CN202010148558.XA 2020-03-05 2020-03-05 Screen content page turning method and mobile terminal Pending CN111382691A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010148558.XA CN111382691A (en) 2020-03-05 2020-03-05 Screen content page turning method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010148558.XA CN111382691A (en) 2020-03-05 2020-03-05 Screen content page turning method and mobile terminal

Publications (1)

Publication Number Publication Date
CN111382691A true CN111382691A (en) 2020-07-07

Family

ID=71218696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010148558.XA Pending CN111382691A (en) 2020-03-05 2020-03-05 Screen content page turning method and mobile terminal

Country Status (1)

Country Link
CN (1) CN111382691A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103197755A (en) * 2012-01-04 2013-07-10 中国移动通信集团公司 Page turning method, device and terminal
CN103365417A (en) * 2013-06-20 2013-10-23 天津市莱科信息技术有限公司 Mobile terminal and electronic book page turning method based on same
CN104090649A (en) * 2014-05-20 2014-10-08 上海翰临电子科技有限公司 Intelligent watchband and operating control method thereof
CN105988574A (en) * 2015-02-16 2016-10-05 阿里巴巴集团控股有限公司 Display control method for intelligent wearable device and intelligent wearable device
CN106339069A (en) * 2015-07-08 2017-01-18 阿里巴巴集团控股有限公司 Screen processing method and device
CN107357430A (en) * 2017-07-13 2017-11-17 湖南海翼电子商务股份有限公司 The method and apparatus of automatic record reading position
CN109343707A (en) * 2018-11-07 2019-02-15 圣才电子书(武汉)有限公司 The control method and device of e-book reading
CN109858958A (en) * 2019-01-17 2019-06-07 深圳壹账通智能科技有限公司 Aim client orientation method, apparatus, equipment and storage medium based on micro- expression
CN110570200A (en) * 2019-08-16 2019-12-13 阿里巴巴集团控股有限公司 payment method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tan Siming et al.: "Patent Analysis of Key Technologies for Smart TV", 31 August 2015 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112286358A (en) * 2020-11-02 2021-01-29 恒大新能源汽车投资控股集团有限公司 Screen operation method and device, electronic equipment and computer-readable storage medium
CN112506337A (en) * 2020-11-10 2021-03-16 深圳市有方科技股份有限公司 Operation request processing method and device, computer equipment and storage medium
CN112506337B (en) * 2020-11-10 2024-04-12 东莞有方物联网科技有限公司 Operation request processing method, device, computer equipment and storage medium
CN114615394A (en) * 2022-03-07 2022-06-10 云知声智能科技股份有限公司 Word extraction method and device, electronic equipment and storage medium
CN115050133A (en) * 2022-05-31 2022-09-13 山东亚华电子股份有限公司 Dynamic data display method and device
CN115050133B (en) * 2022-05-31 2024-01-16 山东亚华电子股份有限公司 Dynamic data display method and device

Similar Documents

Publication Publication Date Title
CN111382691A (en) Screen content page turning method and mobile terminal
JP7407856B2 (en) Efficient image analysis using environmental sensor data
US10410046B2 (en) Face location tracking method, apparatus, and electronic device
US20170192500A1 (en) Method and electronic device for controlling terminal according to eye action
EP2864932B1 (en) Fingertip location for gesture input
CN102346859B (en) Character recognition device
US9880640B2 (en) Multi-dimensional interface
US8373654B2 (en) Image based motion gesture recognition method and system thereof
US9235278B1 (en) Machine-learning based tap detection
US20140157209A1 (en) System and method for detecting gestures
JP5665140B2 (en) Input device, input method, and program
US20130182898A1 (en) Image processing device, method thereof, and program
CN102906671A (en) Gesture input device and gesture input method
US11263634B2 (en) Payment method and device
EP3109797A1 (en) Method for recognising handwriting on a physical surface
US9400575B1 (en) Finger detection for element selection
US20240061516A1 (en) Local perspective method and device of virtual reality equipment and virtual reality equipment
US9148537B1 (en) Facial cues as commands
US9041689B1 (en) Estimating fingertip position using image analysis
CN106547339B (en) Control method and device of computer equipment
CN111757156B (en) Video playing method, device and equipment
US10082936B1 (en) Handedness determinations for electronic devices
CN113282167B (en) Interaction method and device of head-mounted display equipment and head-mounted display equipment
CN115097928A (en) Gesture control method and device, electronic equipment and storage medium
CN113835664A (en) Information processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200707