CN113986009A - Display state control method and device, computer equipment and storage medium - Google Patents

Display state control method and device, computer equipment and storage medium

Info

Publication number
CN113986009A
Authority
CN
China
Prior art keywords
target
displaying
target object
prompt information
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111254850.0A
Other languages
Chinese (zh)
Inventor
黄婕
刘珈邑
宋晓彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111254850.0A priority Critical patent/CN113986009A/en
Publication of CN113986009A publication Critical patent/CN113986009A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a display state control method, apparatus, computer device and storage medium, wherein the method comprises: in the process of watching the target content, responding to the condition that a target user side meets a trigger condition, and displaying a popup window corresponding to a target page; responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state on the target page; displaying action prompt information for updating the display state of the target object, and acquiring a user image; and determining the eye movement of a target user based on the user image, and displaying a target animation for updating the display state of the target object when detecting that the eye movement meets the action requirement of the action prompt information.

Description

Display state control method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a display state control method and apparatus, a computer device, and a storage medium.
Background
During work, study, and entertainment, users often watch electronic screens for a long time, causing eye strain.
In the related art, eye fatigue is typically relieved by the user voluntarily doing eye exercises or applying eye drops. However, the process of doing eye exercises is tedious, making it difficult for the user to persist through the whole set of actions; moreover, eye drops can cause a degree of damage to the conjunctival goblet cells of the eyes and are not suitable for frequent use.
Disclosure of Invention
The embodiment of the disclosure at least provides a display state control method, a display state control device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a display state control method, including:
in the process of watching the target content, responding to the condition that a target user side meets a trigger condition, and displaying a popup window corresponding to a target page;
responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state on the target page;
displaying action prompt information for updating the display state of the target object, and acquiring a user image;
and determining the eye movement of a target user based on the user image, and displaying a target animation for updating the display state of the target object when detecting that the eye movement meets the action requirement of the action prompt information.
In a possible embodiment, the trigger condition includes: the watching time length exceeds the preset time length, and/or the number of the watching contents exceeds the preset number;
the target user side is a user side in an online classroom.
In a possible embodiment, presenting the action prompt information for updating the presentation status of the target object includes:
displaying action prompt information at a first preset position of the target page, wherein the action prompt information is at least one of image information, video information and text information; or,
playing voice action prompt information.
In a possible embodiment, the presenting the action prompt information for updating the presentation status of the target object includes:
and displaying action prompt information matched with the current state of the target object, wherein different action prompt information is used for indicating the user to make different eye actions.
In one possible embodiment, after presenting the target animation, the method further comprises:
and returning to the step of executing the action prompt information display until the updated display state of the target object is the target state.
In a possible implementation, in a case that it is detected that the eye action does not satisfy the action requirement of the action prompt information, the method further includes:
and returning to the step of displaying the action prompt information.
In a possible embodiment, the presenting the target object in the initial state on the target page includes:
acquiring attribute information of a user side, and determining the target object matched with the attribute information of the user side;
and displaying the target object in an initial state on the target page.
In a possible embodiment, the presenting the action prompt information for updating the presentation status of the target object includes:
acquiring browsing information of the target user side and attribute information of the target user;
determining target action prompt information based on the browsing information and the attribute information;
and displaying the target action prompt information.
In a possible embodiment, the method further comprises:
responding to a page switching instruction, and jumping to a historical page;
acquiring a historical target object corresponding to the user side and a display state of the historical target object; the historical target object is an object displayed in the target page in a historical display process, and the display state of the historical target object is the final state of the historical target object in a single display process;
and respectively displaying the historical target objects according to the display states of the historical target objects.
In a second aspect, an embodiment of the present disclosure further provides a display state control device, including:
the first display module is used for responding to the condition that a target user side meets a trigger condition in the process of watching the target content and displaying a popup window corresponding to a target page;
the second display module is used for responding to the trigger operation aiming at the popup window, displaying the target page and displaying the target object in the initial state on the target page;
the acquisition module is used for displaying action prompt information used for updating the display state of the target object and acquiring a user image;
and the third display module is used for determining the eye movement of the target user based on the user image and displaying the target animation used for updating the display state of the target object under the condition that the eye movement is detected to meet the action requirement of the action prompt information.
In a possible embodiment, the trigger condition includes: the watching time length exceeds the preset time length, and/or the number of the watching contents exceeds the preset number;
the target user side is a user side in an online classroom.
In a possible implementation manner, the obtaining module, when presenting the action prompt information for updating the presentation status of the target object, is configured to:
displaying action prompt information at a first preset position of the target page, wherein the action prompt information is at least one of image information, video information and text information; or,
playing voice action prompt information.
In a possible implementation manner, the obtaining module, when presenting the action prompt information for updating the presentation status of the target object, is configured to:
and displaying action prompt information matched with the current state of the target object, wherein different action prompt information is used for indicating the user to make different eye actions.
In a possible implementation manner, the third presentation module, after presenting the target animation, is further configured to:
and returning to the step of executing the action prompt information display until the updated display state of the target object is the target state.
In a possible implementation manner, the third presentation module, in a case that it is detected that the eye action does not satisfy the action requirement of the action prompt information, is further configured to:
and returning to the step of displaying the action prompt information.
In a possible implementation manner, the second presentation module, when presenting the target object in the initial state on the target page, is configured to:
acquiring attribute information of a user side, and determining the target object matched with the attribute information of the user side;
and displaying the target object in an initial state on the target page.
In a possible implementation manner, the obtaining module, when presenting the action prompt information for updating the presentation status of the target object, is configured to:
acquiring browsing information of the target user side and attribute information of the target user;
determining target action prompt information based on the browsing information and the attribute information;
and displaying the target action prompt information.
In a possible implementation manner, the third display module is further configured to:
responding to a page switching instruction, and jumping to a historical page;
acquiring a historical target object corresponding to the user side and a display state of the historical target object; the historical target object is an object displayed in the target page in a historical display process, and the display state of the historical target object is the final state of the historical target object in a single display process;
and respectively displaying the historical target objects according to the display states of the historical target objects.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, this disclosed embodiment also provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
According to the display state control method, the display state control device, the computer equipment and the storage medium provided by the embodiment of the disclosure, firstly, in the process of watching the target content, after the target user side meets the trigger condition, a popup window corresponding to a target page is displayed, so that a target user who watches an electronic screen for too long time is automatically reminded to perform eye actions (such as doing eye exercises); then, responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state and action prompt information for updating the display state of the target object on the target page, so as to guide a target user to execute eye action; after the user image is obtained, the eye action of the target user is determined based on the user image, so that the aim of automatically detecting whether the target user executes the eye action is fulfilled; and finally, under the condition that the eye action meets the action requirement of the action prompt information, displaying the target animation for updating the display state of the target object, so that the interest of the target user in executing the eye action can be increased, and the user can be better guided to carry out eye protection.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, as those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a presentation state control method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a specific method for displaying action prompt information and displaying a target animation in a display state control method provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating an architecture of a presentation state control apparatus according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
During work, study, and entertainment, users often watch electronic screens for a long time, causing eye strain.
In the related art, eye fatigue is typically relieved by the user voluntarily doing eye exercises or applying eye drops. However, the process of doing eye exercises is tedious, making it difficult for the user to persist through the whole set of actions; moreover, eye drops can cause a degree of damage to the conjunctival goblet cells of the eyes and are not suitable for frequent use.
Based on the research, the present disclosure provides a display state control method, apparatus, computer device and storage medium, wherein first, in the process of viewing target content, a popup window corresponding to a target page may be displayed after a target user side meets a trigger condition, so as to automatically remind a target user viewing an electronic screen for an excessively long time to perform an eye action (e.g., perform an eye exercise); then, responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state and action prompt information for updating the display state of the target object on the target page, so as to guide a target user to execute eye action; after the user image is obtained, the eye action of the target user is determined based on the user image, so that the aim of automatically detecting whether the target user executes the eye action is fulfilled; and finally, under the condition that the eye action meets the action requirement of the action prompt information, displaying the target animation for updating the display state of the target object, so that the interest of the target user in executing the eye action can be increased, and the user can be better guided to carry out eye protection.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a detailed description is given of a display state control method disclosed in an embodiment of the present disclosure, where an execution subject of the display state control method provided in the embodiment of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the presentation state control method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a display state control method provided in the embodiment of the present disclosure is shown, where the method includes steps 101 to 104, where:
step 101, in the process of watching target content, responding to that a target user side meets a trigger condition, and displaying a popup window corresponding to a target page;
102, responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state on the target page;
103, displaying action prompt information for updating the display state of the target object, and acquiring a user image;
and step 104, determining the eye movement of the target user based on the user image, and displaying the target animation for updating the display state of the target object when detecting that the eye movement meets the action requirement of the action prompt information.
The following is a detailed description of the above steps:
for step 101,
In a possible application scenario, the process of viewing the target content may be, for example, a process of a student performing an online course, or a process of a user viewing a video, a novel, or a live broadcast.
In a possible embodiment, the triggering condition may include that the viewing duration (which refers to the total viewing duration of a single time) exceeds a preset duration, and/or that the number of viewing contents exceeds a preset number. In a possible embodiment, the target user terminal may be a user terminal in an online classroom.
For example, if the target user side is a user side in an online classroom, the trigger condition may be that the target user's time spent watching the live teaching broadcast, teaching courseware, and teaching videos exceeds 20 minutes, or that the number of teaching videos watched exceeds 3.
In a possible implementation manner, when determining whether the target user side meets the trigger condition, it is required to first obtain browsing information of the target user side, and determine whether the target user side meets the trigger condition based on the browsing information; the browsing information at least includes viewing content and browsing time, where the browsing time may include a start browsing time and an end browsing time of each target content, and the start browsing time and the end browsing time are respectively used to represent a start viewing time and an end viewing time of any viewing content.
Specifically, if the triggering condition is that the viewing duration is greater than a preset duration, the target user side needs to determine the total viewing duration based on the browsing time; and if the triggering condition is that the number of the watching contents exceeds a preset number, the target user side needs to count the number of the watching contents.
For example, when the total viewing duration is determined based on the browsing time, the total viewing duration may be obtained by subtracting the browsing start time of the earliest played viewing content from the current time, or an interval duration (e.g., 30 minutes) may be set, and if there is no browsing information in the interval duration, the total viewing duration may be obtained by subtracting the browsing start time of the first viewing content after the interval duration from the current time; in determining the number of the viewing contents, the number of viewing contents viewed on the day may be counted.
By adopting the method, whether the eyes of the target user start to be fatigued or not can be accurately judged, so that whether the target user needs to start to execute the eye action or not is determined.
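The duration-based trigger check described above can be sketched as follows. This is a minimal illustration only: the session record format, the function names, and the exact use of the 30-minute idle-gap and 20-minute thresholds are assumptions for the sketch, not details fixed by the disclosure.

```python
from datetime import datetime, timedelta

IDLE_GAP = timedelta(minutes=30)          # assumed interval duration
PRESET_DURATION = timedelta(minutes=20)   # assumed preset viewing duration

def total_viewing_duration(sessions, now):
    """sessions: list of (start_time, end_time) browsing records, oldest
    first. As in the example above, counting restarts after any idle gap
    longer than IDLE_GAP; the total runs from that point to now."""
    if not sessions:
        return timedelta(0)
    start = sessions[0][0]
    for (_, prev_end), (next_start, _) in zip(sessions, sessions[1:]):
        if next_start - prev_end > IDLE_GAP:
            start = next_start  # long break: restart counting here
    return now - start

def meets_trigger_condition(sessions, now, content_count, preset_count=3):
    """Trigger when the viewing duration exceeds the preset duration
    and/or the number of viewed contents exceeds the preset number."""
    return (total_viewing_duration(sessions, now) > PRESET_DURATION
            or content_count > preset_count)
```

For instance, with a 60-minute break between two sessions, only the time since the start of the second session counts toward the threshold.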
In a possible implementation manner, when the popup corresponding to the target page is displayed, the popup may display a reminder message prompting the target user to take a rest and/or perform eye actions. For example, the reminder message may be "You have been using your eyes for too long; play a quick game to relieve eye fatigue!"
With respect to step 102,
In a possible implementation manner, the popup can be linked to the target page, or any icon or any text information on the popup can be linked to the target page, so that the popup can display the target page after responding to a triggering operation for the popup or any icon or any text information; wherein the triggering operation may include: any one of a single click, a double click, a long press, and a slide.
For example, a circular start icon may be displayed on the pop-up window, and the target page may be displayed after the target user clicks the start icon.
In a possible implementation manner, the target page includes a first preset position and a second preset position, the first preset position is used for displaying the action prompt information, and the second preset position is used for displaying the target object.
Here, it should be noted that if the action prompt information is at least one of image information, video information and text information, it may be displayed at the first preset position; if it is voice action prompt information, it does not need to be displayed on the page.
In a possible implementation manner, when the target object in the initial state is displayed on the target page, a still picture of the target object in the initial state may be displayed, and a dynamic video of the target object in the initial state may also be displayed. For example, if the target object is a seedling, the static picture may be a cartoon of the seedling, and the dynamic video may be an animation of leaf flapping of the seedling.
In a possible implementation manner, when a target object in an initial state is displayed on the target page, attribute information of a user side may be obtained first, and the target object matched with the attribute information of the user side is determined; and then displaying the target object in the initial state on the target page. Wherein the attribute information may be the age, sex, etc. of the target user, the target object matched with a girl of 5 years old may be a bunny rabbit, and the target object matched with a girl of 8 years old may be a little tiger.
Specifically, a plurality of different target objects in the initial state may be stored in a database or a server in advance, and after determining a target object matching the attribute information, the target object matching the attribute information in the initial state may be acquired from the database or the server.
In one possible implementation, in determining the attribute information of the user side, the attribute information may be determined based on browsing information, search information, and the like of the target user.
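The attribute-matching step above can be sketched as a simple rule lookup. The age brackets, genders, and default object below are illustrative assumptions built around the bunny/little-tiger example in the text, not a mapping prescribed by the disclosure.

```python
# Hypothetical lookup table following the example in the text; in the
# disclosure the objects themselves would be fetched in their initial
# state from a database or server.
TARGET_OBJECT_RULES = [
    {"max_age": 6, "gender": "female", "object": "bunny"},
    {"max_age": 10, "gender": "female", "object": "little tiger"},
]
DEFAULT_OBJECT = "seedling"  # assumed fallback

def match_target_object(age, gender):
    """Return the first target object whose rule matches the user-side
    attribute information, falling back to a default object."""
    for rule in TARGET_OBJECT_RULES:
        if age <= rule["max_age"] and gender == rule["gender"]:
            return rule["object"]
    return DEFAULT_OBJECT
```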
For step 103,
Here, the target object is an object having a plurality of display states, which may include an initial state, a plurality of intermediate states (e.g., a first intermediate state, a second intermediate state, a third intermediate state, etc.), and a final state. For example, if the target object is a tree, the initial state may be a seedling, the plurality of intermediate states may be small trees lower than a preset height, and the final state may be a large tree reaching the preset height.
In a possible implementation manner, when the action prompt information used for updating the display state of the target object is displayed, the action prompt information matched with the current state of the target object is displayed, and different action prompt information is used for indicating the user to make different eye actions.
Specifically, the action prompt information of the target object in different display states may be preset, for example, the initial state of the target object corresponds to a diagram a (diagram a is the action prompt information), and the first intermediate state of the target image corresponds to a diagram B (diagram B is the action prompt information).
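The per-state prompt selection can be sketched as a lookup keyed by the current display state. The state names and the "diagram A"/"diagram B" entries follow the example above; the remaining entries are invented for the sketch.

```python
# Assumed state sequence: initial -> intermediate states -> final.
STATE_SEQUENCE = ["initial", "intermediate_1", "intermediate_2", "final"]

# Preset action prompt information per display state (illustrative).
PROMPT_BY_STATE = {
    "initial": "diagram A (please blink)",
    "intermediate_1": "diagram B (follow the indicator icon)",
    "intermediate_2": "diagram C (close your eyes for 3 seconds)",
}

def prompt_for(state):
    """Return the action prompt matched to the current display state;
    the final state needs no further prompt."""
    return PROMPT_BY_STATE.get(state)

def next_state(state):
    """Advance the display state by one step, stopping at the final state."""
    i = STATE_SEQUENCE.index(state)
    return STATE_SEQUENCE[min(i + 1, len(STATE_SEQUENCE) - 1)]
```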
In one possible embodiment, the eye actions may include blinking, closing the eyes, rotating the eyeball (i.e., moving the focus point), alternately closing the left and right eyes, and the like; the movement of the focus point may include moving the focus point left, right, up, or down.
In a possible implementation manner, when displaying action prompt information for updating the display state of the target object, the action prompt information may be displayed at a first preset position of the target page, where the action prompt information is at least one of image information, video information, and text information; alternatively, voice action prompts may be played directly.
For example, if the action prompt information requires a blinking action, the action prompt information may include pictures of open and closed eyes, a blinking video, and the text "please blink!", and the voice action prompt information may be an audio clip of "please blink".
For example, if the action request is the movement of the point of interest, the action prompt message may include a text message of "please follow the indicator icon to rotate the eyeball", and a moving indicator icon for guiding the direction of the movement of the point of interest of the target user. By adopting the method, the target user can exercise the eye muscles while obtaining interest.
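The disclosure does not specify how the eye action is detected from the user image. One common approach, used here purely as an assumed stand-in and not as the patented method, is the eye aspect ratio (EAR) computed over six eye landmarks, which drops toward zero when the eye closes; blinks then show up as open-to-closed transitions across frames.

```python
import math

def eye_aspect_ratio(landmarks):
    """landmarks: six (x, y) eye points in the usual EAR ordering
    p1..p6 (p1/p4 the horizontal corners, p2/p6 and p3/p5 vertical
    pairs). The ratio falls toward zero as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks as transitions from open (EAR above threshold)
    to closed (EAR below threshold) over consecutive frames."""
    blinks, was_closed = 0, False
    for ear in ear_series:
        closed = ear < closed_thresh
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks
```

The 0.2 threshold is a typical illustrative value; a real system would calibrate it per user and camera.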
In a possible implementation manner, when a completely clear image of the target user cannot be obtained, the target user may be prompted to adjust the relative position between the camera and the eyes. For example, when it is detected that the target user is too far from the camera, a popup may show "please move your eyes closer to the camera"; when the image is blurry, the target user may be prompted with "please hold the device steady" or "please move to a place with better light".
With respect to step 104,
Wherein the action requirement may include an action type, an execution number, an execution time, and the like.
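An action requirement of this shape can be modeled as a small record of action type, execution count, and time window; the field names and the check below are assumptions for illustration, not a structure defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ActionRequirement:
    """Assumed shape of an action requirement: the type of eye action,
    how many times it must be executed, and the allowed time window."""
    action_type: str
    executions: int
    time_limit_s: float

def requirement_satisfied(req, detected_type, detected_count, elapsed_s):
    """The detected eye action satisfies the requirement when the type
    matches, enough executions were observed, and it fit in the window."""
    return (detected_type == req.action_type
            and detected_count >= req.executions
            and elapsed_s <= req.time_limit_s)
```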
For example, if the target object is an apple tree, the target animation for updating the display state of the target object may be an animation of an apple seedling growing into a large apple tree.
In a possible implementation manner, when it is detected that the eye action meets the action requirement of the action prompt information, a praise voice, praise text, an operation animation of performing a target operation on the target object, and a celebratory special effect may be displayed.

For example, if the target object is an apple tree, the praise voice may be voice information saying "You're awesome", the praise text may be text information saying "Congratulations, you succeeded", the operation animation may be watering the target object, and the celebratory special effect may be a firework special effect.
In one possible application scenario, to alleviate eye strain of a user, a set of eye movement schemes typically includes a plurality of eye movements, each of which requires multiple executions.
Therefore, in a possible implementation manner, after the target animation is shown, the flow may return to the step of displaying the action prompt information, until the updated display state of the target object is the target state.
Specifically, the method shown in fig. 2 includes steps 201 to 206:
step 201, displaying action prompt information for updating the display state of the target object, and acquiring a user image;
step 202, determining the eye motion of a target user based on the user image;
step 203, judging whether the eye action meets the action requirement of the action prompt information;
if yes, execute step 204;
if not, returning to execute the step 201;
here, returning to execute step 201, the displayed action prompt information is the action prompt information related to the updated display state of the target object;
step 204, displaying a target animation for updating the display state of the target object;
step 205, judging whether the updated display state of the target object is a target state;
if yes, go to step 206;
if not, returning to execute the step 201;
here, returning to execute step 201, the displayed action prompt information is the action prompt information related to the current display state of the target object, that is, the action prompt information corresponding to the eye action currently performed by the user;
and step 206, displaying next action prompt information for updating the display state of the target object.
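The loop of steps 201 to 206 can be sketched as follows; the class and function names, the state list, and the way states advance are illustrative assumptions rather than the disclosed implementation:

```python
class TargetObject:
    """Minimal stand-in for the displayed target object (illustrative)."""
    def __init__(self, states):
        self._states = states  # ordered display states; the last one is the target state
        self._index = 0

    @property
    def state(self):
        return self._states[self._index]

    def advance(self):
        # Move to the next display state, if any.
        if self._index < len(self._states) - 1:
            self._index += 1


def run_eye_exercise(target, prompts, final_state, action_detected, log):
    """Drive the loop of steps 201-206: prompt, detect, animate, re-check state."""
    while target.state != final_state:
        prompt = prompts[target.state]            # step 201 (or step 206)
        log.append(("prompt", prompt))
        if not action_detected(prompt):           # steps 202-203
            continue                              # not met: show the same prompt again
        log.append(("animation", target.state))   # step 204: show the target animation
        target.advance()                          # step 205 then checks the new state
```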
Illustratively, if the target object is in the first intermediate state and the action prompt information instructs the target user to blink, then after the target image confirms that the target user has completed the blinking action, if the target object is still in the first intermediate state, the same action prompt information is displayed again to instruct the target user to blink; when the target object changes to the second intermediate state, the next action prompt information is displayed to instruct the target user to move the point of focus to the left.
In a possible implementation manner, when it is detected that the eye action does not meet the action requirement of the action prompt information, the flow returns to the step of displaying the action prompt information.
For the above example, the method shown in fig. 2 may include steps 201 to 206, which are not described herein again.
For example, if the action prompt information is "Please start blinking", that is, the action requirement is blinking, then when it is detected that the target user has not blinked, the action prompt information "Please start blinking" is displayed again.
In a possible implementation manner, in a case where it is detected that the eye action does not meet the action requirement of the action prompt information, failure prompt information may be displayed and/or failure voice prompt information may be played before returning to the step of displaying the action prompt information; the failure prompt information may be at least one of image information, video information, and text information, for example, "Please do this action again".
In a possible application scenario, the viewing time lengths of different target users are different, and the rest time required by the eyes of the target users of different ages is also different, so that different eye movement schemes should be formulated for different target users.
Here, the different eye movement schemes may differ, for example, in the number of times any one eye movement is performed, and/or in the number of kinds of eye movements that any one scheme includes.
Therefore, in a possible implementation manner, when displaying the action prompt information for updating the display state of the target object, the browsing information of the target user side and the attribute information of the target user may be obtained first; target action prompt information is then determined based on the browsing information and the attribute information; and the target action prompt information is displayed. The attribute information may include, for example, the age of the target user.
Specifically, an eye movement scheme matching the target user is determined from multiple preset eye movement schemes based on the browsing information and the attribute information, and the target action prompt information is then determined from the action prompt information in that scheme.
Illustratively, for a target user whose age is lower than a preset standard age or whose viewing duration is longer than a preset standard time, scheme A with a larger number of executions (e.g., 10 times) is adopted; for a target user whose age is higher than the preset standard age and whose viewing duration is shorter than the preset standard time, scheme B with a smaller number of executions (e.g., 5 times) is adopted.
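This selection rule can be sketched as follows; the standard age, standard viewing time, and the dictionary layout are illustrative assumptions, while the repetition counts (10 vs. 5) follow the example above:

```python
def select_scheme(age, viewing_minutes, standard_age=12, standard_minutes=40):
    """Pick an eye movement scheme from the user's age and viewing duration.

    `standard_age` and `standard_minutes` are hypothetical thresholds; the
    disclosure only says such presets exist, not their values.
    """
    if age < standard_age or viewing_minutes > standard_minutes:
        return {"name": "A", "repetitions": 10}  # younger user or longer viewing
    return {"name": "B", "repetitions": 5}       # older user and shorter viewing
```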
In one possible implementation, the method may also respond to a page switching instruction and jump to a history page; acquire a historical target object corresponding to the user side and the display state of the historical target object; and then display the historical target objects according to their display states. The page switching instruction may be, for example, a click on a target button; the history page is used to display historical target objects; a historical target object is an object displayed on the target page in a historical display process, and the display state of a historical target object is its final state in a single display process.
Specifically, a third preset position is set in the history page, the third preset position is used for displaying a plurality of history target objects, and a display direction and a display sequence of the plurality of history target objects may also be set, and the display sequence may be exemplarily sorted according to history display time.
Illustratively, after the target user clicks the target button, the page jumps to the history page. If the historical target objects are object A, object B, and object C, with corresponding historical display times of September 1, September 4, and September 2 respectively, and the display direction is from left to right, then object A, object C, and object B are displayed in order from left to right; if object A is an apple tree, the final state of the displayed object A is a large apple tree bearing fruit.
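The left-to-right ordering by historical display time can be sketched as follows; the tuple layout for a historical target object is an illustrative assumption:

```python
from datetime import date

def order_history_objects(history):
    """Return object names sorted left-to-right by historical display date.

    `history` is assumed to be a list of (name, final_state, display_date)
    tuples; the disclosure does not specify a storage format.
    """
    return [name for name, _state, _d in sorted(history, key=lambda h: h[2])]
```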
In a possible implementation manner, history pages of other target users may be viewed in response to a selection instruction; for example, a friend list may be set, and the history page of any target user in the friend list can be viewed by selecting that user's name. This approach adds social and entertainment value and can encourage the target user to keep performing eye exercises.
The display state control method provided by the embodiment of the disclosure includes that firstly, in the process of watching target content, after a target user side meets a trigger condition, a popup window corresponding to a target page is displayed, so that a target user watching an electronic screen for too long time is automatically reminded of performing eye actions (such as doing eye exercises); then, responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state and action prompt information for updating the display state of the target object on the target page, so as to guide a target user to execute eye action; after the user image is obtained, the eye action of the target user is determined based on the user image, so that the aim of automatically detecting whether the target user executes the eye action is fulfilled; and finally, under the condition that the eye action meets the action requirement of the action prompt information, displaying the target animation for updating the display state of the target object, so that the interest of the target user in executing the eye action can be increased, and the user can be better guided to carry out eye protection.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a display state control device corresponding to the display state control method, and since the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the display state control method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 3, a schematic diagram of an architecture of a display state control apparatus according to an embodiment of the present disclosure is shown, where the apparatus includes: a first display module 301, a second display module 302, an obtaining module 303, and a third display module 304; wherein:
the first display module 301 is configured to, in a process of viewing target content, display a popup corresponding to a target page in response to a target user side meeting a trigger condition;
a second display module 302, configured to display the target page in response to a trigger operation for the popup window, and display a target object in an initial state on the target page;
an obtaining module 303, configured to display an action prompt message for updating a display state of the target object, and obtain a user image;
a third presentation module 304, configured to determine an eye movement of a target user based on the user image, and present a target animation for updating a presentation state of the target object when it is detected that the eye movement meets an action requirement of the action prompt information.
In a possible embodiment, the trigger condition includes: the watching time length exceeds the preset time length, and/or the number of the watching contents exceeds the preset number;
the target user side is a user side in an online classroom.
In a possible implementation manner, the obtaining module 303, when presenting the action prompt message for updating the presentation status of the target object, is configured to:
displaying action prompt information at a first preset position of the target page, wherein the action prompt information is at least one of image information, video information, and text information; or,
and playing voice action prompt information.
In a possible implementation manner, the obtaining module 303, when presenting the action prompt message for updating the presentation status of the target object, is configured to:
and displaying action prompt information matched with the current state of the target object, wherein different action prompt information is used for indicating the user to make different eye actions.
In one possible implementation, the third presentation module 304, after presenting the target animation, is further configured to:
and returning to the step of executing the action prompt information display until the updated display state of the target object is the target state.
In a possible implementation manner, the third presenting module 304, in case that it is detected that the eye motion does not satisfy the motion requirement of the motion prompt information, is further configured to:
and returning to the step of displaying the action prompt information.
In a possible implementation manner, the second presentation module 302, when presenting the target object in the initial state on the target page, is configured to:
acquiring attribute information of a user side, and determining the target object matched with the attribute information of the user side;
and displaying the target object in an initial state on the target page.
In a possible implementation manner, the obtaining module 303, when presenting the action prompt message for updating the presentation status of the target object, is configured to:
acquiring browsing information of the target user side and attribute information of the target user;
determining target action prompt information based on the browsing information and the attribute information;
and displaying the target action prompt information.
In a possible implementation manner, the third display module 304 is further configured to:
responding to a page switching instruction, and jumping to a historical page;
acquiring a historical target object corresponding to the user side and a display state of the historical target object; the historical target object is an object displayed in the target page in a historical display process, and the display state of the historical target object is the final state of the historical target object in a single display process;
and respectively displaying the historical target objects according to the display states of the historical target objects.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 4, a schematic structural diagram of a computer device 400 provided in the embodiment of the present disclosure includes a processor 401, a memory 402, and a bus 403. The memory 402 is used for storing execution instructions and includes an internal memory 4021 and an external memory 4022; the internal memory 4021 is configured to temporarily store operation data of the processor 401 and data exchanged with the external memory 4022, such as a hard disk. The processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the computer device 400 runs, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions:
in the process of watching the target content, responding to the condition that a target user side meets a trigger condition, and displaying a popup window corresponding to a target page;
responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state on the target page;
displaying action prompt information for updating the display state of the target object, and acquiring a user image;
and determining the eye movement of a target user based on the user image, and displaying a target animation for updating the display state of the target object when detecting that the eye movement meets the action requirement of the action prompt information.
In a possible implementation manner, in the instructions executed by the processor 401, the triggering condition includes: the watching time length exceeds the preset time length, and/or the number of the watching contents exceeds the preset number;
the target user side is a user side in an online classroom.
In a possible implementation manner, the presenting, in the instructions executed by the processor 401, the action prompt information for updating the presentation state of the target object includes:
displaying action prompt information at a first preset position of the target page, wherein the action prompt information is at least one of image information, video information, and text information; or,
and playing voice action prompt information.
In a possible implementation manner, the presenting, in the instructions executed by the processor 401, the action prompt information for updating the presentation state of the target object includes:
and displaying action prompt information matched with the current state of the target object, wherein different action prompt information is used for indicating the user to make different eye actions.
In a possible implementation manner, in the instructions executed by the processor 401, after the target animation is shown, the method further includes:
and returning to the step of executing the action prompt information display until the updated display state of the target object is the target state.
In a possible implementation manner, in the case that the processor 401 executes instructions, and it is detected that the eye motion does not satisfy the motion requirement of the motion prompt information, the method further includes:
and returning to the step of displaying the action prompt information.
In a possible implementation manner, in the instructions executed by the processor 401, the presenting the target object in the initial state on the target page includes:
acquiring attribute information of a user side, and determining the target object matched with the attribute information of the user side;
and displaying the target object in an initial state on the target page.
In a possible implementation manner, the presenting, in the instructions executed by the processor 401, the action prompt information for updating the presentation state of the target object includes:
acquiring browsing information of the target user side and attribute information of the target user;
determining target action prompt information based on the browsing information and the attribute information;
and displaying the target action prompt information.
In a possible implementation manner, in the instructions executed by the processor 401, the method further includes:
responding to a page switching instruction, and jumping to a historical page;
acquiring a historical target object corresponding to the user side and a display state of the historical target object; the historical target object is an object displayed in the target page in a historical display process, and the display state of the historical target object is the final state of the historical target object in a single display process;
and respectively displaying the historical target objects according to the display states of the historical target objects.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the display state control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the display state control method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A display state control method is characterized by comprising the following steps:
in the process of watching the target content, responding to the condition that a target user side meets a trigger condition, and displaying a popup window corresponding to a target page;
responding to the trigger operation aiming at the popup window, displaying the target page, and displaying a target object in an initial state on the target page;
displaying action prompt information for updating the display state of the target object, and acquiring a user image;
and determining the eye movement of a target user based on the user image, and displaying a target animation for updating the display state of the target object when detecting that the eye movement meets the action requirement of the action prompt information.
2. The method of claim 1, wherein the trigger condition comprises: the watching time length exceeds the preset time length, and/or the number of the watching contents exceeds the preset number;
the target user side is a user side in an online classroom.
3. The method of claim 1, wherein presenting the action prompt message for updating the presentation status of the target object comprises:
displaying action prompt information at a first preset position of the target page, wherein the action prompt information is at least one of image information, video information, and text information; or,
and playing voice action prompt information.
4. The method of claim 1, wherein presenting the action prompt message for updating the presentation status of the target object comprises:
and displaying action prompt information matched with the current state of the target object, wherein different action prompt information is used for indicating the user to make different eye actions.
5. The method of claim 1 or 4, wherein after presenting the target animation, the method further comprises:
and returning to the step of executing the action prompt information display until the updated display state of the target object is the target state.
6. The method according to claim 1, wherein, in a case where it is detected that the eye action does not satisfy the action requirement of the action prompt information, the method further comprises:
and returning to the step of displaying the action prompt information.
7. The method of claim 1, wherein the presenting the target object in the initial state on the target page comprises:
acquiring attribute information of a user side, and determining the target object matched with the attribute information of the user side;
and displaying the target object in an initial state on the target page.
8. The method of claim 1, wherein presenting the action prompt message for updating the presentation status of the target object comprises:
acquiring browsing information of the target user side and attribute information of the target user;
determining target action prompt information based on the browsing information and the attribute information;
and displaying the target action prompt information.
9. The method of claim 1, further comprising:
responding to a page switching instruction, and jumping to a historical page;
acquiring a historical target object corresponding to the user side and a display state of the historical target object; the historical target object is an object displayed in the target page in a historical display process, and the display state of the historical target object is the final state of the historical target object in a single display process;
and respectively displaying the historical target objects according to the display states of the historical target objects.
10. A presentation status control apparatus, comprising:
the first display module is used for responding to the condition that a target user side meets a trigger condition in the process of watching the target content and displaying a popup window corresponding to a target page;
the second display module is used for responding to the trigger operation aiming at the popup window, displaying the target page and displaying the target object in the initial state on the target page;
the acquisition module is used for displaying action prompt information used for updating the display state of the target object and acquiring a user image;
and the third display module is used for determining the eye movement of the target user based on the user image and displaying the target animation used for updating the display state of the target object under the condition that the eye movement is detected to meet the action requirement of the action prompt information.
11. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the presentation state control method of any one of claims 1 to 9.
12. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the presentation state control method according to any one of claims 1 to 9.
CN202111254850.0A 2021-10-27 2021-10-27 Display state control method and device, computer equipment and storage medium Pending CN113986009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111254850.0A CN113986009A (en) 2021-10-27 2021-10-27 Display state control method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113986009A true CN113986009A (en) 2022-01-28

Family

ID=79742452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111254850.0A Pending CN113986009A (en) 2021-10-27 2021-10-27 Display state control method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113986009A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503568A (en) * 2014-12-05 2015-04-08 广东小天才科技有限公司 Method and device for realizing terminal use rule
CN108170258A (en) * 2016-12-06 2018-06-15 宋杰 A kind of smart mobile phone eyes protecting system and eye care method
CN109656504A (en) * 2018-12-11 2019-04-19 北京锐安科技有限公司 Screen eye care method, device, terminal and storage medium
CN109885362A (en) * 2018-11-30 2019-06-14 努比亚技术有限公司 Terminal and its eyeshield control method and computer readable storage medium
CN111223549A (en) * 2019-12-30 2020-06-02 华东师范大学 Mobile end system and method for disease prevention based on posture correction
CN112764543A (en) * 2021-01-21 2021-05-07 广东小天才科技有限公司 Information output method, terminal equipment and computer readable storage medium
CN112870035A (en) * 2021-01-08 2021-06-01 深圳创维-Rgb电子有限公司 Intelligent terminal eye protection exercise processing method and device, intelligent terminal and storage medium
CN112947830A (en) * 2021-03-11 2021-06-11 北京高途云集教育科技有限公司 Popup window display method and device, computer equipment and storage medium
CN112988002A (en) * 2021-03-30 2021-06-18 武汉悦学帮网络技术有限公司 Method and device for processing picture book, electronic equipment and storage medium
CN113553156A (en) * 2021-07-27 2021-10-26 北京悦学帮网络技术有限公司 Information prompting method and device, computer equipment and computer storage medium


Similar Documents

Publication Publication Date Title
US10242500B2 (en) Virtual reality based interactive learning
CN110837294B (en) Facial expression control method and system based on eyeball tracking
US20180025050A1 (en) Methods and systems to detect disengagement of user from an ongoing
Mascio et al. Designing games for deaf children: first guidelines
CN111708948A (en) Content item recommendation method, device, server and computer readable storage medium
CN110812843A (en) Interaction method and device based on virtual image and computer storage medium
KR101801332B1 (en) Mathmatics dictionary system of guide type
CN113986009A (en) Display state control method and device, computer equipment and storage medium
KR20150101756A (en) Method of learning words and system thereof
CN113553156A (en) Information prompting method and device, computer equipment and computer storage medium
CN114117106A (en) Intelligent interaction method, device, equipment and storage medium based on children's picture book
KR102543264B1 (en) Systems and methods for digital enhancement of hippocampal regeneration
CN112333473A (en) Interaction method, interaction device and computer storage medium
CN114531406A (en) Interface display method and device and storage medium
CN109726267B (en) Story recommendation method and device for story machine
CN113709308A (en) Usage monitoring method and device for electronic equipment
Fields Incorporation of Generational Learning in Familiar Interfaces and Systems: A Design Fiction
JP7440889B2 (en) Learning support systems and programs
CN113420131B (en) Reading guiding method, device and storage medium for children's drawing book
US11861776B2 (en) System and method for provision of personalized multimedia avatars that provide studying companionship
CN117193921A (en) Galloping method and system of mobile application software display interface
CN115738243A (en) Interaction control method and device, computer equipment and storage medium
CN112966143A (en) Task additional information collection method and device and electronic equipment
CN117354597A (en) Interaction method, device, electronic equipment and storage medium
CN113919328A (en) Method and device for generating article title

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination