CN110682912B - Data processing method, device and machine readable medium

Data processing method, device and machine readable medium

Info

Publication number
CN110682912B
Authority
CN
China
Prior art keywords
user
screen
notification
content
voice
Prior art date
Legal status
Active
Application number
CN201810629040.0A
Other languages
Chinese (zh)
Other versions
CN110682912A (en)
Inventor
姚维
许侃
马骥
邢冲
Current Assignee
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Banma Zhixing Network Hongkong Co Ltd
Priority date
Filing date
Publication date
Application filed by Banma Zhixing Network Hongkong Co Ltd
Priority to CN201810629040.0A
Publication of CN110682912A
Application granted
Publication of CN110682912B
Status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to drivers or passengers
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a data processing method, apparatus, device, and machine-readable medium. The method includes: acquiring a gaze point of a user according to eyeball data of the user; and, if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of a notification by voice. The embodiments of the present application can, to a certain extent, prevent the user from missing the information corresponding to the notification.

Description

Data processing method, device and machine readable medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data processing method, a data processing apparatus, a device, and a machine-readable medium.
Background
In order to interact better with the user, applications on terminals such as mobile phones and tablet computers can present received information or task status information to the user as notification-bar messages, so that the user can learn information related to an application without switching to that application.
Currently, the prompting process for a notification-bar message may include: displaying a banner notification corresponding to the notification-bar message at the top of the screen, possibly accompanied by an alert tone.
However, in some scenarios it may be inconvenient for the user to look at the terminal. For example, in a driving scenario, the user needs to observe the road conditions around the vehicle in order to drive safely, so it is inconvenient to look at the terminal. As another example, in a work scenario, the user needs to concentrate on work to remain productive, so it is likewise inconvenient to watch the terminal. In these scenarios, the user may not notice the banner notification displayed at the top of the screen, or, even if alerted to it by the tone, may be unable to view it in time and thus miss the information corresponding to the banner notification.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a data processing method that can, to a certain extent, prevent a user from missing the information corresponding to a notification.
Correspondingly, the embodiments of the present application also provide a data processing apparatus, a device, and a machine-readable medium to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present application discloses a data processing method, including:
acquiring a gaze point of a user according to eyeball data of the user;
and if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of a notification by voice.
On the other hand, the embodiment of the present application further discloses a data processing apparatus, including:
a gaze point acquisition module, configured to acquire a gaze point of a user according to eyeball data of the user; and
a notification content broadcasting module, configured to broadcast the content of a notification by voice if the user's gaze point is not on the screen or the dwell time of the user's gaze point on the screen does not exceed a first time threshold.
In another aspect, an embodiment of the present application further discloses an apparatus, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform one or more of the methods described above.
In yet another aspect, embodiments of the present application disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform one or more of the methods described above.
Compared with the prior art, the embodiment of the application has the following advantages:
in the embodiments of the present application, if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, the user's attention can be considered not to be on the screen; in this case, the content of the notification is broadcast by voice, which can, to a certain extent, prevent the user from missing the information corresponding to the notification.
Application scenarios of the embodiments of the present application may include driving scenarios, work scenarios, and the like. Taking a driving scenario as an example: to drive safely, the user needs to observe the road conditions around the vehicle, so the user's attention is on the driving task rather than on the terminal screen, and information corresponding to a notification is easily missed. In the embodiments of the present application, when the user's attention is not on the screen, the content of the notification is broadcast by voice, so the driver can stay focused without missing the information corresponding to the notification.
Drawings
FIG. 1 is an illustration of an application environment of a data processing method of the present application;
FIG. 2 is a flowchart of the steps of a first embodiment of a data processing method of the present application;
FIG. 3 is a flowchart of the steps of a second embodiment of a data processing method of the present application;
FIG. 4 is a flowchart of the steps of a third embodiment of a data processing method of the present application;
FIG. 5 is a flowchart of the steps of a fourth embodiment of a data processing method of the present application;
FIG. 6 is a flowchart of the steps of a fifth embodiment of a data processing method of the present application;
FIG. 7 is a flowchart of the steps of a sixth embodiment of a data processing method of the present application;
FIG. 8 is a flowchart of the steps of a seventh embodiment of a data processing method of the present application;
FIG. 9 is a flowchart of the steps of an eighth embodiment of a data processing method of the present application;
FIG. 10 is a schematic diagram of switching the interaction mode to the gesture interaction mode according to an embodiment of the present application;
FIG. 11 is a block diagram of a data processing apparatus according to an embodiment of the present application; and
FIG. 12 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
While the concepts of the present application are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are described herein in detail. It should be understood, however, that this description is not intended to limit the application to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.
Reference in the specification to "one embodiment," "an embodiment," "a particular embodiment," or the like means that the embodiment described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, where a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. In addition, it should be understood that items included in a list of the form "at least one of A, B, and C" may be: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Likewise, a list of items of the form "at least one of A, B, or C" may mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
In some cases, the disclosed embodiments may be implemented as hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be executed by one or more processors. A machine-readable storage medium may be implemented as a storage device, mechanism, or other physical structure (e.g., a volatile or non-volatile memory, a media disc, or another physical structure device) for storing or transmitting information in a form readable by a machine.
In the drawings, some structural or methodical features may be shown in specific arrangements and/or orderings. However, such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a manner and/or order different from that shown in the figures. Moreover, the inclusion of a structural or methodical feature in a particular figure does not imply that such feature is required in all embodiments; in some embodiments, it may not be included, or it may be combined with other features.
The embodiments of the present application provide a data processing scheme that can acquire a gaze point of a user according to eyeball data of the user and, if the user's gaze point is not on the screen or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, broadcast the content of a notification by voice.
In the embodiment of the application, the fixation point can be used for representing the position of the sight line of the user. The screen may refer to a screen of the terminal. The notification is a notification with a global effect and is displayed at the top of the screen. The notification may be generated for an information source such as information received by the application, information pushed by the application, or a task status message. For example, the information received by the application may include: information sent by the communication opposite terminal; the information pushed by the application may include: reminding information pushed by the navigation application, such as 'please pay attention to safe driving in fatigue driving', 'you are driving for 4 hours continuously and advise you to rest in a nearby service area', and the like; the task status message may include: download progress, or installation progress of the application, etc. It is understood that the embodiment of the present application does not impose a limitation on the specific information source corresponding to the notification.
The embodiments of the present application can judge whether the user's attention is on the screen by judging whether the gaze point is on the screen and how long the gaze point dwells on the screen. Specifically, if the dwell time of the gaze point on the screen exceeds the first time threshold, the user's attention can be considered to be on the screen; conversely, if the gaze point is not on the screen, or the dwell time of the gaze point on the screen does not exceed the first time threshold, the user's attention can be considered not to be on the screen.
The embodiments of the present application can determine how to process the notification according to the result of judging whether the user's attention is on the screen, which makes the processing of notifications more appropriate. Specifically, if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed the first time threshold, the user's attention can be considered not to be on the screen; in this case, the content of the notification is broadcast by voice, which can, to a certain extent, prevent the user from missing the information corresponding to the notification.
Application scenarios of the embodiments of the present application may include driving scenarios, work scenarios, and the like. Taking a driving scenario as an example: to drive safely, the user needs to observe the road conditions around the vehicle, so the user's attention is on the driving task rather than on the terminal screen, and information corresponding to a notification is easily missed. In the embodiments of the present application, when the user's attention is not on the screen, the content of the notification is broadcast by voice, which can, to a certain extent, prevent the user from missing the information corresponding to the notification. It can be understood that the driving scenario and the work scenario are only examples; a person skilled in the art may apply the embodiments of the present application to any application scenario according to actual requirements, and the embodiments of the present application are not limited to a specific application scenario.
The data processing method provided by the embodiment of the present application can be applied to the application environment shown in fig. 1, as shown in fig. 1, the client 100 and the server 200 are located in a wired or wireless network, and the client 100 and the server 200 perform data interaction through the wired or wireless network.
Optionally, the client may run on a terminal; for example, the client may be an APP running on the terminal, such as a navigation APP, an e-commerce APP, an instant messaging APP, an input method APP, or an APP provided by the operating system, and the embodiments of the present application do not limit the specific APP corresponding to the client. Optionally, the terminal may specifically include, but is not limited to: smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, vehicle-mounted terminals, desktop computers, set-top boxes, smart televisions, wearable devices, and the like. It is to be understood that the embodiments of the present application are not limited to specific devices.
Examples of the vehicle-mounted terminal may include a HUD (Head Up Display). A HUD is generally installed in front of the driver and can provide necessary driving information during driving, such as vehicle speed, fuel consumption, navigation, and even mobile phone incoming calls and message reminders; in other words, the HUD integrates multiple functions, making it convenient for the driver to keep paying attention to the road conditions.
Method embodiment one
Referring to fig. 2, a flowchart illustrating steps of a first embodiment of a data processing method according to the present application is shown, which may specifically include the following steps:
Step 201: acquiring a gaze point of a user according to eyeball data of the user;
Step 202: if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of the notification by voice.
Regarding step 201, when the user observes external objects through the eyes, the eyeballs are usually in motion, for example moving upward, downward, left, or right, the eyes opening and closing, or looking straight ahead.
The gaze point may be used to characterize the position of the user's line of sight. In practical applications, the gaze point may include the direction of the user's line of sight in three-dimensional space, for example represented as a vector.
The embodiments of the present application can obtain the user's gaze point using an eyeball tracking method. Eyeball tracking methods specifically include: tracking based on characteristic changes of the eyeball and the area around the eyeball; tracking based on changes in the iris angle; or actively projecting a beam, such as infrared light, onto the iris, extracting features, and tracking based on the extracted features.
An example of obtaining the user's gaze point based on the user's eyeball data is provided here. This example actively projects a beam, such as infrared light, onto the iris to extract features. Specifically, a low-power infrared beam can illuminate the user's eyeball, a sensor can capture the light reflected by parts such as the pupil, iris, and cornea, and the user's gaze point can be determined after analysis by an appropriate algorithm.
In an embodiment of the present application, an eyeball tracking apparatus is configured to obtain the user's gaze point according to the user's eyeball data. The eyeball tracking apparatus may include an illumination light source and a camera module. The illumination light source may include an infrared or near-infrared LED (light emitting diode) or LED group for illuminating the eye and projecting a fixed pattern (usually a simple pattern such as a circle or a trapezoid, or a slightly more complex pattern) onto the eye. The camera module is used to photograph parts of the eye such as the pupil, iris, and cornea, capturing the light reflected by these parts, so as to obtain a vector connecting the pupil center and the center of the corneal reflection spot; the user's gaze point can then be calculated from this vector in combination with an algorithm.
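Purely as an illustration of the vector computation described above, the sketch below maps the pupil-center-to-corneal-reflection vector to a gaze direction through a pre-calibrated linear mapping; the function names and the linear calibration model are assumptions made for illustration, not details given by the patent.

```python
import numpy as np

def estimate_gaze_direction(pupil_center, glint_center, calibration_matrix):
    """Toy pupil-center/corneal-reflection sketch (assumed model).

    pupil_center, glint_center: 2-D image coordinates of the pupil center
    and the corneal reflection spot; calibration_matrix: a user-specific
    3x2 linear mapping obtained from a prior calibration step.
    """
    pccr = np.asarray(pupil_center, float) - np.asarray(glint_center, float)
    gaze = calibration_matrix @ pccr           # 3-D gaze direction
    return gaze / np.linalg.norm(gaze)         # normalize to a unit vector
```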
The eyeball tracking apparatus may be provided in a separate terminal, such as a head-mounted terminal. The eyeball tracking apparatus can be arranged inside the head-mounted terminal; if the head-mounted terminal is a pair of smart glasses, the eyeball tracking apparatus can be arranged within the frame at positions close to the eyes, for example at the positions in the frame corresponding to the upper-left and upper-right corners of the eyes. Alternatively, the eyeball tracking apparatus may be arranged outside the head-mounted terminal and near the eyes.
Of course, the eyeball tracking apparatus may also be integrated with the terminal that executes the method of the embodiments of the present application; if that terminal is a vehicle-mounted terminal, the eyeball tracking apparatus may be arranged inside or outside the vehicle-mounted terminal. It is to be understood that the embodiments of the present application do not limit the specific configuration of the eyeball tracking apparatus.
Regarding step 202, if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed the first time threshold, it may be determined that the user's attention is not on the screen; in this case, the content of the notification is broadcast by voice, which can, to a certain extent, prevent the user from missing the information corresponding to the notification.
In practical applications, TTS (text-to-speech) technology may be used to convert the content of the notification into target speech and play the target speech.
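A minimal sketch of this step follows, feeding the notification text to an offline TTS engine; pyttsx3 is used here only as one readily available engine and is an assumption, not an engine named by the patent.

```python
import pyttsx3  # assumption: any TTS engine with a comparable interface would do

def broadcast_notification(text: str) -> None:
    """Convert the notification content to speech and play it."""
    engine = pyttsx3.init()
    engine.say(text)      # queue the notification text for synthesis
    engine.runAndWait()   # block until playback finishes
```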
In one embodiment of the present application, whether the user's gaze point is on the screen may be determined as follows. Optionally, the intersection of the vector corresponding to the gaze point with the plane in which the screen lies may be considered: if the intersection point is located on the screen, it may be determined that the user's gaze point is on the screen; otherwise, if the intersection point is not located on the screen, it may be determined that the user's gaze point is not on the screen. In practical applications, the set of coordinate points corresponding to the screen may be determined; this may be the set of coordinate points of the rectangle corresponding to the screen, where the rectangle may be determined by the size of the screen or the size of the display area of the screen (the area remaining after the screen bezel is excluded). It can be understood that the embodiments of the present application do not limit the specific process of determining whether the user's gaze point is on the screen.
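A minimal geometric sketch of this test, under the assumption that the screen rectangle is described by one corner and two orthogonal edge vectors in the same coordinate frame as the gaze data; all names here are illustrative:

```python
import numpy as np

def gaze_on_screen(eye_pos, gaze_dir, screen_origin, screen_u, screen_v):
    """Intersect the gaze ray with the screen plane and check whether the
    hit point lies inside the screen rectangle (screen_u and screen_v are
    assumed to be orthogonal edge vectors of the rectangle)."""
    eye_pos, gaze_dir = np.asarray(eye_pos, float), np.asarray(gaze_dir, float)
    screen_origin = np.asarray(screen_origin, float)
    normal = np.cross(screen_u, screen_v)
    denom = np.dot(gaze_dir, normal)
    if abs(denom) < 1e-9:            # gaze is parallel to the screen plane
        return False
    t = np.dot(screen_origin - eye_pos, normal) / denom
    if t <= 0:                       # intersection is behind the user
        return False
    hit = eye_pos + t * gaze_dir     # intersection with the screen plane
    rel = hit - screen_origin
    u = np.dot(rel, screen_u) / np.dot(screen_u, screen_u)
    v = np.dot(rel, screen_v) / np.dot(screen_v, screen_v)
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0
```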
The first time threshold may be used to characterize the gaze duration that corresponds to attention being focused on an object. Specifically, if the gaze time exceeds the first time threshold, the user's attention can be considered to be focused on the object; otherwise, if the gaze time does not exceed the first time threshold, the user's attention can be considered not to be focused on the object. In the embodiments of the present application, the object may be the screen. For example, if the user concentrates on driving and only glances at the screen, the user's gaze time on the screen does not exceed the first time threshold, and the user's attention may therefore be considered to be off the screen.
The first time threshold can be determined by a person skilled in the art, or by the user, according to actual application requirements. Optionally, a settings interface may be provided to the user, and a first time threshold set through that interface may be received; in this way a first time threshold that meets the user's personalized requirements can be obtained, which can further improve the accuracy of determining whether the user's gaze point is on the screen. For example, the first time threshold may be 500 ms (milliseconds); it is understood that the embodiments of the present application do not limit the specific value of the first time threshold.
In the embodiment of the application, the notification is a notification with a global effect and is displayed at the top of the screen. The notification may be generated for information sources such as information received by the application, information pushed by the application, or task status messages. The notification in step 202 may be a pending notification.
In an alternative embodiment of the present application, step 201 may not have a trigger condition, but may be performed periodically, that is, the gaze point of the user may be acquired periodically according to eyeball data of the user.
In another optional embodiment of the present application, step 201 may have a trigger condition, and the trigger condition may specifically be: an unprocessed notification is detected. That is, in step 201, the gaze point of the user may be obtained according to the eyeball data of the user when the trigger condition is met.
If the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed the first time threshold, the processing of the notification in the embodiments of the present application may include broadcasting the content of the notification by voice. It can be understood that, in this case, the processing of the notification may further include displaying the content of the notification for the user to view, and the like. The embodiments of the present application do not limit the specific manner of processing the notification.
In summary, in the data processing method of the embodiments of the present application, if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed the first time threshold, the user's attention can be considered not to be on the screen; in this case, the content of the notification is broadcast by voice, which can, to a certain extent, prevent the user from missing the information corresponding to the notification.
Application scenarios of the embodiments of the present application may include driving scenarios, work scenarios, and the like. Taking a driving scenario as an example: to drive safely, the user needs to observe the road conditions around the vehicle, so the user's attention is on the driving task rather than on the terminal screen, and information corresponding to a notification is easily missed; in the embodiments of the present application, when the user's attention is not on the screen, the content of the notification is broadcast by voice, which can, to a certain extent, prevent the user from missing that information.
Method embodiment two
Referring to fig. 3, a flowchart of steps of a second embodiment of the data processing method in the present application is shown, which specifically includes the following steps:
Step 301: acquiring a gaze point of a user according to eyeball data of the user;
Step 302: if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, starting to broadcast the content of the notification by voice;
with respect to the first method embodiment shown in fig. 2, the method in the embodiment of the present application may further include:
Step 303: after the voice broadcast of the notification content has started, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, stopping the voice broadcast of the notification content after the content preceding a preset symbol in the notification content has been broadcast.
After the voice broadcast of the notification content has started, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, the user's attention can be considered to be on the screen. Since the user can read the content of the notification when his or her attention is on the screen, the embodiments of the present application can stop the voice broadcast once the content preceding the preset symbol has been broadcast, thereby avoiding duplication between the voice broadcast and the user's reading.
In addition, before the voice broadcast is stopped, the content preceding the preset symbol is broadcast in full; since the preset symbol serves to segment the text, this can improve the completeness of the broadcast content to a certain extent.
Optionally, the preset symbol may include: punctuation marks, special symbols (such as directional arrows), unit symbols, and the like. Punctuation marks are marks that assist written language and are used to express pauses and tone and to indicate the nature and function of words. Because the preset symbol segments the text, the content preceding it is relatively complete, which can improve the completeness of the voice-broadcast content to a certain extent.
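A minimal sketch of the truncation rule of step 303 follows: given the notification text and the index the TTS engine has reached when the gaze settles on the screen, it returns the boundary just past the next preset symbol, so the broadcast stops on a complete segment. The symbol set and the names are illustrative assumptions.

```python
PRESET_SYMBOLS = set("，。！？；：,.!?;:→")  # punctuation and special symbols (example set)

def truncation_point(text: str, spoken_upto: int) -> int:
    """Return the index just past the next preset symbol at or after
    spoken_upto; if none remains, finish the whole text."""
    for i in range(spoken_upto, len(text)):
        if text[i] in PRESET_SYMBOLS:
            return i + 1
    return len(text)

# usage sketch: when the dwell time exceeds the first time threshold
# mid-broadcast, speak text[:truncation_point(text, current_index)] and stop.
```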
Method embodiment three
Referring to fig. 4, a flowchart illustrating steps of a third embodiment of the data processing method in the present application is shown, which may specifically include the following steps:
Step 401: acquiring a gaze point of a user according to eyeball data of the user;
Step 402: if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, starting to broadcast the content of the notification by voice;
with respect to the first method embodiment shown in fig. 2, the method of the embodiment of the present application may further include:
Step 403: after the voice broadcast of the notification content has started, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, displaying an operation interface corresponding to the notification on the screen.
After the voice broadcast of the notification content has started, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, the user's attention can be considered to be on the screen. Since the user is in a position to operate the screen when his or her attention is on it, an operation interface corresponding to the notification may be displayed on the screen so that the user can respond to the notification through the operation interface.
Optionally, the operation interface may be of the control type. A control may refer to a component that provides or implements user-interface functionality; a control packages data and methods and may have its own properties and methods. In practical applications, an operation interface of the control type (an operation control for short) can be displayed at any position on the screen so that the user can respond to the notification through it. For example, when the screen is a touch screen, the user may tap the operation control to trigger it. Optionally, the operation interface may be displayed on the right side of the screen; of course, the embodiments of the present application do not limit the specific position of the operation interface.
A person skilled in the art can determine the number and functions of the operation interfaces to be displayed according to actual application requirements. In an application example of the present application, assuming the notification carries information originating from a navigation application, the number of corresponding operation interfaces may be 2, and their functions may be a navigation function and an ignore function, respectively. If the user triggers the operation interface corresponding to the navigation function, the navigation interface is entered; if the user triggers the operation interface corresponding to the ignore function, processing of the notification may be stopped, for example by stopping the display of the notification content or stopping its voice broadcast.
Method embodiment four
Referring to fig. 5, a flowchart of the steps of a fourth embodiment of a data processing method of the present application is shown, which may specifically include the following steps:
Step 501: acquiring a gaze point of a user according to eyeball data of the user;
Step 502: if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, starting to broadcast the content of the notification by voice;
with respect to the first method embodiment shown in fig. 2, the method of the embodiment of the present application may further include:
Step 503: after the voice broadcast of the notification content is completed, listening for a voice instruction from the user;
Step 504: responding to the notification according to the voice instruction.
After the voice broadcast of the notification content is completed, the embodiments of the present application can start a listening mode for capturing the user's voice instruction, so that the user can reply to the notification by voice and thus respond without diverting attention. For example, in a driving scenario, the user can listen to the broadcast to learn the content of the notification and respond to it by voice while keeping attention on driving, which avoids distraction and can improve driving safety.
In practical applications, speech recognition technology may be used to convert the user's voice instruction into a text instruction and to respond to the notification according to the text instruction.
In an application example of the present application, assuming the notification carries information originating from a navigation application, for example notification content such as "Fatigue driving, please pay attention to safe driving", the response instructions for the notification may include "navigate", "ignore", and the like, where "navigate" is used to enter the navigation interface and "ignore" may be used to stop processing the notification. It can be understood that a person skilled in the art may determine the response instructions corresponding to a notification according to actual application requirements, and the user may respond to the notification with a voice instruction matching a response instruction; the embodiments of the present application do not limit the specific response instructions.
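The dispatch from recognized text to a notification response could look like the sketch below; the command table and handler names are hypothetical and only illustrate the mapping described above.

```python
def respond_to_notification(recognized_text: str, notification) -> None:
    """Map a recognized voice instruction to a response action."""
    command = recognized_text.strip().lower()
    if command == "navigate":
        notification.open_navigation_interface()  # hypothetical handler
    elif command == "ignore":
        notification.stop_processing()            # hypothetical handler
    # otherwise: unrecognized instruction, keep listening or prompt again
```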
Method embodiment five
Referring to fig. 6, a flowchart of the steps of a fifth embodiment of a data processing method of the present application is shown, which may specifically include the following steps:
Step 601: acquiring a gaze point of a user according to eyeball data of the user;
Step 602: if the user's gaze point is not on the screen, playing an alert tone corresponding to the notification;
Step 603: if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of the notification by voice.
With respect to the first method embodiment shown in fig. 2, the processing of the notification in this embodiment may include: first, step 602 is executed, playing an alert tone corresponding to the notification when the user's gaze point is not on the screen; the alert tone can prompt the user that the notification has arrived. Then, step 603 is executed to broadcast the content of the notification by voice, so that the user listens to it when the user's gaze point is not on the screen or the dwell time of the user's gaze point on the screen does not exceed the first time threshold.
In summary, the alert tone played first can prompt the user that a notification has arrived, allowing the user to prepare for the upcoming voice broadcast; broadcasting the notification content on that basis can improve the efficiency with which the user obtains the content from the voice.
Method embodiment six
Referring to fig. 7, a flowchart illustrating steps of a sixth embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
Step 701: acquiring a gaze point of a user according to eyeball data of the user;
Step 702: if the dwell time of the user's gaze point on the screen exceeds a first time threshold, not broadcasting the content of the notification by voice, and displaying an operation interface corresponding to the notification on the screen.
In the embodiments of the present application, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, the user's attention can be considered to be on the screen; in this case, the user can be assumed to be able to read, so the notification content need not be broadcast by voice, which avoids, to a certain extent, duplication between the voice broadcast and the user's reading.
Further, since the user is in a position to operate the screen when his or her attention is on it, an operation interface corresponding to the notification may be displayed on the screen so that the user can respond to the notification through it. For the operation interface, reference may be made to the third method embodiment shown in fig. 4, which is not repeated here.
Method embodiment seven
Referring to fig. 8, a flowchart illustrating steps of a seventh embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
Step 801: acquiring a gaze point of a user according to eyeball data of the user;
Step 802: if a notification is detected, judging whether the user's gaze point is on the screen; if yes, go to step 803; otherwise, go to step 804;
Step 803: judging whether the dwell time of the user's gaze point on the screen exceeds a first time threshold; if not, go to step 804; otherwise, go to step 808;
Step 804: starting to broadcast the content of the notification by voice;
Step 805: after the voice broadcast of the notification content has started, judging whether the dwell time of the user's gaze point on the screen exceeds the first time threshold; if yes, go to step 806; otherwise, go to step 807;
Step 806: after the content preceding the preset symbol in the notification content has been broadcast, stopping the voice broadcast and displaying an operation interface corresponding to the notification on the screen;
Step 807: continuing the voice broadcast; after the broadcast of the notification content is completed, listening for a voice instruction from the user and responding to the notification according to the voice instruction;
Step 808: not broadcasting the content of the notification by voice, and displaying an operation interface corresponding to the notification on the screen.
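Purely as an illustration, the decision flow of steps 801 to 808 can be condensed into the sketch below; the tracker, TTS, and UI objects are assumed interfaces rather than APIs defined by the patent, the threshold value is only an example, and respond_to_notification and listen_for_voice_instruction refer to the earlier voice-response sketch.

```python
def handle_notification(notification, tracker, tts, ui, first_threshold_s=0.5):
    """Consolidated sketch of steps 801-808 (assumed interfaces)."""
    gaze = tracker.current_gaze()                     # step 801
    if gaze.on_screen and gaze.dwell_time_s > first_threshold_s:
        ui.show_operation_interface(notification)     # step 808: user is reading
        return
    tts.start(notification.content)                   # step 804
    while tts.is_speaking():                          # step 805: keep checking
        gaze = tracker.current_gaze()
        if gaze.on_screen and gaze.dwell_time_s > first_threshold_s:
            tts.stop_after_next_preset_symbol()       # step 806
            ui.show_operation_interface(notification)
            return
    # step 807: broadcast finished; listen for and dispatch a voice instruction
    respond_to_notification(listen_for_voice_instruction(), notification)
```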
In summary, the embodiments of the present application can determine how to process a notification according to the result of judging whether the user's attention is on the screen, which makes the processing of notifications more appropriate.
According to one embodiment, if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed the first time threshold, the user's attention can be considered not to be on the screen; in this case, the content of the notification is broadcast by voice, which can, to a certain extent, prevent the user from missing the information corresponding to the notification.
According to another embodiment, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, the user's attention can be considered to be on the screen; in this case, the user can be assumed to be able to read, so the notification content need not be broadcast by voice, avoiding, to a certain extent, duplication between the voice broadcast and the user's reading.
Further, since the user is in a position to operate the screen when his or her attention is on it, an operation interface corresponding to the notification may be displayed on the screen so that the user can respond to the notification through it.
When applied to a driving scenario, if the driver's attention is on the road ahead rather than on the screen, the voice broadcast allows the driver to stay focused without missing the information corresponding to the notification.
In addition, the embodiments of the present application intelligently judge from eyeball data whether the user's attention is on the screen, that is, whether the user is looking at the screen, and obtain a corresponding judgment result. This judgment result can be used to decide whether the voice broadcast needs to be started or stopped, reducing unnecessary auditory interference; it is also used to decide whether an operation interface needs to be provided, so that the user can respond to the notification more quickly.
Method embodiment eight
Referring to fig. 9, a flowchart illustrating steps of an eighth embodiment of the data processing method of the present application is shown, which may specifically include the following steps:
Step 901: acquiring a gaze point of a user according to eyeball data of the user;
Step 902: if the user's gaze point is not on the screen, or the dwell time of the user's gaze point on the screen does not exceed a first time threshold, broadcasting the content of the notification by voice;
with respect to the first method embodiment shown in fig. 2, the method of the embodiment of the present application may further include:
Step 903: if the distance between the user's arm and the screen does not exceed a distance threshold and the hover time of the arm exceeds a second time threshold, switching the interaction mode to the gesture interaction mode.
The terminal of the embodiments of the present application can support multiple interaction modes, specifically including a touch mode and a gesture interaction mode.
The touch mode may support interaction with the terminal through touch, a mouse, or other control means. However, the touch mode requires the user's finger or a mouse to trigger the operation interface on the screen precisely (for example, the user presses a specific control to trigger it), so it is relatively difficult to operate and demands more of the user's attention; in a driving scenario this extra attention cost can affect safe driving.
In the field of intelligent control, gesture interaction is an important control method. Generally, within the visual range of a terminal with vision capability (e.g., a camera), the user makes gestures of specific shapes that have been associated in advance with corresponding operations; when the terminal recognizes such a gesture, it looks up the corresponding operation and can execute it automatically.
Gestures supported by gesture interaction may include gestures in different directions, with different rotation angles, or with different arcs, and so on; for example, different orientations of the palm can represent different gestures. Because the gesture interaction mode executes operations through the user's gestures, without requiring the user to press an operation interface on the screen, it can reduce the difficulty of operation; in particular, it can reduce attention consumption, improving driving safety in a driving scenario.
The default interaction mode may be the touch mode. To reduce operation difficulty and attention consumption, the embodiments of the present application can switch the interaction mode to the gesture interaction mode according to a switching operation by the user, that is, switch the interaction mode from the touch mode to gesture operation.
The switching operation in the embodiments of the present application may specifically be: the distance between the user's arm and the screen does not exceed the distance threshold, and the hover time of the arm exceeds the second time threshold; that is, the user's arm is close to the screen and hovers for a sustained period. The distance threshold and the second time threshold may be determined by a person skilled in the art, or by the user, according to actual application requirements; the embodiments of the present application do not limit their specific values.
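The switching condition itself is a simple predicate; a sketch with illustrative threshold values follows (the patent does not fix specific values):

```python
def should_switch_to_gesture_mode(arm_distance_m: float, hover_time_ms: float,
                                  distance_threshold_m: float = 0.15,
                                  second_threshold_ms: float = 500.0) -> bool:
    """Step 903 sketch: the arm is close to the screen and has hovered long
    enough. Both threshold values are illustrative assumptions."""
    return (arm_distance_m <= distance_threshold_m
            and hover_time_ms > second_threshold_ms)
```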
In an optional embodiment of the present application, switching the interaction mode to the gesture interaction mode in step 903 may specifically include: displaying the content of the notification in a middle area of the screen, and displaying an operation prompt icon for the notification on at least one side of the middle area.
Optionally, the content of the notification may be displayed on a panel in the middle area, and the operation prompt icon may be displayed on at least one side (e.g., at least one of the upper, lower, left, and right sides) of the middle area. The operation prompt icon can indicate the corresponding operation function, and its position can indicate the orientation of the corresponding gesture, so that the user can produce the gesture for an operation from the position of the icon; this can reduce the difficulty of memorizing gestures. In an application example of the present application, the operation prompt icon corresponding to a "close" operation may be "×"; the icon for another operation appears in the original as an inline image (Figure BDA0001699988010000161), and so on. It is to be understood that the operation prompt icon may also be replaced by operation prompt information in text form; that is, the content of the notification may be displayed in the middle area of the screen, and operation prompt information for the notification may be displayed on at least one side of the middle area to indicate the orientation of the corresponding gesture and the operation function it triggers.
It should be noted that, regardless of the state of the terminal (for example, whether the screen is showing the content of a notification or a navigation interface), the interaction mode can be switched to the gesture interaction mode whenever the switching operation is detected. Therefore, the embodiments of the present application do not limit the execution order between step 903 and steps 901 and 902; for example, step 903 may be executed before or after step 901, or before or after step 902.
Referring to fig. 10, a schematic diagram of switching the interaction mode to the gesture interaction mode according to an embodiment of the present application is shown. In the touch mode, the content 1002 of a notification and its corresponding operation interface 1003 may be displayed on a screen 1001. In this situation, if it is detected that the palm is in front of the screen and the hover time exceeds 500 ms, the switching operation can be considered detected, and the interaction mode can therefore be switched to the gesture interaction mode. Specifically, the notification content 1002, an operation prompt icon 1004, and an operation prompt icon 1005 may be displayed on the screen 1001, with icons 1004 and 1005 located on the left and right sides of the notification content 1002, respectively, prompting the user to trigger the operation corresponding to icon 1004 with a leftward gesture and the operation corresponding to icon 1005 with a rightward gesture. That is, the orientation of the gesture matches the position of the corresponding operation prompt icon, reducing the difficulty of memorizing gestures.
It should be noted that if the palm is in front of the screen but the hover time does not exceed 500 ms, or if the palm is not in front of the screen, the switching operation can be considered not detected, and the interaction mode is therefore not switched.
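The direction matching described for FIG. 10 amounts to a small lookup from gesture orientation to the prompt icon on that side; the table below is an illustrative assumption:

```python
# gesture orientation -> operation associated with the icon on that side
GESTURE_ACTIONS = {
    "left":  "operation of prompt icon 1004 (left of the notification)",
    "right": "operation of prompt icon 1005 (right of the notification)",
}

def operation_for_gesture(direction: str) -> str:
    """Return the operation whose prompt icon lies in the gesture's direction."""
    return GESTURE_ACTIONS.get(direction, "no-op: unrecognized direction")
```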
In addition, in other embodiments of the present application, the gesture interaction mode may be exited according to an exit operation. A person skilled in the art, or the user, may define the exit operation according to actual application requirements; for example, the exit operation may be a change from an open palm to a clenched fist.
In summary, the embodiments of the present application can support switching from the touch mode to the gesture interaction mode; specifically, the gesture interaction mode can be entered through gesture recognition, making it convenient for the user to trigger the function corresponding to a notification through a gesture and reducing the distraction of the driver's attention.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but a person skilled in the art should understand that the embodiments of the present application are not limited by the described order of acts, since some steps may be performed in other orders or simultaneously. Further, a person skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily required by the embodiments of the present application.
The embodiment of the application also provides a data processing device.
Referring to fig. 11, a block diagram of a data processing apparatus according to an embodiment of the present application is shown, which may specifically include the following modules:
a gaze point acquisition module 1101, configured to acquire a gaze point of a user according to eyeball data of the user; and
a notification content broadcasting module 1102, configured to broadcast the content of a notification by voice if the user's gaze point is not on the screen or the dwell time of the user's gaze point on the screen does not exceed a first time threshold.
Optionally, the apparatus may further include:
a broadcast stopping module, configured to, after the notification content broadcasting module starts the voice broadcast of the notification content, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, stop the voice broadcast of the notification content once the content preceding the preset symbol in the notification content has been broadcast.
Optionally, the apparatus may further include:
a first operation interface display module, configured to display an operation interface corresponding to the notification on the screen if the dwell time of the user's gaze point on the screen exceeds the first time threshold after the notification content broadcasting module starts the voice broadcast of the notification content.
Optionally, the apparatus may further include:
a voice listening module, configured to listen for a voice instruction from the user after the voice broadcast of the notification content is completed; and
a response module, configured to respond to the notification according to the voice instruction.
Optionally, the apparatus may further include:
an alert tone playing module, configured to play an alert tone corresponding to the notification if the user's gaze point is not on the screen, before the notification content broadcasting module broadcasts the notification content by voice.
Optionally, the apparatus may further include:
a second operation interface display module, configured to, if the dwell time of the user's gaze point on the screen exceeds the first time threshold, not broadcast the content of the notification by voice and display an operation interface corresponding to the notification on the screen.
Optionally, the apparatus may further include:
a mode switching module, configured to switch the interaction mode to the gesture interaction mode if the distance between the user's arm and the screen does not exceed a distance threshold and the hover time of the arm exceeds a second time threshold.
Optionally, the mode switching module may include:
the display sub-module is used for displaying the content of the notification in the middle area of the screen and displaying the operation prompt icon of the notification on at least one side of the middle area.
Since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Embodiments of the application can be implemented as a system or apparatus employing any suitable hardware and/or software for the desired configuration. Fig. 12 schematically illustrates an example apparatus 1300 that can be used to implement various embodiments described herein.
For one embodiment, fig. 12 illustrates an example apparatus 1300, which may comprise: one or more processors 1302, a system control module (chipset) 1304 coupled to at least one of the processors 1302, system memory 1306 coupled to the system control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the system control module 1304, one or more input/output devices 1310 coupled to the system control module 1304, and a network interface 1312 coupled to the system control module 1304. The system memory 1306 may include instructions 1362 executable by the one or more processors 1302.
Processor 1302 may include one or more single-core or multi-core processors, and processor 1302 may include any combination of general-purpose processors or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1300 can be a server, a target device, a wireless device, etc., as described in embodiments herein.
In some embodiments, apparatus 1300 may include one or more machine-readable media (e.g., system memory 1306 or NVM/storage 1308) having instructions thereon and one or more processors 1302, which in combination with the one or more machine-readable media, are configured to execute the instructions to implement the modules included in the foregoing apparatus to perform the actions described in embodiments of the present application.
System control module 1304 for one embodiment may include any suitable interface controller to provide any suitable interface to at least one of processors 1302 and/or any suitable device or component in communication with system control module 1304.
System control module 1304 for one embodiment may include one or more memory controllers to provide an interface to system memory 1306. The memory controller may be a hardware module, a software module, and/or a firmware module.
System memory 1306 for one embodiment may be used to load and store data and/or instructions 1362. For one embodiment, system memory 1306 may include any suitable volatile memory, such as suitable DRAM (dynamic random access memory). In some embodiments, system memory 1306 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
System control module 1304 for one embodiment may include one or more input/output controllers to provide an interface to NVM/storage 1308 and input/output device(s) 1310.
NVM/storage 1308 for one embodiment may be used to store data and/or instructions 1382. NVM/storage 1308 may include any suitable non-volatile memory (e.g., flash memory, etc.) and/or may include any suitable non-volatile storage device(s), e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives, etc.
The NVM/storage 1308 may include storage resources that are physically part of the device on which the apparatus 1300 is installed, or storage resources that are accessible by the device without necessarily being part of it. For example, the NVM/storage 1308 may be accessed over a network via the network interface 1312 and/or through the input/output devices 1310.
Input/output device(s) 1310 for one embodiment may provide an interface for apparatus 1300 to communicate with any other suitable device, and input/output devices 1310 may include communication components, audio components, sensor components, and so forth.
Network interface 1312 of one embodiment may provide an interface for device 1300 to communicate with one or more networks and/or with any other suitable device, and device 1300 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example to access a communication-standard-based wireless network such as WiFi, 2G, or 3G, or a combination thereof.
For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers (e.g., memory controllers) of the system control module 1304. For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers of the system control module 1304 to form a System in Package (SiP). For one embodiment, at least one of the processors 1302 may be integrated on the same die as logic for one or more controllers of the system control module 1304. For one embodiment, at least one of the processors 1302 may be integrated on the same chip with logic for one or more controllers of the system control module 1304 to form a system on a chip (SoC).
In various embodiments, apparatus 1300 may include, but is not limited to: a computing device such as a desktop computing device or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, apparatus 1300 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1300 may include one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application-specific integrated circuits (ASICs), and speakers.
If the display includes a touch panel, the display screen may be implemented as a touch screen display to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. A touch sensor may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation.
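As a small illustration of using the duration and pressure reported by such touch sensors, one might classify a touch event as follows; the TouchEvent structure and the thresholds are assumptions of this example.

    # Sketch: classify a touch event from its duration and pressure; the
    # event fields and threshold values are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class TouchEvent:
        duration_s: float
        pressure: float  # normalized to 0.0-1.0

    def classify(event: TouchEvent) -> str:
        if event.pressure > 0.8:
            return "force press"
        if event.duration_s > 0.5:
            return "long press"
        return "tap"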
Embodiments of the present application further provide a non-volatile readable storage medium in which one or more modules (programs) are stored; when the one or more modules are applied to an apparatus, they may cause the apparatus to execute the instructions of the methods in the embodiments of the present application.
In one example, an apparatus is provided, comprising: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the apparatus to perform a method as in the embodiments of the present application, which may include the method shown in any one of figs. 2 to 9.
In one example, one or more machine-readable media are also provided, having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform a method as in the embodiments of the application, which may include the method shown in any one of figs. 2 to 9.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The data processing method, data processing apparatus, device, and machine-readable medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are intended only to help understand the method of the present application and its core ideas. Meanwhile, a person skilled in the art may, following the ideas of the present application, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (16)

1. A data processing method, comprising:
acquiring a gaze point of a user according to eyeball data of the user;
if the gaze point of the user is not on the screen or the stay time of the gaze point of the user on the screen does not exceed a first time threshold, performing voice broadcast of the notified content; and
if the distance between an arm of the user and the screen does not exceed a distance threshold and the hover time of the arm exceeds a second time threshold, switching the interaction mode to a gesture interaction mode.
2. The method of claim 1, further comprising:
after the voice broadcasting of the notified content has started, if the stay time of the gaze point of the user on the screen exceeds the first time threshold, stopping the voice broadcast of the notified content once the voice broadcast of the content preceding a preset symbol in the notified content is completed.
3. The method of claim 1, further comprising:
after the voice broadcasting of the notified content has started, if the stay time of the gaze point of the user on the screen exceeds the first time threshold, displaying an operation interface corresponding to the notification on the screen.
4. The method of claim 1, further comprising:
after the voice broadcast of the notified content is finished, monitoring a voice instruction of the user;
and responding to the notification according to the voice instruction.
5. The method according to any one of claims 1 to 4, wherein before the voice broadcasting of the content of the notification, the method further comprises:
if the gaze point of the user is not on the screen, playing a prompt tone corresponding to the notification.
6. The method according to any one of claims 1 to 4, further comprising:
if the stay time of the gaze point of the user on the screen exceeds the first time threshold, skipping the voice broadcast of the content of the notification and displaying an operation interface corresponding to the notification on the screen.
7. The method according to claim 1, wherein the switching the interaction mode to the gesture interaction mode comprises:
displaying the content of the notification in a middle area of a screen, and displaying an operation prompt icon of the notification on at least one side of the middle area.
8. A data processing apparatus, comprising:
a gaze point acquisition module, configured to acquire a gaze point of a user according to eyeball data of the user;
a notification content broadcasting module, configured to perform voice broadcasting of the notified content if the gaze point of the user is not on the screen or the stay time of the gaze point of the user on the screen does not exceed a first time threshold; and
a mode switching module, configured to switch the interaction mode to a gesture interaction mode if the distance between an arm of the user and the screen does not exceed a distance threshold and the hover time of the arm exceeds a second time threshold.
9. The apparatus of claim 8, further comprising:
a broadcast stopping module, configured to, after the notification content broadcasting module has started voice broadcasting of the notified content and if the stay time of the gaze point of the user on the screen exceeds the first time threshold, stop the voice broadcast of the notified content once the content preceding a preset symbol in the notified content has been broadcast.
10. The apparatus of claim 8, further comprising:
a first operation interface display module, configured to display an operation interface corresponding to the notification on the screen if, after the notification content broadcasting module has started voice broadcasting of the notified content, the stay time of the gaze point of the user on the screen exceeds the first time threshold.
11. The apparatus of claim 8, further comprising:
a voice monitoring module, configured to monitor a voice instruction of the user after the voice broadcast of the notified content is finished; and
a response module, configured to respond to the notification according to the voice instruction.
12. The apparatus of any of claims 8 to 11, further comprising:
a prompt tone playing module, configured to play a prompt tone corresponding to the notification if the gaze point of the user is not on the screen, before the notification content broadcasting module performs voice broadcasting of the notified content.
13. The apparatus of any of claims 8 to 11, further comprising:
a second operation interface display module, configured to skip voice broadcasting of the content of the notification and display an operation interface corresponding to the notification on the screen if the stay time of the gaze point of the user on the screen exceeds the first time threshold.
14. The apparatus of claim 8, wherein the mode switching module comprises:
a display sub-module, configured to display the content of the notification in a middle area of the screen and to display an operation prompt icon of the notification on at least one side of the middle area.
15. An apparatus for notification processing, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the method recited by one or more of claims 1-7.
16. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method recited by one or more of claims 1-7.
CN201810629040.0A 2018-06-19 2018-06-19 Data processing method, device and machine readable medium Active CN110682912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810629040.0A CN110682912B (en) 2018-06-19 2018-06-19 Data processing method, device and machine readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810629040.0A CN110682912B (en) 2018-06-19 2018-06-19 Data processing method, device and machine readable medium

Publications (2)

Publication Number Publication Date
CN110682912A CN110682912A (en) 2020-01-14
CN110682912B true CN110682912B (en) 2023-03-31

Family

ID=69106230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810629040.0A Active CN110682912B (en) 2018-06-19 2018-06-19 Data processing method, device and machine readable medium

Country Status (1)

Country Link
CN (1) CN110682912B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113495620A (en) * 2020-04-03 2021-10-12 百度在线网络技术(北京)有限公司 Interactive mode switching method and device, electronic equipment and storage medium
CN111731311B (en) * 2020-05-29 2023-04-07 阿波罗智联(北京)科技有限公司 Vehicle-mounted machine running safety control method, device, equipment and storage medium
CN112109730A (en) * 2020-06-10 2020-12-22 上汽通用五菱汽车股份有限公司 Reminding method based on interactive data, vehicle and readable storage medium
CN112633854A (en) * 2020-12-31 2021-04-09 重庆电子工程职业学院 Student archive management system based on block chain

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10120438B2 (en) * 2011-05-25 2018-11-06 Sony Interactive Entertainment Inc. Eye gaze to alter device behavior
US10013053B2 (en) * 2012-01-04 2018-07-03 Tobii Ab System for gaze interaction
US9580081B2 (en) * 2014-01-24 2017-02-28 Tobii Ab Gaze driven interaction for a vehicle
CN106445461B (en) * 2016-10-25 2022-02-15 北京小米移动软件有限公司 Method and device for processing character information
CN107360320A (en) * 2017-06-30 2017-11-17 维沃移动通信有限公司 A kind of method for controlling mobile terminal and mobile terminal
CN107608514A (en) * 2017-09-20 2018-01-19 维沃移动通信有限公司 Information processing method and mobile terminal

Also Published As

Publication number Publication date
CN110682912A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110682912B (en) Data processing method, device and machine readable medium
US9897808B2 (en) Smart glass
US10416789B2 (en) Automatic selection of a wireless connectivity protocol for an input device
US10534534B2 (en) Method for controlling display, storage medium, and electronic device
JP6096276B2 (en) Selective backlight of display based on eye tracking
US20240137462A1 (en) Display apparatus and control methods thereof
US9952711B2 (en) Electronic device and method of processing screen area of electronic device
US10916057B2 (en) Method, apparatus and computer program for displaying an image of a real world object in a virtual reality enviroment
US8743021B1 (en) Display device detecting gaze location and method for controlling thereof
US9317113B1 (en) Gaze assisted object recognition
KR20160108388A (en) Eye gaze detection with multiple light sources and sensors
KR20130081117A (en) Mobile terminal and control method therof
KR102656528B1 (en) Electronic device, external electronic device and method for connecting between electronic device and external electronic device
US20130300759A1 (en) Method and apparatus for modifying the presentation of information based on the attentiveness level of a user
CN108958587B (en) Split screen processing method and device, storage medium and electronic equipment
US10474324B2 (en) Uninterruptable overlay on a display
CN111137208B (en) Method, device and system for controlling illuminating lamp of storage box applied to automobile
US10389947B2 (en) Omnidirectional camera display image changing system, omnidirectional camera display image changing method, and program
WO2022222688A1 (en) Window control method and device
WO2021259176A1 (en) Method, mobile device, head-mounted display, and system for estimating hand pose
CN110618750A (en) Data processing method, device and machine readable medium
US11995899B2 (en) Pointer-based content recognition using a head-mounted device
US20240134492A1 (en) Digital assistant interactions in extended reality
US11435857B1 (en) Content access and navigation using a head-mounted device
EP4377778A1 (en) Detecting notable occurrences associated with events

Legal Events

Code: Title / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
REG: Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40021390)
TA01: Transfer of patent application right (effective date of registration: 20201218; address after: Room 603, 6/F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China; applicant after: Zebra smart travel network (Hong Kong) Ltd.; address before: P.O. Box 847, Fourth Floor, Capital Building, Grand Cayman, Cayman Islands; applicant before: Alibaba Group Holding Ltd.)
GR01: Patent grant