CN110908513B - Data processing method and electronic equipment

Info

Publication number
CN110908513B
CN110908513B
Authority
CN
China
Prior art keywords
user, target operation, operation type, information, determining
Legal status
Active
Application number
CN201911127391.2A
Other languages
Chinese (zh)
Other versions
CN110908513A (en)
Inventor
孙宇
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911127391.2A
Publication of CN110908513A
Application granted
Publication of CN110908513B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces

Abstract

The embodiment of the invention provides a data processing method and an electronic device. Under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type.

Description

Data processing method and electronic equipment
Technical Field
The present invention relates to the field of electronic devices, and in particular, to a data processing method and an electronic device.
Background
At present, with the rapid development of mobile communication technology, electronic devices such as smartphones have become essential consumer electronics in people's daily life. As smartphones become increasingly widespread, their functions are continuously upgraded and optimized, and they have been integrated into many aspects of life.
In the prior art, a new interaction mode that lets a user complete corresponding operations without touching the electronic device can replace the traditional touch-screen and voice interaction modes. However, limited by the capability of the front camera, much of the information conveyed by the user may be missed, the electronic device cannot be accurately controlled to complete the corresponding operations, and the user's human-computer interaction experience is poor.
Therefore, when control of the electronic device is realized through the existing human-computer interaction modes, the interaction mode is limited, the personalized requirements of users cannot be met, and the user's human-computer interaction experience is poor.
Disclosure of Invention
The embodiment of the invention provides a data processing method and an electronic device, to solve the problem that the existing human-computer interaction modes, which mainly rely on contact touch input on a touch screen, offer only a single way of controlling the electronic device, cannot meet the personalized requirements of users, and give users a poor human-computer interaction experience.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a data processing method applied to an electronic device, where the electronic device includes a plurality of under-screen cameras. The method includes:
acquiring user image data acquired by the plurality of under-screen cameras;
determining user interaction information according to the user image data, wherein the user interaction information comprises: user fixation point information and/or user limb behavior information;
determining a target operation object and a target operation type according to the user interaction information;
and executing corresponding processing operation on the target operation object based on the target operation type.
In a second aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a plurality of under-screen cameras and further includes:
the image data acquisition module is used for acquiring user image data acquired by the plurality of under-screen cameras;
an interactive information determining module, configured to determine user interactive information according to the user image data, where the user interactive information includes: user fixation point information and/or user limb behavior information;
the target information determining module is used for determining a target operation object and a target operation type according to the user interaction information;
and the data operation control module is used for executing corresponding processing operation on the target operation object based on the target operation type.
In a third aspect, an embodiment of the present invention provides an electronic device, including: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the data processing method according to the first aspect.
In a fourth aspect, the embodiments of the present invention provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the data processing method according to the first aspect.
According to the data processing method and the electronic device in the embodiments of the invention, the under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type. In this way, the target operation object at which the user's gaze point is aimed can be accurately located in a contactless manner to realize the human-computer interaction operation, the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the user's human-computer interaction experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a second flowchart of a data processing method according to an embodiment of the present invention;
fig. 3a is a schematic diagram of a process of determining a target operation object in the data processing method according to the embodiment of the present invention;
fig. 3b is a schematic diagram illustrating an effect of adding an application to a folder in the data processing method according to the embodiment of the present invention;
fig. 4 is a schematic diagram illustrating an implementation principle that a target operation type is data update in the data processing method according to the embodiment of the present invention;
fig. 5 is a schematic diagram illustrating an implementation principle that a target operation type is a screenshot in the data processing method according to the embodiment of the present invention;
FIG. 6 is a schematic diagram of a third flowchart of a data processing method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a module composition of an electronic device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The embodiment of the invention provides a data processing method and an electronic device. Under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type. In this way, the target operation object at which the user's gaze point is aimed can be accurately located in a contactless manner to realize the human-computer interaction operation, the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the user's human-computer interaction experience is improved.
Fig. 1 is a first flowchart of a data processing method according to an embodiment of the present invention. The method in Fig. 1 can be executed by an electronic device, and in particular by a program module disposed in the electronic device, where the electronic device includes a plurality of under-screen cameras. As shown in Fig. 1, the method includes at least the following steps:
S101, acquiring user image data collected by the plurality of under-screen cameras, where the under-screen cameras may be cameras arranged under the display screen of the electronic device according to a preset distribution rule;
specifically, the off-screen camera collects user image data and transmits the user image data to a program module (i.e., a processor) in the electronic device. The trigger condition for acquiring the image data of the user by the off-screen camera can be preset, for example, the trigger condition can be that the electronic equipment is detected to be in a working state; for another example, considering that if the user does not have a need for performing a human-computer interaction operation based on the user gaze point information, and the under-screen camera also performs acquisition of user image data, the processing amount of the processor of the electronic device will be increased, and therefore, the trigger condition may be a condition for representing that the user has a need for performing a human-computer interaction operation based on the user gaze point information, such as detecting a touch operation of the user for a preset human-computer interaction trigger control.
S102, determining user interaction information according to the acquired user image data, wherein the user interaction information comprises: user fixation point information and/or user limb behavior information;
Specifically, because the user's gaze point is unique while the user image data collected by the multiple under-screen cameras at the same moment differ, the user's gaze point at a given moment can be determined by performing image recognition on the collected user image data, and the user gaze point information corresponding to that gaze point can be obtained. For each continuous human-computer interaction process based on the user gaze point information, if the gaze point changes over time, the gaze point at each time node is determined from multiple pieces of user image data within a certain time period, the movement trajectory of the gaze point within that period is then determined, and the user gaze point information is determined from the movement trajectory. The user gaze point information includes at least one of: gaze time information, gaze point position information, and the gaze point movement trajectory. The user gaze point information is determined as the user interaction information.
Specifically, if the user performs a limb action while the user image data is being collected, the user's limb behavior information can be determined by performing image recognition on the collected user image data, and the user limb behavior information is determined as the user interaction information.
In a specific implementation, the user gaze point information is determined as follows: light reflected from the user's face reaches the under-screen cameras; each under-screen camera focuses the reflected light, accumulates charge according to the intensity of the reflected light, and generates, through periodic discharge, an electric signal representing user image data; the under-screen cameras transmit the generated electric signals to the processor, and the processor determines, based on the received electric signals of the under-screen cameras, which under-screen camera the user is looking at directly. Since the user's gaze point is single, the electric signal of one of the multiple under-screen cameras indicates that the user is looking directly at that camera, and the user's gaze point information is determined from the position information of the under-screen camera the user is looking at.
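As a rough, non-authoritative sketch of this step, the following assumes that each camera's electric signal can be reduced to a single signal strength and that the strongest signal identifies the directly viewed camera; the data structures and the threshold are illustrative assumptions:

```python
from typing import Dict, Optional, Tuple

def locate_gaze_point(camera_signals: Dict[str, float],
                      camera_positions: Dict[str, Tuple[float, float]],
                      min_strength: float = 0.5) -> Optional[Tuple[float, float]]:
    """Take the under-screen camera with the strongest signal as the directly viewed
    camera and return its position as the user's gaze point; return None if no
    camera signal is strong enough (the threshold is an assumed value)."""
    if not camera_signals:
        return None
    cam_id, strength = max(camera_signals.items(), key=lambda item: item[1])
    if strength < min_strength:
        return None
    return camera_positions[cam_id]

signals = {"cam1": 0.92, "cam2": 0.31, "cam3": 0.18}
positions = {"cam1": (120, 80), "cam2": (360, 80), "cam3": (600, 80)}
print(locate_gaze_point(signals, positions))   # (120, 80)
```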
S103, determining a target operation object and a target operation type according to the determined user interaction information;
the target operation object may be an application icon on a display interface of the electronic device, may be web page content in an information browsing page under a certain application program, and may also be file data stored in the electronic device, where the web page content may include: at least one of text, video and picture, the target operation type may include: opening a certain application program, and performing at least one of updating operation, screenshot operation and amplifying operation on a certain target operation object;
Specifically, after the user gaze point information and/or the user limb behavior information are determined, the target operation object and the target operation type can be determined. The target operation object is determined according to the user gaze point information and/or the user limb behavior information; because the user gaze point information accurately represents the operation object the user is aiming at, the target operation object can be determined from the user gaze point information alone.
Correspondingly, the target operation type is determined according to the user gaze point information and/or the user limb behavior information. The target operation type may be determined only from the user limb behavior information, or only from the user gaze point information; to improve the accuracy of identifying the target operation type, it may also be determined from the user gaze point information and the user limb behavior information together.
S104, based on the determined target operation type, executing corresponding processing operation on the target operation object;
Specifically, after the target operation object and the target operation type are determined, the corresponding human-computer interaction operation can be completed. For example, if the target operation object is a game application and the target operation type is opening an application program, the operation of opening the game application is executed automatically. As another example, if the target operation object is an article under a certain official account and the target operation type is collecting the article, the article is automatically added to the favorites.
Considering that the human-computer interaction operation is mainly completed by identifying the user's gaze point, in order to improve the accuracy with which the electronic device identifies the user's interaction needs, the moving speed of the user's gaze point is also collected in advance, and a reference range for the moving speed of the gaze point when the user operates the electronic device is determined based on multiple collected samples of the gaze point moving speed. Correspondingly, acquiring the user image data collected by the plurality of under-screen cameras in S101 includes: acquiring the moving speed of the user's gaze point, and acquiring the user image data collected by the plurality of under-screen cameras if the moving speed of the gaze point falls within the reference range. In this way, the user image data is collected only when the user actually needs to select an operation object or trigger an operation based on gaze point information; that is, the electronic device's ability to interpret user behavior is improved through deep learning, which improves the accuracy of user image data collection and, in turn, the accuracy of the human-computer interaction operation.
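A minimal sketch of this speed gate, assuming gaze samples are available as screen coordinates and that the reference range has already been learned; the numeric values are invented for illustration:

```python
import math

def gaze_speed(p1, p2, dt):
    """Moving speed of the gaze point between two consecutive samples, in pixels/second."""
    return math.dist(p1, p2) / dt

def should_acquire(speed, reference_range=(20.0, 400.0)):
    """Collect user image data only if the gaze-point moving speed falls inside the
    reference range learned from previously collected samples (range values assumed)."""
    low, high = reference_range
    return low <= speed <= high

speed = gaze_speed((100, 200), (130, 240), dt=0.1)
print(speed, should_acquire(speed))   # 500.0 False -> too fast, likely not an operation intent
```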
In the embodiment of the invention, the under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type. In this way, the target operation object at which the user's gaze point is aimed can be accurately located in a contactless manner to realize the human-computer interaction operation, the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the user's human-computer interaction experience is improved.
Specifically, for the process of determining the target operation object and the target operation type, the user interaction information at least includes the user gaze point information. As shown in Fig. 2, determining the target operation object and the target operation type according to the determined user interaction information in step S103 specifically includes:
S1031, determining a target operation object according to the determined user fixation point information;
Specifically, because the user gaze point information accurately represents the operation object the user is aiming at, the target operation object can be determined from the user gaze point information. In a specific implementation, the operation object corresponding to the under-screen camera the user looks at directly is determined as the target operation object. For example, if the user gaze point information shows that the under-screen camera the user is looking at corresponds to a certain game application, that game application is determined as the target operation object.
S1032, determining a target operation type according to the determined user fixation point information and/or the user limb behavior information;
Specifically, the target operation type may be determined only from the user limb behavior information, or only from the user gaze point information; to improve the accuracy of identifying the target operation type, it may also be determined from the user gaze point information and the user limb behavior information together.
Further, for the process of determining the target operation object, if the user gaze point information includes a gaze point movement trajectory within a preset time period, that is, the user's gaze point changes while the user image data is being collected, the gaze point movement trajectory can be determined based on the collected user image data and a plurality of corresponding target operation objects can then be determined. In this case, determining the target operation object according to the determined user gaze point information in step S1031 specifically includes:
Step one, determining a camera identification set corresponding to the gaze point movement trajectory;
Each gaze point is determined from the user image data collected by the under-screen camera the user looks at directly, and the identification information of that camera is determined. Within a certain time period, as the user's gaze point moves continuously, a gaze point movement trajectory is formed and multiple under-screen cameras are looked at directly; the camera identification set is determined from the identification information of these under-screen cameras.
Step two, determining the operation object indicated by at least one under-screen camera contained in the determined camera identification set as the target operation object;
The at least one under-screen camera contained in the camera identification set is a camera the user's gaze point looks at directly. Considering that when the user gazes at a position for longer than a preset time threshold, the user needs to operate the target object displayed at that position, the position information of the at least one under-screen camera in the camera identification set is determined, and the object displayed at that position, i.e., the operation object currently displayed at the position of the directly viewed camera, is determined as the target operation object.
Further, for the determining process of the target operation type, the target operation type may be determined in multiple determining manners, and correspondingly, in step S1032, determining the target operation type according to the determined user gaze point information and/or the determined user limb behavior information includes:
(1) If the user interaction information includes the user gaze point information and the user limb behavior information, then, for the case where the target operation type is determined only from the user limb behavior information, the operation type corresponding to the user limb behavior information is determined as the target operation type;
Specifically, a first correspondence between user limb behavior information and operation types can be preset, so that during human-computer interaction the target operation type is determined from the first correspondence and the currently determined user limb behavior information. For example, a nodding action may be preset to correspond to selecting an application, and a head-shaking action may be preset to correspond to cancelling the current operation.
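Such a first correspondence can be sketched as a simple lookup table; the action names and operation names below are illustrative and not taken from the disclosure:

```python
# Hypothetical "first correspondence" between limb actions and operation types.
FIRST_CORRESPONDENCE = {
    "nod": "select_object",
    "shake_head": "cancel_current_operation",
    "blink": "add_selected_to_folder",
}

def operation_type_from_limb_action(action):
    """Determine the target operation type from a recognized limb action."""
    return FIRST_CORRESPONDENCE.get(action)

print(operation_type_from_limb_action("nod"))          # select_object
print(operation_type_from_limb_action("shake_head"))   # cancel_current_operation
```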
In a specific embodiment, as shown in Fig. 3a, if the determined user gaze point information shows that the user is gazing at camera 1 at the current moment, application 1 displayed at the position of camera 1 is determined as the target operation object; and if the determined user limb behavior information shows a "nodding" action while the user gazes at camera 1, the target operation type is determined as selecting application 1. Correspondingly, the selection operation is performed on application 1. In addition, to remind the user that the selection operation has been performed on the application, the icon of application 1 may be triggered to start shaking.
Next, if it is detected that the user gazes at cameras 2, 3, 4, 5, and 6 in sequence and performs a "nodding" action while gazing at each of them, applications 2, 3, 4, 5, and 6, displayed at the positions of cameras 2, 3, 4, 5, and 6 respectively, are determined as target operation objects and the target operation type is determined as selecting applications 2, 3, 4, 5, and 6. Correspondingly, applications 2, 3, 4, 5, and 6 are selected in sequence, and their icons are triggered to start shaking.
Next, if a "blink" action by the user is detected, the target operation type is determined as adding the selected objects to a folder. Correspondingly, the selected applications 2, 3, 4, 5, and 6 are added to the target folder, and as shown in Fig. 3b, their icons are moved into the target folder.
(2) If the user interaction information includes the user gaze point information, then, for the case where the target operation type is determined only from the user gaze point information, if the under-screen camera corresponding to at least one gaze point in the determined user gaze point information is a preset under-screen camera, the operation type corresponding to that preset under-screen camera is determined as the target operation type;
Here, the preset under-screen camera is an under-screen camera whose operation type is marked in advance. Specifically, a second correspondence between the identification information of under-screen cameras and operation types can be preset, so that during human-computer interaction the target operation type is determined from the second correspondence and the under-screen camera the user is currently gazing at. For example, the operation type corresponding to camera x is marked in advance as an update operation, and the operation type corresponding to camera y is marked in advance as an uninstall operation;
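A minimal sketch of such a second correspondence, with camera identifiers and operation names chosen purely for illustration:

```python
# Hypothetical "second correspondence": identifiers of pre-marked under-screen cameras
# mapped to operation types.
SECOND_CORRESPONDENCE = {
    "cam_x": "update",
    "cam_y": "uninstall",
}

def operation_type_from_camera(gazed_camera_id):
    """If the gazed camera is a preset (pre-marked) camera, its marked operation type
    becomes the target operation type; otherwise None."""
    return SECOND_CORRESPONDENCE.get(gazed_camera_id)

print(operation_type_from_camera("cam_x"))   # update
print(operation_type_from_camera("cam_1"))   # None -> not a pre-marked camera
```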
In a specific embodiment, as shown in Fig. 4, if the determined user gaze point information shows that the user is gazing at camera 2 at the current moment, application 2 displayed at the position of camera 2 is determined as the target operation object; and if, with application 2 as the determined target operation object, the user's gaze point moves to camera x along the direction indicated by the arrow trajectory in the figure and the operation type corresponding to camera x is marked as an update operation, updating the application is determined as the target operation type. Correspondingly, an update operation on application 2 is triggered.
(3) If the user interaction information includes the user gaze point information, then, for the case where the target operation type is determined from the user gaze point information, if the gaze starting point information and the gaze end point information in the user gaze point information are the same, a first preset operation type is determined as the target operation type;
Specifically, a third correspondence between the positional relationship of the gaze starting point and the gaze end point and operation types may be preset, so that during human-computer interaction the target operation type is determined from the third correspondence and the currently determined positional relationship of the gaze starting point and the gaze end point. For example, the gaze starting point coinciding with the gaze end point may be preset to correspond to a screenshot operation, or the gaze starting point and the gaze end point lying on a diagonal may be preset to correspond to exiting an application.
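A rough sketch of this case, recognizing a closed gaze trajectory (gaze starting point equal to gaze end point) and taking the first preset operation type to be a screenshot of the region spanned by the traversed cameras, as in the example that follows; approximating the enclosed area by a bounding rectangle is an assumption made for simplicity:

```python
def closed_trajectory(camera_sequence):
    """A trajectory is 'closed' when the gaze starting point equals the gaze end point."""
    return len(camera_sequence) > 1 and camera_sequence[0] == camera_sequence[-1]

def screenshot_region(camera_sequence, camera_positions):
    """Bounding rectangle of the under-screen cameras traversed by the gaze point
    (approximating the 'area enclosed by the cameras' with its bounding box)."""
    xs = [camera_positions[c][0] for c in camera_sequence]
    ys = [camera_positions[c][1] for c in camera_sequence]
    return min(xs), min(ys), max(xs), max(ys)

positions = {f"cam{i}": ((i - 1) % 3 * 100, (i - 1) // 3 * 100) for i in range(1, 7)}
path = ["cam1", "cam2", "cam3", "cam4", "cam5", "cam6", "cam1"]
if closed_trajectory(path):
    print(screenshot_region(path, positions))   # (0, 0, 200, 100)
```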
In a specific embodiment, as shown in Fig. 5, if the determined user gaze point information shows that the user gazes at camera 1 → 2 → 3 → 4 → 5 → 6 → 1 in sequence, that is, the gaze starting point and the gaze end point are the same, the screenshot operation is determined as the target operation type. Correspondingly, the screenshot operation is performed on the area enclosed by cameras 1 → 2 → 3 → 4 → 5 → 6 → 1.
(4) If the user interaction information includes the user gaze point information, then, for the case where the target operation type is determined only from the user gaze point information, if the gaze time in the user gaze point information exceeds a preset gaze duration, a second preset operation is determined as the target operation type;
Specifically, a fourth correspondence between gaze durations and operation types can be preset, so that during human-computer interaction the target operation type is determined from the fourth correspondence and the currently determined gaze duration. For example, a gaze duration greater than a first gaze duration (such as 2 seconds) may correspond to a click operation. As another example, different gaze duration ranges may correspond to different operation types: a gaze duration greater than the first gaze duration and less than a second gaze duration corresponds to a collection operation, and a gaze duration greater than the second gaze duration and less than a third gaze duration corresponds to a sharing operation.
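The range-based variant of this fourth correspondence can be sketched as follows; the thresholds and the operations assigned to each range are assumed values for illustration only:

```python
def operation_type_from_gaze_time(gaze_seconds, first=2.0, second=4.0, third=6.0):
    """Range-based variant of the fourth correspondence (threshold values assumed)."""
    if first < gaze_seconds < second:
        return "collect"   # e.g. bookmark the gazed content
    if second <= gaze_seconds < third:
        return "share"
    return None

print(operation_type_from_gaze_time(3.0))   # collect
print(operation_type_from_gaze_time(5.0))   # share
print(operation_type_from_gaze_time(1.0))   # None -> below the first gaze duration
```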
In a specific embodiment, if the determined user gaze point information shows that the user is gazing at camera 1 at the current moment, the article link displayed at the position of camera 1 is determined as the target operation object; and if the dwell time at camera 1 is longer than the second gaze duration, the sharing operation is determined as the target operation type. Correspondingly, sharing of the article link is triggered.
Specifically, the under-screen camera the user is currently gazing at may itself be a camera whose operation type is marked in advance while the gaze duration also exceeds the preset gaze duration. In this case, the target operation type may be determined according to a preset reference priority; for example, the priority of a camera with a pre-marked operation type may be set higher than the priority of the gaze duration.
Correspondingly, if the gaze duration exceeds the preset gaze duration and the under-screen camera corresponding to the gaze point is a preset under-screen camera, the operation type corresponding to that preset under-screen camera is determined as the target operation type.
In a specific embodiment, while the electronic device is playing a video, if the determined user gaze point information shows that the user's gaze point moves to camera z during viewing and the gaze at camera z lasts longer than the preset gaze duration, and the operation type corresponding to camera z is marked as a closing operation, closing the application is determined as the target operation type. Correspondingly, the application currently playing the video is closed.
(5) If the user interaction information includes the user gaze point information and the user limb behavior information: in order to improve the accuracy of human-computer interaction recognition and avoid misoperation caused by treating a limb action that the user directs at the outside world as an analysis object, the corresponding human-computer interaction operation is executed only when the gaze duration is detected to meet a preset condition and the user completes the corresponding limb action within that gaze duration. Based on this, the target operation type is determined from the user limb behavior information and the user gaze point information together: if the gaze duration in the user gaze point information is greater than a preset time threshold, the target operation type is determined from the operation type corresponding to the user limb behavior information;
For example, a gaze duration greater than the first gaze duration (such as 2 seconds) combined with a detected nodding action of the user may be preset to correspond to the selection operation;
In a specific embodiment, again taking the process of adding selected objects to a folder shown in Fig. 3a as an example, after the selection operations are performed in sequence on applications 2, 3, 4, 5, and 6, if it is detected that the dwell time during which the user gazes at a selected application is longer than the preset gaze duration and a "blink" action of the user is detected within that dwell time, the target operation type is determined as adding the selected objects to a folder. Correspondingly, the selected applications 2, 3, 4, 5, and 6 are added to the target folder, and as shown in Fig. 3b, their icons are moved into the target folder.
If the "blink" action is detected while the user is not gazing at a selected application, the blink may be directed at people around the user rather than being a limb action intended to trigger human-computer interaction, so the blink is not taken into account.
Further, the user limb behavior information may include information characterizing a plurality of limb actions within a preset time period. For each continuous human-computer interaction process, a plurality of target operation types are determined based on the plurality of limb actions detected during that process, and the corresponding processing operations are performed on the target operation object in sequence based on these target operation types. Correspondingly, determining the target operation type from the operation type corresponding to the user limb behavior information specifically includes:
determining a plurality of operation types corresponding to the plurality of detected limb actions, where the preset time period for collecting the limb actions is the duration of a complete human-computer interaction process consisting of a plurality of human-computer interaction sub-operations;
Specifically, a first correspondence between limb actions and operation types can be preset, so that during human-computer interaction the corresponding operation types are determined from the first correspondence and the currently detected limb actions. For example, a nodding action may be preset to correspond to selecting an application, a head-shaking action to cancelling the current operation, and an eye-widening action to the zoom-in operation.
Determining a target operation type according to the determined multiple operation types; the execution sequence of the target operation type is determined according to the sequence of the multiple limb actions;
Specifically, considering that within the duration of a complete human-computer interaction process the user may need to cancel the current data processing operation, if a limb action representing cancellation of the currently executed processing operation is detected, the current data processing operation is cancelled. The completion state of the previous data processing operation can then be restored automatically so that the user can continue the human-computer interaction; alternatively, the human-computer interaction can be ended automatically until the trigger condition for the user to enter human-computer interaction is detected again.
For example, if the gaze starting point and the gaze end point are the same, a screenshot operation is performed to obtain a screenshot image, and if an "eye-widening" action is detected within a preset time interval, a zoom-in operation is performed on the screenshot image; during the zoom-in of the screenshot image, if a limb action for cancelling the currently executed processing operation is detected, the current zoom-in operation on the screenshot image is cancelled.
If no limb action representing cancellation of the currently executed processing operation is detected, the corresponding processing operations are performed on the target operation object in sequence, based on the plurality of target operation types, in the order of the plurality of limb actions.
That is, the target operation type representing cancellation of the currently executed processing operation has the highest execution priority: if an input cancelling the current operation is detected, the processing operation currently being performed on the target operation object is terminated.
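A rough sketch of executing the operation types derived from a sequence of limb actions in order while giving the cancellation type the highest priority; the correspondence table and the print-based "execution" are placeholders, not the disclosed implementation:

```python
def execute_operations(actions, target, correspondence):
    """Execute the operation types derived from a sequence of detected limb actions, in
    detection order. An action mapped to 'cancel_current_operation' has the highest
    priority: it terminates processing of the target object immediately."""
    performed = []
    for action in actions:
        op = correspondence.get(action)
        if op is None:
            continue
        if op == "cancel_current_operation":
            print(f"cancel detected: stop processing {target}")
            return performed
        print(f"perform {op} on {target}")
        performed.append(op)
    return performed

corr = {"nod": "select_object", "widen_eyes": "zoom_in",
        "shake_head": "cancel_current_operation"}
execute_operations(["nod", "widen_eyes"], "screenshot image", corr)
execute_operations(["nod", "shake_head", "widen_eyes"], "screenshot image", corr)
```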
Specifically, considering that during a complete human-computer interaction process the user may need to terminate it, for example, while delineating a screenshot range the user may need to abandon the currently delineated range and delineate it again because the range is wrong, the user inputs a limb action representing cancellation of the currently executed processing operation. Based on this, as shown in Fig. 6, performing the corresponding processing operation on the target operation object based on the determined target operation type in S104 specifically includes:
S1041, during the execution of any target operation type, determining whether the operation type corresponding to the currently detected limb action represents cancelling the execution of that target operation type;
if so, executing S1042, terminating the execution process of the target operation type, and executing corresponding processing operation on the target operation object based on the operation type corresponding to the currently detected limb action;
if not, S1043 is executed, and corresponding processing operations are executed on the target operation object in sequence based on the target operation type according to the detected sequence of the multiple limb actions.
For example, again taking Fig. 5 as an example, during the delineation of the screenshot area, if a "head-shaking" action of the user is detected, the target operation type is determined as cancelling the current processing operation. Since the target operation type representing cancellation of the currently executed processing operation has the highest execution priority, the delineation of the screenshot area is terminated and the delineated screenshot area is discarded.
To further improve the diversity of human-computer interaction, user voice information may also be introduced into the gaze-point-based interaction process; that is, the user is guided, through human-computer question and answer, to input information for determining the target operation object and the target operation type. Based on this, after determining the user interaction information according to the acquired user image data in S102, the method further includes:
playing question voice information and acquiring answer voice information input by a user aiming at the question voice information;
correspondingly, in step S103, determining a target operation object and a target operation type according to the determined user interaction information, specifically including:
and determining a target operation object and a target operation type according to the determined user interaction information and the obtained answer voice information.
For example, if it is detected that the user gazes at camera 1, at whose position application 1 is displayed, the question voice message "Do you need application 1?" is played. If the answer voice message received from the user is "yes", the target operation object is determined to be application 1 and the target operation type is determined to be opening the application; correspondingly, the processing operation of opening application 1 is performed. In this way, the user can be guided to provide voice input, which further improves the accuracy of human-computer interaction.
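A minimal sketch of this question-and-answer confirmation, with the text-to-speech and speech-recognition facilities replaced by placeholder callables (they are not APIs from the disclosure):

```python
def confirm_by_voice(candidate_object, play_question, listen_answer):
    """Ask the user to confirm the gazed object before acting on it. play_question and
    listen_answer stand in for the device's text-to-speech and speech-recognition
    facilities; they are placeholders for illustration only."""
    play_question(f"Do you need {candidate_object}?")
    answer = listen_answer()
    if answer.strip().lower() in ("yes", "y"):
        return candidate_object, "open_application"
    return None, None

# Simple stand-ins for demonstration:
obj, op = confirm_by_voice("application 1", play_question=print, listen_answer=lambda: "yes")
print(obj, op)   # application 1 open_application
```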
According to the data processing method in the embodiment of the invention, the under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type. In this way, the target operation object at which the user's gaze point is aimed can be accurately located in a contactless manner to realize the human-computer interaction operation, the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the user's human-computer interaction experience is improved.
Corresponding to the data processing method provided in the foregoing embodiments, and based on the same technical concept, an embodiment of the present invention further provides an electronic device that includes a plurality of under-screen cameras. Fig. 7 is a schematic diagram of the module composition of the electronic device provided in the embodiment of the present invention; the electronic device is configured to execute the data processing method described in Fig. 1 to Fig. 6. As shown in Fig. 7, the electronic device includes:
an image data obtaining module 701, configured to obtain user image data collected by the plurality of under-screen cameras;
an interaction information determining module 702, configured to determine user interaction information according to the user image data, where the user interaction information includes: user fixation point information and/or user limb behavior information;
a target information determining module 703, configured to determine a target operation object and a target operation type according to the user interaction information;
and a data operation control module 704, configured to execute a corresponding processing operation on the target operation object based on the target operation type.
According to the data processing method and the electronic device in the embodiments of the invention, the under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type. In this way, the target operation object at which the user's gaze point is aimed can be accurately located in a contactless manner to realize the human-computer interaction operation, the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the user's human-computer interaction experience is improved.
Optionally, the target information determining module 703 is specifically configured to:
determining a target operation object according to the user fixation point information;
and determining the target operation type according to the user fixation point information and/or the user limb behavior information.
Optionally, if the user gaze point information includes a movement trajectory of the gaze point within a preset time period, the target information determining module 703 is further specifically configured to:
determining a camera identification set corresponding to the moving track of the fixation point;
and determining an operation object indicated by at least one under-screen camera contained in the camera identification set as a target operation object.
Optionally, the target information determining module 703 is further specifically configured to perform at least one of the following determining processes:
determining a target operation type according to the operation type corresponding to the user limb behavior information;
if the under-screen camera corresponding to at least one gaze point in the user gaze point information is a preset under-screen camera, determining the operation type corresponding to the preset under-screen camera as the target operation type, where the preset under-screen camera is an under-screen camera whose operation type is marked in advance;
if the fixation starting point information and the fixation end point information in the user fixation point information are the same, determining a first preset operation type as a target operation type;
if the fixation time in the user fixation point information is determined to be larger than a preset time threshold, determining a target operation type by combining the user interaction information;
and if the fixation time in the user fixation point information is determined to be larger than a preset time threshold, determining a target operation type according to the operation type corresponding to the user limb behavior information.
Optionally, if the user limb behavior information includes information characterizing a plurality of limb actions within a preset time period, the target information determining module 703 is further specifically configured to:
determining a plurality of operation types corresponding to the plurality of limb actions;
and determining a target operation type according to the plurality of operation types, wherein the execution sequence of the target operation type is determined according to the sequence of the plurality of limb actions.
Optionally, the data operation control module 704 is specifically configured to:
in the execution process of any target operation type, determining whether the currently detected operation type corresponding to the limb action is used for representing the execution process of canceling the target operation type;
if so, terminating the execution process of the target operation type, and executing corresponding processing operation on the target operation object based on the operation type corresponding to the currently detected limb action.
Optionally, the electronic device further includes: a voice information processing module;
the voice information processing module is used for playing question voice information and acquiring answer voice information input by a user aiming at the question voice information;
correspondingly, the target information determining module 703 is further specifically configured to:
and determining a target operation object and a target operation type according to the user interaction information and the answer voice information.
According to the electronic device in the embodiment of the invention, the under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type. In this way, the target operation object at which the user's gaze point is aimed can be accurately located in a contactless manner to realize the human-computer interaction operation, the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the user's human-computer interaction experience is improved.
The electronic device provided by the embodiment of the present invention can implement each process in the embodiment corresponding to the data processing method, and is not described herein again to avoid repetition.
It should be noted that the electronic device provided in the embodiment of the present invention and the data processing method provided in the embodiment of the present invention are based on the same inventive concept, and therefore, for specific implementation of the embodiment, reference may be made to implementation of the data processing method, and repeated details are not described again.
Based on the same technical concept, an embodiment of the present invention further provides an electronic device configured to execute the data processing method described above. Fig. 8 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention. The electronic device 100 shown in Fig. 8 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the structure shown in Fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently. In the embodiments of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The electronic device 100 further includes a plurality of under-screen cameras, where the under-screen cameras are configured to collect user image data;
wherein, the processor 110 is configured to:
acquiring user image data acquired by the plurality of under-screen cameras;
determining user interaction information according to the user image data, wherein the user interaction information comprises: user fixation point information and/or user limb behavior information;
determining a target operation object and a target operation type according to the user interaction information;
and executing corresponding processing operation on the target operation object based on the target operation type.
In the embodiment of the invention, the under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information are determined based on the collected user image data; the user's target operation object and target operation type are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is performed on the target operation object based on the target operation type. In this way, the target operation object at which the user's gaze point is aimed can be accurately located in a contactless manner to realize the human-computer interaction operation, the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the user's human-computer interaction experience is improved.
Wherein, the processor 110 is further configured to:
determining a target operation object according to the user fixation point information;
and determining the target operation type according to the user fixation point information and/or the user limb behavior information.
Wherein, the processor 110 is further configured to:
if the user gaze point information comprises: a movement trajectory of a point of regard within a preset time period;
the determining a target operation object according to the user gaze point information includes:
determining a camera identification set corresponding to the movement track of the fixation point;
and determining an operation object indicated by at least one under-screen camera contained in the camera identification set as a target operation object.
Wherein, the processor 110 is further configured to:
determining a target operation type according to the user fixation point information and/or the user limb behavior information, wherein the determination comprises at least one of the following determination processes:
determining a target operation type according to the operation type corresponding to the user limb behavior information;
if the under-screen camera corresponding to at least one gaze point in the user gaze point information is a preset under-screen camera, determining the operation type corresponding to the preset under-screen camera as the target operation type, where the preset under-screen camera is an under-screen camera whose operation type is marked in advance;
if the fixation starting point information and the fixation end point information in the user fixation point information are the same, determining a first preset operation type as a target operation type;
if the fixation time in the user fixation point information is determined to be larger than a preset time threshold, determining a second preset operation as a target operation type;
and if the fixation time in the user fixation point information is determined to be larger than a preset time threshold, determining a target operation type according to the operation type corresponding to the user limb behavior information.
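The following sketch strings the determination processes above together in one possible order; which processes are used, and in what order, is an implementation choice, and the dictionary-based representation of the gaze information is assumed purely for illustration.

```python
def determine_target_type(gaze_info, limb_action, action_to_type,
                          preset_camera_types, first_preset_type,
                          second_preset_type, time_threshold):
    """Illustrative ordering of the determination processes; gaze_info is assumed
    to be a dict with optional keys 'camera_ids', 'start', 'end', 'duration'."""
    # (a) operation type corresponding to the user limb behavior information
    if limb_action is not None and limb_action in action_to_type:
        return action_to_type[limb_action]

    # (b) a gaze point falls on a preset under-screen camera marked with a type
    for camera_id in gaze_info.get("camera_ids", []):
        if camera_id in preset_camera_types:
            return preset_camera_types[camera_id]

    # (c) gaze start point and gaze end point are the same
    start, end = gaze_info.get("start"), gaze_info.get("end")
    if start is not None and start == end:
        return first_preset_type

    # (d) gaze time exceeds the preset time threshold
    if gaze_info.get("duration", 0) > time_threshold:
        return second_preset_type

    return None
```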
The processor 110 is further configured such that:
the user limb behavior information includes information characterizing a plurality of limb actions within a preset time period;
and determining the target operation type according to the operation type corresponding to the user limb behavior information includes:
determining a plurality of operation types corresponding to the plurality of limb actions; and
determining the target operation type according to the plurality of operation types, wherein the execution order of the target operation type is determined according to the order of the plurality of limb actions (see the sketch below).
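A minimal sketch of deriving an ordered sequence of operation types from the detected limb actions, assuming a simple action-to-type lookup table; the table itself is an assumption for illustration.

```python
def determine_ordered_operation_types(limb_actions, action_to_type):
    """The target operation type here is a sequence of operation types whose
    execution order follows the order of the detected limb actions."""
    ordered_types = []
    for action in limb_actions:                 # limb actions within the preset period
        operation_type = action_to_type.get(action)
        if operation_type is not None:
            ordered_types.append(operation_type)
    return ordered_types
```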
The processor 110 is further configured such that executing the corresponding processing operation on the target operation object based on the target operation type includes:
during the execution of any target operation type, determining whether the operation type corresponding to the currently detected limb action indicates cancellation of the execution of that target operation type; and
if so, terminating the execution of that target operation type and executing a corresponding processing operation on the target operation object based on the operation type corresponding to the currently detected limb action (a minimal sketch follows).
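A minimal sketch of the cancellation check, assuming a callable that reports the currently detected limb action and a predicate that decides whether an operation type represents cancellation; both are illustrative placeholders.

```python
def execute_with_cancellation(ordered_types, target_object, detect_current_action,
                              action_to_type, indicates_cancel, executor):
    """Execute the ordered operation types, aborting when a newly detected limb
    action maps to an operation type that represents cancellation."""
    for operation_type in ordered_types:
        current_action = detect_current_action()          # currently detected limb action
        current_type = action_to_type.get(current_action)
        if current_type is not None and indicates_cancel(current_type):
            # terminate the pending target operation type and instead act on the
            # operation type corresponding to the currently detected limb action
            executor.execute(current_type, target_object)
            return
        executor.execute(operation_type, target_object)
```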
The processor 110 is further configured to:
after determining the user interaction information according to the user image data, play question voice information and acquire answer voice information input by the user in response to the question voice information;
correspondingly, determining the target operation object and the target operation type according to the user interaction information includes:
determining the target operation object and the target operation type according to the user interaction information and the answer voice information, as sketched below.
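A minimal sketch of combining the interaction information with a voice exchange; the text-to-speech engine, speech recognizer, and combined resolution logic are placeholders, and the question wording is only an example.

```python
def refine_with_voice(interaction_info, tts, speech_recognizer, resolve):
    """Refine the target operation object and type with a short voice exchange."""
    tts.play("Which item should be operated on, and how?")   # question voice information
    answer = speech_recognizer.listen()                      # answer voice information
    # determine the target operation object and type from both sources
    return resolve(interaction_info, answer)
```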
With the electronic device 100 of the embodiment of the invention, the under-screen cameras are fully utilized to collect user image data; user gaze point information and/or user limb behavior information is determined based on the collected user image data; the target operation object and the target operation type of the user are identified based on the user gaze point information and/or the user limb behavior information; and the corresponding data processing operation is executed on the target operation object based on the target operation type. A contact-free interaction mode can thus accurately locate the target operation object at which the user's gaze is directed, so that the user does not need to perform contact touch input on the electronic device, the user's operation steps are simplified, and the human-computer interaction experience is improved.
It should be noted that the electronic device 100 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the foregoing data processing method embodiment; to avoid repetition, details are not described here again.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used to receive and send signals during message transmission or a call; specifically, downlink data received from a base station is forwarded to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 8, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
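For illustration, the touch event flow described above can be sketched as follows; the object interfaces are hypothetical and not tied to any particular touch-controller hardware or operating-system API.

```python
def handle_touch(raw_signal, touch_controller, processor, display_panel):
    """Sketch of the flow: touch detection -> touch controller -> processor -> display.

    raw_signal:       signal produced by the touch detection device
    touch_controller: converts the raw signal into touch point coordinates
    processor:        determines the type of the touch event and the response
    display_panel:    renders the visual output chosen by the processor
    """
    coordinates = touch_controller.to_coordinates(raw_signal)   # touch point coordinates
    event_type = processor.classify_touch(coordinates)          # type of the touch event
    visual_output = processor.visual_output_for(event_type)     # corresponding visual output
    display_panel.render(visual_output)
```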
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device, and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power source 111 (such as a battery) for supplying power to each component, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the data processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
Further, corresponding to the data processing method provided in the foregoing embodiment, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by the processor 110, the steps of the foregoing data processing method embodiment are implemented, and the same technical effects can be achieved, and are not described herein again to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the invention as defined in the appended claims. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (6)

1. A data processing method applied to an electronic device, characterized in that the electronic device comprises a plurality of under-screen cameras, and the method comprises the following steps:
acquiring user image data collected by the plurality of under-screen cameras;
determining user interaction information according to the user image data, wherein the user interaction information comprises: user gaze point information and/or user limb behavior information;
determining a target operation object and a target operation type according to the user interaction information;
executing corresponding processing operation on the target operation object based on the target operation type;
determining a target operation object and a target operation type according to the user interaction information comprises the following steps:
if the user gaze point information comprises a movement trajectory of the gaze point within a preset time period, determining a camera identification set corresponding to the movement trajectory of the gaze point;
determining an operation object indicated by at least one under-screen camera contained in the camera identification set as the target operation object;
and determining the target operation type according to the user gaze point information and/or the user limb behavior information.
2. The method according to claim 1, wherein the determining a target operation type according to the user gaze point information and/or user limb behavior information comprises at least one of the following determination processes:
determining a target operation type according to the operation type corresponding to the user limb behavior information;
if the under-screen camera corresponding to at least one gaze point in the user gaze point information is a preset under-screen camera, determining an operation type corresponding to the preset under-screen camera as a target operation type, wherein the preset under-screen camera is an under-screen camera whose operation type has been marked in advance;
if the gaze start point information and the gaze end point information in the user gaze point information are the same, determining a first preset operation type as a target operation type;
if the gaze time in the user gaze point information is determined to be greater than a preset time threshold, determining a second preset operation type as a target operation type;
and if the gaze time in the user gaze point information is determined to be greater than the preset time threshold, determining a target operation type according to the operation type corresponding to the user limb behavior information.
3. The method of claim 2, wherein the user limb behavior information comprises: information for characterizing a plurality of limb actions within a preset time period;
and the determining a target operation type according to the operation type corresponding to the user limb behavior information comprises the following steps:
determining a plurality of operation types corresponding to the plurality of limb actions;
and determining a target operation type according to the plurality of operation types, wherein the execution sequence of the target operation type is determined according to the sequence of the plurality of limb actions.
4. The method of claim 3, wherein the performing the corresponding processing operation on the target operation object based on the target operation type comprises:
in the execution process of any target operation type, determining whether the operation type corresponding to the currently detected limb action indicates cancellation of the execution of that target operation type;
if so, terminating the execution process of the target operation type, and executing corresponding processing operation on the target operation object based on the operation type corresponding to the currently detected limb action.
5. An electronic device, wherein the electronic device comprises a plurality of under-screen cameras, the electronic device further comprising:
the image data acquisition module is used for acquiring user image data collected by the plurality of under-screen cameras;
an interaction information determining module, configured to determine user interaction information according to the user image data, wherein the user interaction information comprises: user gaze point information and/or user limb behavior information;
the target information determining module is used for determining a target operation object and a target operation type according to the user interaction information;
the data operation control module is used for executing corresponding processing operation on the target operation object based on the target operation type;
the target information determination module is specifically configured to:
if the user gaze point information comprises a movement trajectory of the gaze point within a preset time period, determining a camera identification set corresponding to the movement trajectory of the gaze point;
determining an operation object indicated by at least one under-screen camera contained in the camera identification set as the target operation object;
and determining the target operation type according to the user gaze point information and/or the user limb behavior information.
6. The electronic device of claim 5, wherein the target information determining module is further specifically configured to perform at least one of the following determination procedures:
determining a target operation type according to the operation type corresponding to the user limb behavior information;
if the under-screen camera corresponding to at least one gaze point in the user gaze point information is a preset under-screen camera, determining an operation type corresponding to the preset under-screen camera as a target operation type, wherein the preset under-screen camera is an under-screen camera whose operation type has been marked in advance;
if the gaze start point information and the gaze end point information in the user gaze point information are the same, determining a first preset operation type as a target operation type;
if the gaze time in the user gaze point information is determined to be greater than a preset time threshold, determining a target operation type in combination with the user interaction information;
and if the gaze time in the user gaze point information is determined to be greater than the preset time threshold, determining a target operation type according to the operation type corresponding to the user limb behavior information.