CN106815264B - Information processing method and system - Google Patents


Info

Publication number
CN106815264B
Authority
CN
China
Prior art keywords
information
mobile terminal
user
feature data
unit
Prior art date
Legal status
Active
Application number
CN201510869366.7A
Other languages
Chinese (zh)
Other versions
CN106815264A (en)
Inventor
胡久林
Current Assignee
China Mobile Communications Group Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd
Priority to CN201510869366.7A
Publication of CN106815264A
Application granted
Publication of CN106815264B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The embodiment of the invention discloses an information processing method and system. The method comprises the following steps: obtaining operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane; analyzing the image data, identifying facial feature data in the image data, and obtaining facial expression information based on the facial feature data; first feedback information is generated based on the facial expression information and the operation information.

Description

Information processing method and system
Technical Field
The present invention relates to information processing technologies, and in particular, to an information processing method and system.
Background
With the rapid development of internet technology, a large number of mobile terminal Applications (APPs) have emerged. These applications emphasize user experience, that is, whether the interaction experience and the functions of an application actually meet users' needs, and user feedback and evaluation of an application are an important basis on which data operation staff, product managers, designers and software engineers optimize the program.
In the prior art, feedback on applications is mainly collected in the following ways: 1. the user actively scores or comments through the application store; 2. the user actively comments or scores through a website or a forum; 3. specified parameter information is collected. Modes 1 and 2 account for the largest proportion. While the user is using the application, the application prompts the user from time to time to go to the application store to score or comment; this way of collecting feedback is neither convenient nor flexible, so the user experience is poor. The parameter information specified in mode 3 cannot be intuitively mapped to the user's specific feedback experience and cannot be tied to the user's specific use process.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present invention provide an information processing method and system, which can directly obtain a use feedback situation of a user.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
the embodiment of the invention provides an information processing method, which comprises the following steps:
obtaining operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane;
identifying facial feature data in the image data, and obtaining facial expression information based on the facial feature data;
first feedback information is generated based on the facial expression information and the operation information.
In the foregoing solution, the generating first feedback information based on the facial expression information and the operation information includes:
obtaining corresponding first user experience parameters based on the facial expression information and/or the operation information, and generating first feedback information of an operation position corresponding to the operation information based on the first user experience parameters.
In the above solution, after analyzing the image data and identifying facial feature data in the image data, the method further includes:
obtaining relative position information and direction information of the mobile terminal;
associating eye feature data in the facial feature data with the relative position information and the direction information to obtain point-of-interest information; the point-of-interest information characterizes the focus position of the eyes on the mobile terminal.
In the above scheme, the method further comprises: generating second feedback information based on the point-of-interest information and the operation information.
In the foregoing solution, the generating second feedback information based on the point of interest information and the operation information includes:
obtaining a corresponding second user experience parameter based on the point-of-interest information in a preset time period and the operation information in the same preset time period, and generating second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
In the foregoing solution, the generating first feedback information based on the facial expression information and the operation information includes:
generating first feedback information based on the facial expression information, the point of interest information, and the operation information.
In the foregoing solution, the generating first feedback information based on the facial expression information, the point of interest information, and the operation information includes:
obtaining a corresponding third user experience parameter based on the facial expression information and the point-of-interest information in a preset time period, in combination with the operation information in the same preset time period, and generating first feedback information of an operation position corresponding to the operation information based on the third user experience parameter.
An embodiment of the present invention further provides an information processing system, where the system includes: an acquisition unit, an image processing unit and an information generation unit; wherein:
the acquisition unit is used for acquiring operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane;
the image processing unit is used for identifying facial feature data in the image data and obtaining facial expression information based on the facial feature data;
the information generating unit is used for generating first feedback information based on the facial expression information obtained by the image processing unit and the operation information obtained by the obtaining unit.
In the foregoing solution, the information generating unit is configured to obtain a corresponding first user experience parameter based on the facial expression information and/or the operation information, and generate first feedback information of an operation position corresponding to the operation information based on the first user experience parameter.
In the above scheme, the obtaining unit is further configured to obtain relative position information and direction information of the mobile terminal itself;
the image processing unit is further configured to associate eye feature data in the facial feature data with the relative position information and the direction information of the mobile terminal obtained by the obtaining unit, so as to obtain point-of-interest information; the point-of-interest information characterizes the focus position of the eyes on the mobile terminal.
In the foregoing solution, the information generating unit is further configured to generate second feedback information based on the point of interest information and the operation information.
In the foregoing solution, the information generating unit is configured to obtain a corresponding second user experience parameter based on the point-of-interest information in a preset time period and the operation information in the preset time period, and generate second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
In the foregoing aspect, the information generating unit is configured to generate first feedback information based on the facial expression information, the point of interest information, and the operation information.
In the foregoing solution, the information generating unit is configured to obtain a corresponding third user experience parameter based on the facial expression information and the point-of-interest information in a preset time period, in combination with the operation information in the same preset time period, and generate, based on the third user experience parameter, first feedback information of an operation position corresponding to the operation information.
The information processing method and system of the embodiments of the present invention comprise: obtaining operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane; identifying facial feature data in the image data, and obtaining facial expression information based on the facial feature data; and generating first feedback information based on the facial expression information and the operation information. With the technical solution of the embodiments of the present invention, facial expression information of the user is identified and operation information is obtained while the user is using the first application, and first feedback information is generated based on the facial expression information and the operation information as a basis for further optimizing and improving the first application. On one hand, user feedback information is obtained directly and proactively, and the user no longer needs to log in to an application store or a website to comment or score, which greatly improves the user's operation experience; on the other hand, because the technical solution collects and identifies information during the user's actual use, the specific positions in the first application where the user experience is poor can be identified, so that these specific problems can be improved or optimized in subsequent operation and maintenance, providing a detailed basis for operating and maintaining the application.
Drawings
Fig. 1 is a schematic view of an application scenario of an information processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an information processing method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating an information processing method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating an information processing method according to a third embodiment of the present invention;
fig. 5 is a schematic diagram of a configuration of an information processing system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic view of an application scenario of an information processing method according to an embodiment of the present invention; as shown in fig. 1, the scenario includes a server 11 and a mobile terminal 12; the mobile terminal 12 and the server 11 may be connected through a network (e.g., a wired network and/or a wireless network). At least one Application (APP) is pre-installed in the mobile terminal 12. The server 11 may be the server or server cluster serving the at least one application; the server 11 may also be a server or server cluster belonging to a third-party user experience statistics platform.
The technical solutions of the embodiments of the present invention are applied to the server 11 and the mobile terminal 12. When an application is running on the mobile terminal 12, operation information of the user on the application's display interface is obtained and, at the same time, facial feature data of the user are obtained; the operation information and the facial feature data are sent to the server 11 for analysis and processing, so that the server 11 identifies the user's facial expression information from the facial feature data and then obtains first feedback information based on the facial expression information and the operation information. The first feedback information indicates whether the user's facial expression is pleased, calm or dissatisfied at the position indicated by the operation information, so that it can be learned whether that position causes a poor operation experience for the user, i.e., whether it needs to be optimized and improved.
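As a concrete illustration of this client-to-server flow, the following is a minimal sketch of the report a mobile terminal might send to server 11 while the application is running; the field names and JSON encoding are assumptions added for illustration and are not specified by this disclosure.

```python
import json

def build_report(app_id: str, operation_info: dict, facial_feature_data: list[float]) -> bytes:
    """Package the data the terminal sends to server 11 for analysis (assumed format)."""
    return json.dumps({
        "app_id": app_id,                        # identifies the first application
        "operation": operation_info,             # trigger operation on the display interface
        "facial_features": facial_feature_data,  # facial feature data extracted from the camera image
    }).encode("utf-8")
```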
The above example of fig. 1 is only an example of an application architecture for implementing the embodiment of the present invention, and the embodiment of the present invention is not limited to the application architecture described in the above fig. 1, and various embodiments of the present invention are proposed based on the application architecture.
Example one
The embodiment of the invention provides an information processing method. FIG. 2 is a flowchart illustrating an information processing method according to a first embodiment of the present invention; as shown in fig. 2, the information processing method includes:
step 201: obtaining operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane.
The information processing method described in this embodiment is applied to an information processing system; the information processing system may be implemented by a server in this embodiment, but may be implemented by both a mobile terminal and a server in other embodiments. In this step, the obtaining operation information and image data includes: the server obtains operation information and image data of the mobile terminal.
Specifically, when the mobile terminal activates a first application, a display interface representing the first application is output, and a trigger operation aiming at the display interface is detected to obtain operation information; the operation information comprises operation gesture information and operation position information; wherein the operation gesture information comprises: a single tap gesture, a double tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotate gesture, a parameter (e.g., volume parameter, brightness parameter, etc.) adjust gesture, etc.; the operation position information is operation position information of the operation gesture; the operation position information can be specific to a function key. Further, the operational information may also be continuous operational information over a period of time.
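As an illustration only, the operation information described above could be represented by a structure along the following lines; the type names and fields are assumptions for this sketch, not a format defined by the embodiment.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Gesture(Enum):
    SINGLE_TAP = auto()
    DOUBLE_TAP = auto()
    SWIPE = auto()
    DRAG = auto()
    ZOOM = auto()
    ROTATE = auto()
    PARAMETER_ADJUST = auto()          # e.g. a volume or brightness adjustment gesture

@dataclass
class OperationInfo:
    gesture: Gesture                   # operation gesture information
    position: tuple[float, float]      # operation position on the display interface
    function_key: str | None           # function key the position resolves to, if any
    timestamp_ms: int                  # lets operations be grouped over a period of time
```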
When the trigger operation aiming at the display interface is detected, the mobile terminal generates a first instruction, and an image acquisition unit of the mobile terminal is enabled based on the first instruction; in other embodiments, the mobile terminal may also enable the image acquisition unit of the mobile terminal when the first application is activated, or the mobile terminal may also enable the image acquisition unit of the mobile terminal based on the detected trigger instruction, which is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane, and it can be understood that the image acquisition unit can be implemented by a front camera of the mobile terminal.
In this embodiment, the mobile terminal sends the obtained operation information and the corresponding image data to an information processing system.
Step 202: facial feature data in the image data are identified, and facial expression information is obtained based on the facial feature data.
In this step, the information processing system analyzes the image data, first performs pre-processing (e.g., de-noising, normalization of pixel locations or illumination variables) on the image data, and segmentation, localization, or tracking of the face, etc. Further, facial feature data extraction is performed on the image data, including conversion of pixel data into representations of the shape, motion, color, muscle and spatial structure of the face and its components, and the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system, the expression classifier includes a plurality of sets of corresponding relations between facial feature data and expression information, or the expression classifier includes an expression classification model, the facial feature data is input into the expression classifier, and expression information corresponding to the facial feature data is output, that is, facial expression information is obtained based on the facial feature data. The face analysis method and the expression information recognition described in this step may refer to any analysis recognition method in the related art, and are not specifically described in this embodiment.
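A minimal sketch of the preset expression classifier described above: a model (or lookup) that maps extracted facial feature data to one of a small set of expression labels. The label set, the classifier interface and the helper names are assumptions; any expression recognition method from the related art could stand in here.

```python
from typing import Sequence

EXPRESSION_LABELS = ("pleased", "calm", "dissatisfied")   # assumed label set

def classify_expression(facial_features: Sequence[float], classifier) -> str:
    """Feed extracted facial feature data into the preset classifier and return its label."""
    index = classifier.predict([list(facial_features)])[0]   # any predict()-style classification model
    return EXPRESSION_LABELS[index]
```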
Of course, in other embodiments, this step may also be implemented by the mobile terminal, that is, the mobile terminal performs analysis and identification according to the obtained image data to obtain the facial expression information, and then sends the facial expression information to the information processing system. For specific implementation, reference may be made to the above description, which is not repeated here.
Step 203: first feedback information is generated based on the facial expression information and the operation information.
Here, the generating of the first feedback information based on the facial expression information and the operation information includes: obtaining corresponding first user experience parameters based on the facial expression information and/or the operation information, and generating first feedback information of an operation position corresponding to the operation information based on the first user experience parameters.
Specifically, the information processing system stores in advance a plurality of sets of correspondences between facial expression information and first user experience parameters. For example, when the facial expression information is pleased, the corresponding first user experience parameter is 5; when the facial expression information is calm, the corresponding first user experience parameter is 3; when the facial expression information is dissatisfied, the corresponding first user experience parameter is 0. Of course, in other embodiments, the correspondence between facial expression information and the first user experience parameter may be preset in other ways, which is not described in detail in this embodiment. That is to say, the facial expression information can represent the user's experience. When the user experience represented by the facial expression information reaches a first preset threshold, the user experience is good, and first feedback information corresponding to the operation information (including the operation position information) is generated according to the operation information corresponding to the facial expression information; in this case, the first feedback information indicates that the first application brings a good experience to the user at the operation position, and the operation function or content provided at the operation position is worth recommending. Correspondingly, when the user experience represented by the facial expression information does not reach a second preset threshold, which is smaller than the first preset threshold, the user experience is poor, and first feedback information corresponding to the operation information (including the operation position information) is generated according to the operation information corresponding to the facial expression information; in this case, the first feedback information indicates that the first application brings a poor experience to the user at the operation position, and the operation function or content provided at the operation position needs to be further optimized or improved.
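The example correspondences and the two preset thresholds described above might be realized roughly as follows; the concrete threshold values and field names are assumptions added for illustration.

```python
EXPRESSION_TO_FIRST_PARAMETER = {"pleased": 5, "calm": 3, "dissatisfied": 0}

FIRST_PRESET_THRESHOLD = 4    # at or above this the experience is considered good (assumed value)
SECOND_PRESET_THRESHOLD = 2   # below this the experience is considered poor (assumed value)

def first_feedback(expression: str, operation_position) -> dict | None:
    """Generate first feedback information for the operation position (sketch)."""
    parameter = EXPRESSION_TO_FIRST_PARAMETER[expression]   # first user experience parameter
    if parameter >= FIRST_PRESET_THRESHOLD:
        return {"position": operation_position, "parameter": parameter,
                "verdict": "good experience, worth recommending"}
    if parameter < SECOND_PRESET_THRESHOLD:
        return {"position": operation_position, "parameter": parameter,
                "verdict": "poor experience, needs optimization"}
    return None   # neutral experience: no feedback generated in this sketch
```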
As another embodiment, the information processing system obtains the corresponding first user experience parameter based on a combination of the facial expression information and the operation information. For example, when the facial expression information is pleased and the number of operations contained in the operation information is smaller than a first threshold, the corresponding first user experience parameter is 5; when the facial expression information is calm and the number of operations contained in the operation information is greater than the first threshold and less than a second threshold, the corresponding first user experience parameter is 3; and when the facial expression information is dissatisfied and the number of operations contained in the operation information is greater than the second threshold, the corresponding first user experience parameter is 0. Of course, in other embodiments, the correspondence between the facial expression information, the operation information and the first user experience parameter may be preset in other ways, which is not described in detail in this embodiment. That is to say, the combination of the facial expression information and the operation information can represent the user's experience. For example, in one scenario, a user who wants to find a function entry through an input operation only sees the entry after many sliding operations, and shows an unpleasant expression at that moment; in such a scenario, the information processing system obtains facial expression information representing an unpleasant expression and operation information containing multiple sliding operations, combines the two, and obtains a corresponding first user experience parameter of 0, which indicates that the user experience is poor at the operation position corresponding to the operation information and that this position needs to be optimized or improved.
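A sketch of this combined mapping, in which the facial expression and the number of operations jointly determine the first user experience parameter; the two threshold values below are illustrative assumptions.

```python
FIRST_COUNT_THRESHOLD = 3    # few operations: the user found what they wanted quickly (assumed)
SECOND_COUNT_THRESHOLD = 8   # many operations: the user is visibly struggling (assumed)

def combined_first_parameter(expression: str, operation_count: int) -> int:
    """First user experience parameter from expression plus operation count (sketch)."""
    if expression == "pleased" and operation_count < FIRST_COUNT_THRESHOLD:
        return 5
    if expression == "calm" and FIRST_COUNT_THRESHOLD < operation_count < SECOND_COUNT_THRESHOLD:
        return 3
    if expression == "dissatisfied" and operation_count > SECOND_COUNT_THRESHOLD:
        return 0
    return 3   # combinations not listed in the example fall back to a neutral score
```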
By adopting the technical scheme of the embodiment of the invention, the facial expression information of the user is identified and the operation information is obtained in the process that the user uses the first application, and the first feedback information is generated based on the facial expression information and the operation information and is used as a basis for further optimizing and modifying the first application, so that on one hand, the direct and active acquisition of the user feedback information is realized, the user does not need to log in an application store or a website for commenting or grading, and the operation experience of the user is greatly improved; on the other hand, the technical scheme of the embodiment of the invention collects and identifies information in the using process of the user, so that the specific position of poor user experience in the first application can be known conveniently, the specific problems can be perfected or optimized in the subsequent operation and maintenance process, and a detailed basis is provided for the operation and maintenance of the application.
Example two
The embodiment of the invention provides an information processing method. FIG. 3 is a flowchart illustrating an information processing method according to a second embodiment of the present invention; as shown in fig. 3, the information processing method includes:
step 301: obtaining operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane.
The information processing method described in this embodiment is applied to an information processing system; the information processing system may be implemented by a server in this embodiment, but may be implemented by both a mobile terminal and a server in other embodiments. In this step, the obtaining operation information and image data includes: the information processing system obtains operation information and image data of the mobile terminal.
Specifically, when the mobile terminal activates a first application, a display interface representing the first application is output, and a trigger operation aiming at the display interface is detected to obtain operation information; the operation information comprises operation gesture information and operation position information; wherein the operation gesture information comprises: a single tap gesture, a double tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotate gesture, a parameter (e.g., volume parameter, brightness parameter, etc.) adjust gesture, etc.; the operation position information is operation position information of the operation gesture; the operation position information can be specific to a function key. Further, the operational information may also be continuous operational information over a period of time.
When the trigger operation aiming at the display interface is detected, the mobile terminal generates a first instruction, and an image acquisition unit of the mobile terminal is enabled based on the first instruction; in other embodiments, the mobile terminal may also enable the image acquisition unit of the mobile terminal when the first application is activated, or the mobile terminal may also enable the image acquisition unit of the mobile terminal based on the detected trigger instruction, which is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane, and it can be understood that the image acquisition unit can be implemented by a front camera of the mobile terminal.
In this embodiment, the mobile terminal sends the obtained operation information and the corresponding image data to an information processing system.
Step 302: identifying facial feature data in the image data; and obtaining relative position information and direction information of the mobile terminal itself.
In this step, the information processing system analyzes the image data, first performs pre-processing (e.g., de-noising, normalization of pixel locations or illumination variables) on the image data, and segmentation, localization, or tracking of the face, etc. Further, extracting facial feature data from the image data, including converting pixel data into representations of the shape, motion, color, muscle and spatial structure of the face and its components; the specific manner of extracting the facial feature data may refer to any face recognition manner in the prior art, and is not specifically described in this embodiment.
In this embodiment, the information processing system obtains relative position information and direction information of the mobile terminal itself; the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, at least one of the following sensing units is arranged in the mobile terminal: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and the like. The mobile terminal can obtain the direction information through the gravity sensing unit or the acceleration sensing unit; the direction information can be the included angle between the direction of gravity and the long-side direction or the short-side direction of the mobile terminal, and it also reflects the posture change of the mobile terminal. The mobile terminal may further obtain the relative position information between the mobile terminal and its holder through the distance sensing unit or the iris recognition unit; the distance sensing unit and the iris recognition unit are generally disposed on the same plane as the display unit of the mobile terminal, so that when a user holds the mobile terminal, the distance to the user may be detected through the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes may be recognized through the iris recognition unit.
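A rough sketch of how these sensing units could be read to yield the direction information and the relative position information; the sensor interfaces and names are hypothetical placeholders, not APIs of any particular platform.

```python
import math
from dataclasses import dataclass

@dataclass
class TerminalPose:
    tilt_deg: float      # angle between the direction of gravity and the terminal's long side
    distance_cm: float   # distance between the terminal and the holding user

def read_terminal_pose(gravity_sensor, distance_sensor) -> TerminalPose:
    """Obtain direction information and relative position information (sketch)."""
    gx, gy, _ = gravity_sensor.read()           # gravity vector in device coordinates (assumed API)
    tilt = math.degrees(math.atan2(gx, gy))     # angle relative to the long-side axis
    return TerminalPose(tilt_deg=tilt, distance_cm=distance_sensor.read())
```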
Step 303: associating eye feature data in the facial feature data with the relative position information and the direction information to obtain point-of-interest information; the point-of-interest information characterizes the focus position of the eyes on the mobile terminal.
In this embodiment, the information processing system associates eye feature data included in the facial feature data with the relative position information of the mobile terminal itself and the direction information to obtain point-of-interest information, where the point-of-interest information represents focused position information of an eye of a holding user of the mobile terminal on the mobile terminal, and may also be understood as position information of content browsed by the eye of the holding user. Specifically, the information processing system may obtain gaze direction information of the eyes of the holding user based on the eye feature data, further determine a relative positional relationship between the mobile terminal and the holding user based on the relative positional information of the mobile terminal itself and the direction information, obtain a focus range in which the eyes of the holding user are focused on the mobile terminal based on the gaze direction information and the relative positional relationship, and generate the point-of-interest information based on the focus range.
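The association just described can be pictured as intersecting the gaze direction with the display plane. The simplified geometry below is an assumption-laden sketch of that idea, reusing the hypothetical TerminalPose from the earlier sketch; the gaze component toward the screen is assumed non-zero.

```python
def point_of_interest(gaze_direction, pose, screen_size_cm):
    """Project the gaze ray onto the display plane at the measured distance (sketch)."""
    dx, dy, dz = gaze_direction                  # unit gaze vector in the terminal's frame
    x = pose.distance_cm * dx / dz               # horizontal focus position on the display
    y = pose.distance_cm * dy / dz               # vertical focus position on the display
    width, height = screen_size_cm
    # Clamp so the focus range stays within the terminal's display.
    return (min(max(x, 0.0), width), min(max(y, 0.0), height))
```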
Step 304: and generating second feedback information based on the point of interest information and the operation information.
Here, the generating second feedback information based on the point-of-interest information and the operation information includes: obtaining a corresponding second user experience parameter based on the point-of-interest information in a preset time period and the operation information in the same preset time period, and generating second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
Specifically, the information processing system obtains a corresponding second user experience parameter based on a combination of the point-of-interest information and the operation information within a preset time period. For example, within a preset time period t, when the proportion of the range over which the point-of-interest information (i.e., the focus position of the holding user's eyes on the mobile terminal) changes is greater than a first threshold and the number of operations contained in the operation information is greater than a second threshold, the corresponding second user experience parameter is 0; when the proportion of the range of change of the point-of-interest information is greater than a third threshold and smaller than the first threshold (the third threshold is smaller than the first threshold), and the number of operations contained in the operation information is smaller than the second threshold and greater than a fourth threshold (the fourth threshold is smaller than the second threshold), the corresponding second user experience parameter is 3; when the point-of-interest information does not change, or the proportion of its range of change is smaller than the third threshold, and the number of operations contained in the operation information is smaller than the fourth threshold, the corresponding second user experience parameter is 5. Of course, in other embodiments, the information processing system may obtain the point-of-interest information by associating the eye feature data, the relative position information and the direction information in other ways, that is, the focus position of the user's eyes on the mobile terminal may be obtained from the eye feature data, the relative position information and the direction information using any image recognition and modeling technology in the prior art, which is not described in detail in this embodiment. For example, in one scenario, a user who wants to reach a function entry through an input operation searches everywhere on the screen for its position; the proportion of the range of change of the user's point-of-interest information obtained by the information processing system is then greater than the first threshold. Correspondingly, while searching for the position of the function entry the user performs multiple trigger operations, i.e., the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold. In this scenario it takes the user a long time to find the function entry, which indicates that the application provides a poor operation experience at the current operation position and needs to be optimized or improved.
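Under the assumed threshold values below, the second user experience parameter logic sketched above could look like this; the numeric thresholds are illustrative only.

```python
FIRST_RATIO_THRESHOLD = 0.6    # gaze sweeps most of the screen (assumed)
THIRD_RATIO_THRESHOLD = 0.2    # gaze stays almost still (assumed; smaller than the first threshold)
SECOND_COUNT_THRESHOLD = 10    # many trigger operations (assumed)
FOURTH_COUNT_THRESHOLD = 3     # few trigger operations (assumed; smaller than the second threshold)

def second_parameter(gaze_change_ratio: float, operation_count: int) -> int:
    """Second user experience parameter from gaze sweep and operation count (sketch)."""
    if gaze_change_ratio > FIRST_RATIO_THRESHOLD and operation_count > SECOND_COUNT_THRESHOLD:
        return 0   # eyes roam widely and many operations: the user is searching in vain
    if (THIRD_RATIO_THRESHOLD < gaze_change_ratio < FIRST_RATIO_THRESHOLD
            and FOURTH_COUNT_THRESHOLD < operation_count < SECOND_COUNT_THRESHOLD):
        return 3
    if gaze_change_ratio < THIRD_RATIO_THRESHOLD and operation_count < FOURTH_COUNT_THRESHOLD:
        return 5   # steady gaze and few operations: the entry was found immediately
    return 3       # combinations not covered by the example default to a neutral score
```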
By adopting the technical scheme of the embodiment of the invention, the attention point information of the user is identified and the operation information is obtained in the process that the user uses the first application, and the second feedback information is generated based on the attention point information and the operation information, so that the second feedback information is used as a basis for further optimizing and modifying the first application, therefore, on one hand, the direct and active acquisition of the user feedback information is realized, the user does not need to log in an application store or a website for commenting or grading, and the operation experience of the user is greatly improved; on the other hand, the technical scheme of the embodiment of the invention collects and identifies information in the using process of the user, so that the specific position of poor user experience in the first application can be known conveniently, the specific problems can be perfected or optimized in the subsequent operation and maintenance process, and a detailed basis is provided for the operation and maintenance of the application.
EXAMPLE III
The embodiment of the invention provides an information processing method. FIG. 4 is a flowchart illustrating an information processing method according to a third embodiment of the present invention; as shown in fig. 4, the information processing method includes:
step 401: obtaining operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane.
The information processing method described in this embodiment is applied to an information processing system; the information processing system may be implemented by a server in this embodiment, but may be implemented by both a mobile terminal and a server in other embodiments. In this step, the obtaining operation information and image data includes: the information processing system obtains operation information and image data of the mobile terminal.
Specifically, when the mobile terminal activates a first application, a display interface representing the first application is output, and a trigger operation aiming at the display interface is detected to obtain operation information; the operation information comprises operation gesture information and operation position information; wherein the operation gesture information comprises: a single tap gesture, a double tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotate gesture, a parameter (e.g., volume parameter, brightness parameter, etc.) adjust gesture, etc.; the operation position information is operation position information of the operation gesture; the operation position information can be specific to a function key. Further, the operational information may also be continuous operational information over a period of time.
When the trigger operation aiming at the display interface is detected, the mobile terminal generates a first instruction, and an image acquisition unit of the mobile terminal is enabled based on the first instruction; in other embodiments, the mobile terminal may also enable the image acquisition unit of the mobile terminal when the first application is activated, or the mobile terminal may also enable the image acquisition unit of the mobile terminal based on the detected trigger instruction, which is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane, and it can be understood that the image acquisition unit can be implemented by a front camera of the mobile terminal.
In this embodiment, the mobile terminal sends the obtained operation information and the corresponding image data to an information processing system.
Step 402: facial feature data in the image data are identified, and facial expression information is obtained based on the facial feature data.
In this step, the information processing system analyzes the image data, first performs pre-processing (e.g., de-noising, normalization of pixel locations or illumination variables) on the image data, and segmentation, localization, or tracking of the face, etc. Further, facial feature data extraction is performed on the image data, including conversion of pixel data into representations of the shape, motion, color, muscle and spatial structure of the face and its components, and the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system, the expression classifier includes a plurality of sets of corresponding relations between facial feature data and expression information, or the expression classifier includes an expression classification model, the facial feature data is input into the expression classifier, and expression information corresponding to the facial feature data is output, that is, facial expression information is obtained based on the facial feature data. The face analysis method and the expression information recognition described in this step may refer to any analysis recognition method in the related art, and are not specifically described in this embodiment.
Of course, in other embodiments, this step may also be implemented by the mobile terminal, that is, the mobile terminal performs analysis and identification according to the obtained image data to obtain the facial expression information, and then sends the facial expression information to the information processing system. For specific implementation, reference may be made to the above description, which is not repeated here.
Step 403: obtaining relative position information and direction information of the mobile terminal itself; associating eye feature data in the facial feature data with the relative position information and the direction information to obtain point-of-interest information; the point-of-interest information characterizes the focus position of the eyes on the mobile terminal.
In this embodiment, the information processing system obtains relative position information and direction information of the mobile terminal itself; the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, at least one of the following sensing units is arranged in the mobile terminal: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and the like. The mobile terminal can obtain the direction information through the gravity sensing unit or the acceleration sensing unit; the direction information can be the included angle between the direction of gravity and the long-side direction or the short-side direction of the mobile terminal, and it also reflects the posture change of the mobile terminal. The mobile terminal may further obtain the relative position information between the mobile terminal and its holder through the distance sensing unit or the iris recognition unit; the distance sensing unit and the iris recognition unit are generally disposed on the same plane as the display unit of the mobile terminal, so that when a user holds the mobile terminal, the distance to the user may be detected through the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes may be recognized through the iris recognition unit.
Further, the information processing system associates eye feature data included in the facial feature data, relative position information of the mobile terminal itself, and the direction information to obtain point-of-interest information, where the point-of-interest information represents focused position information of an eye of a holding user of the mobile terminal on the mobile terminal, and may also be understood as position information of content browsed by the eye of the holding user. Specifically, the information processing system may obtain gaze direction information of the eyes of the holding user based on the eye feature data, further determine a relative positional relationship between the mobile terminal and the holding user based on the relative positional information of the mobile terminal itself and the direction information, obtain a focus range in which the eyes of the holding user are focused on the mobile terminal based on the gaze direction information and the relative positional relationship, and generate the point-of-interest information based on the focus range.
Step 404: generating first feedback information based on the facial expression information, the point of interest information, and the operation information.
Here, the generating first feedback information based on the facial expression information, the point of interest information, and the operation information includes:
obtaining a corresponding third user experience parameter based on the facial expression information and the point-of-interest information in a preset time period, in combination with the operation information in the same preset time period, and generating first feedback information of an operation position corresponding to the operation information based on the third user experience parameter.
Specifically, the information processing system obtains a corresponding third user experience parameter based on the facial expression information and the point-of-interest information in a preset time period, in combination with the operation information in the same preset time period. For example, within a preset time period t, when the obtained facial expression information is pleased, the point-of-interest information (i.e., the focus position of the user's eyes on the mobile terminal) does not change or the proportion of its range of change is smaller than a third threshold, and the number of operations contained in the operation information is smaller than a fourth threshold, the corresponding third user experience parameter is 5; or, when the obtained facial expression information is calm, the proportion of the range of change of the point-of-interest information is greater than the third threshold and smaller than a first threshold (the third threshold is smaller than the first threshold), and the number of operations contained in the operation information is smaller than a second threshold and greater than the fourth threshold (the fourth threshold is smaller than the second threshold), the corresponding third user experience parameter is 3; or, when the obtained facial expression information is dissatisfied, the proportion of the range of change of the point-of-interest information is greater than the first threshold, and the number of operations contained in the operation information is greater than the second threshold, the corresponding third user experience parameter is 0. Of course, the information processing system may also obtain the third user experience parameter in other ways, which are not described in detail in this embodiment. For example, in one scenario, a user who wants to reach a function entry through an input operation searches everywhere on the screen for its position; the proportion of the range of change of the user's point-of-interest information obtained by the information processing system is then greater than the first threshold. Correspondingly, while searching for the position of the function entry the user performs multiple trigger operations, i.e., the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold. Correspondingly, having spent a long time searching for the position of the function entry, the user shows an unpleasant expression. In this scenario, based on the analysis of the obtained facial expression information, point-of-interest information and operation information, the information processing system determines that the current user experience is poor, i.e., it takes the user a long time to find the function entry, which indicates that the application provides a poor operation experience at the current operation position and needs to be optimized or improved.
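Combining all three signals, the third user experience parameter sketched above could be computed roughly as follows; the thresholds reuse the assumed values from the earlier sketches and are illustrative only.

```python
def third_parameter(expression: str, gaze_change_ratio: float, operation_count: int) -> int:
    """Third user experience parameter from expression, gaze sweep and operation count (sketch)."""
    if expression == "pleased" and gaze_change_ratio < 0.2 and operation_count < 3:
        return 5
    if expression == "calm" and 0.2 < gaze_change_ratio < 0.6 and 3 < operation_count < 10:
        return 3
    if expression == "dissatisfied" and gaze_change_ratio > 0.6 and operation_count > 10:
        return 0
    return 3   # combinations not covered by the example default to a neutral score
```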
By adopting the technical scheme of the embodiment of the invention, the facial expression information of the user, the focus point information of the user and the acquired operation information are identified in the process that the user uses the first application, and the first feedback information is generated based on the facial expression information, the focus point information and the operation information, so that the first feedback information is used as a basis for further optimizing and modifying the first application, therefore, on one hand, the direct and active acquisition of the feedback information of the user is realized, the user does not need to log in an application store or a website for commenting or scoring, and the operation experience of the user is greatly improved; on the other hand, the technical scheme of the embodiment of the invention collects and identifies information in the using process of the user, so that the specific position of poor user experience in the first application can be known conveniently, the specific problems can be perfected or optimized in the subsequent operation and maintenance process, and a detailed basis is provided for the operation and maintenance of the application.
Example four
The embodiment of the invention also provides an information processing system. FIG. 5 is a block diagram of an information processing system according to an embodiment of the present invention; as shown in fig. 5, the system includes: an acquisition unit 51, an image processing unit 52, and an information generation unit 53; wherein:
the acquiring unit 51 is configured to acquire operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane;
the image processing unit 52 is configured to identify facial feature data in the image data obtained by the obtaining unit 51, and obtain facial expression information based on the facial feature data;
the information generating unit 53 is configured to generate first feedback information based on the facial expression information obtained by the image processing unit 52 and the operation information obtained by the obtaining unit 51.
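As an illustration only, the composition of the system in fig. 5 could be mirrored by a structure along these lines; class and method names are assumptions, not part of the disclosed embodiment.

```python
class InformationProcessingSystem:
    """Wires together the acquisition unit 51, image processing unit 52 and information generation unit 53 (sketch)."""

    def __init__(self, acquisition_unit, image_processing_unit, information_generation_unit):
        self.acquisition_unit = acquisition_unit
        self.image_processing_unit = image_processing_unit
        self.information_generation_unit = information_generation_unit

    def process_once(self):
        # Obtain operation information and image data reported by the mobile terminal.
        operation_info, image_data = self.acquisition_unit.obtain()
        # Identify facial feature data and derive facial expression information.
        expression = self.image_processing_unit.recognize_expression(image_data)
        # Generate the first feedback information for the corresponding operation position.
        return self.information_generation_unit.first_feedback(expression, operation_info)
```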
The information generating unit 53 is configured to obtain a corresponding first user experience parameter based on the facial expression information, and generate first feedback information of an operation position corresponding to the operation information based on the first user experience parameter.
In this embodiment, when the mobile terminal activates the first application, a display interface representing the first application is output, and a trigger operation for the display interface is detected to obtain operation information; the operation information comprises operation gesture information and operation position information; wherein the operation gesture information comprises: a single tap gesture, a double tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotate gesture, a parameter (e.g., volume parameter, brightness parameter, etc.) adjust gesture, etc.; the operation position information is operation position information of the operation gesture; the operation position information can be specific to a function key. Further, the operational information may also be continuous operational information over a period of time.
When the trigger operation aiming at the display interface is detected, the mobile terminal generates a first instruction, and an image acquisition unit of the mobile terminal is enabled based on the first instruction; in other embodiments, the mobile terminal may also enable the image acquisition unit of the mobile terminal when the first application is activated, or the mobile terminal may also enable the image acquisition unit of the mobile terminal based on the detected trigger instruction, which is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane, and it can be understood that the image acquisition unit can be implemented by a front camera of the mobile terminal. Further, the mobile terminal sends the obtained operation information and the corresponding image data to an information processing system.
In this embodiment, the image processing unit 52 analyzes the image data, first performs preprocessing (such as denoising, normalization of pixel positions or illumination variables), and segmentation, localization, or tracking of a face. Further, facial feature data extraction is performed on the image data, including conversion of pixel data into representations of the shape, motion, color, muscle and spatial structure of the face and its components, and the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system, the expression classifier includes a plurality of sets of corresponding relations between facial feature data and expression information, or the expression classifier includes an expression classification model, the facial feature data is input into the expression classifier, and expression information corresponding to the facial feature data is output, that is, facial expression information is obtained based on the facial feature data. The face analysis method and the expression information recognition described in the present embodiment may refer to any analysis recognition method in the related art, and are not specifically described in the present embodiment.
In this embodiment, the information generating unit 53 stores in advance a plurality of sets of correspondences between facial expression information and first user experience parameters. For example, when the facial expression information is pleased, the corresponding first user experience parameter is 5; when the facial expression information is calm, the corresponding first user experience parameter is 3; when the facial expression information is dissatisfied, the corresponding first user experience parameter is 0. Of course, in other embodiments, the correspondence between facial expression information and the first user experience parameter may be preset in other ways, which is not described in detail in this embodiment. That is to say, the facial expression information can represent the user's experience. When the user experience represented by the facial expression information reaches a first preset threshold, the user experience is good, and first feedback information corresponding to the operation information (including the operation position information) is generated according to the operation information corresponding to the facial expression information; in this case, the first feedback information indicates that the first application brings a good experience to the user at the operation position, and the operation function or content provided at the operation position is worth recommending. Correspondingly, when the user experience represented by the facial expression information does not reach a second preset threshold, which is smaller than the first preset threshold, the user experience is poor, and first feedback information corresponding to the operation information (including the operation position information) is generated according to the operation information corresponding to the facial expression information; in this case, the first feedback information indicates that the first application brings a poor experience to the user at the operation position, and the operation function or content provided at the operation position needs to be further optimized or improved.
As another embodiment, the information generating unit 53 obtains the corresponding first user experience parameter based on a combination of the facial expression information and the operation information. For example, when the facial expression information is joyful and the number of operations contained in the operation information is smaller than a first threshold, the corresponding first user experience parameter is 5; when the facial expression information is calm and the number of operations contained in the operation information is greater than the first threshold and less than a second threshold, the corresponding first user experience parameter is 3; and when the facial expression information is dissatisfied and the number of operations contained in the operation information is greater than the second threshold, the corresponding first user experience parameter is 0. Of course, in other embodiments, the facial expression information and the first user experience parameter may be related in other preset ways, which are not described in detail here. That is to say, the combination of the facial expression information and the operation information can represent the user's experience. For example, in one scenario a user who wants to find a function entry through input operations only reaches it after a number of sliding operations, and at that moment may show a displeased expression. In such a scenario, the information processing system obtains facial expression information representing a displeased expression and operation information containing multiple sliding operations; combining the two yields a corresponding first user experience parameter of 0, indicating that the user experience at the operation position corresponding to the operation information is poor and needs to be optimized or improved.
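A minimal sketch of the combined expression-plus-operation mapping (Python), assuming illustrative values for the operation-count thresholds:

# Hypothetical sketch: combine facial expression information with the number of
# operations to obtain the first user experience parameter. Threshold values
# are illustrative assumptions.
FIRST_THRESHOLD = 3    # "few" operations
SECOND_THRESHOLD = 8   # "many" operations

def first_experience_parameter(expression, operation_count):
    if expression == "joyful" and operation_count < FIRST_THRESHOLD:
        return 5
    if expression == "calm" and FIRST_THRESHOLD < operation_count < SECOND_THRESHOLD:
        return 3
    if expression == "dissatisfied" and operation_count > SECOND_THRESHOLD:
        return 0
    return 3  # neutral fallback for other combinations (assumption)

# Example: a displeased expression after many sliding operations
print(first_experience_parameter("dissatisfied", 12))  # -> 0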
It should be understood by those skilled in the art that the functions of the processing modules in the information processing system according to the embodiment of the present invention may be understood by referring to the description of the information processing method, and the processing modules in the information processing system according to the embodiment of the present invention may be implemented by analog circuits that implement the functions described in the embodiment of the present invention, or by running software that performs the functions described in the embodiment of the present invention on an intelligent terminal.
EXAMPLE five
An embodiment of the present invention further provides an information processing system, and as shown in fig. 5, the system includes: an acquisition unit 51, an image processing unit 52, and an information generation unit 53; wherein:
the acquiring unit 51 is configured to acquire operation information and image data; the operation information is obtained by detecting a trigger operation on a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane; the acquiring unit 51 is also configured to obtain the position information and the direction information of the mobile terminal itself;
the image processing unit 52 is configured to identify facial feature data in the image data obtained by the obtaining unit 51, associate eye feature data in the facial feature data with the position information and the direction information of the mobile terminal itself obtained by the obtaining unit 51, and obtain point-of-interest information; the point-of-interest information represents focusing position information of the eyes on the mobile terminal;
the information generating unit 53 is configured to generate second feedback information based on the point-of-interest information obtained by the image processing unit 52 and the operation information obtained by the obtaining unit 51.
The information generating unit 53 is configured to obtain a corresponding second user experience parameter based on the point of interest information in a preset time period and the operation information in the preset time period, and generate second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
In this embodiment, when the mobile terminal activates the first application, a display interface representing the first application is output, and a trigger operation on the display interface is detected to obtain operation information. The operation information includes operation gesture information and operation position information. The operation gesture information includes: a single tap gesture, a double tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotate gesture, a parameter (e.g., volume, brightness) adjustment gesture, and the like; the operation position information is the position at which the operation gesture is performed and may be specific down to a function key. Further, the operation information may also be continuous operation information over a period of time. When the trigger operation on the display interface is detected, the mobile terminal generates a first instruction, and the image acquisition unit of the mobile terminal is enabled based on the first instruction; in other embodiments, the mobile terminal may also enable the image acquisition unit when the first application is activated, or may enable it based on a detected trigger instruction, which is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane; it can be understood that the image acquisition unit can be implemented by a front camera of the mobile terminal.
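A minimal sketch of how the operation information described above might be represented (Python); the gesture names and field names are illustrative assumptions rather than a defined format:

# Hypothetical sketch: a record of one trigger operation detected on the display
# interface of the first application. Field and gesture names are assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

GESTURES = {"single_tap", "double_tap", "swipe", "drag", "zoom", "rotate", "adjust_parameter"}

@dataclass
class OperationInfo:
    gesture: str                         # one of GESTURES
    position: Tuple[int, int]            # operation position on the display, in pixels
    function_key: Optional[str] = None   # function key at that position, if any
    timestamp_ms: int = 0                # allows assembling continuous operation info over a period

op = OperationInfo(gesture="swipe", position=(300, 900), timestamp_ms=1700000000000)
print(op)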
The image processing unit 52 analyzes the image data by first performing preprocessing (such as denoising and normalization of pixel position or illumination variation), followed by segmentation, localization, or tracking of the face. Facial feature data is then extracted from the image data, i.e. the pixel data is converted into representations of the shape, motion, color, muscle activity, and spatial structure of the face and its components. The specific manner of extracting the facial feature data may follow any face recognition method in the prior art and is not described in detail in this embodiment.
In this embodiment, the obtaining unit 51 obtains the relative position information and the direction information of the mobile terminal; the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, at least one of the following sensing units is arranged in the mobile terminal: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and the like. The mobile terminal can obtain the direction information through the gravity sensing unit or the acceleration sensing unit; the direction information can be the included angle between the direction of gravity and the long-side direction or the short-side direction of the mobile terminal, i.e. the posture information of the mobile terminal. The mobile terminal may further obtain the relative position information between the mobile terminal and the holder through the distance sensing unit or the iris recognition unit; the distance sensing unit and the iris recognition unit are generally disposed on the same plane as the display unit of the mobile terminal, so that when a user holds the mobile terminal, the distance to the user can be detected through the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes can be recognized through the iris recognition unit.
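A minimal sketch of deriving these two pieces of information (Python); the sensor readings are passed in as plain numbers because the actual sensing-unit interfaces are platform-specific, and the axis convention used here is an assumption:

# Hypothetical sketch: derive direction information (device attitude) from a
# gravity/acceleration reading, and package relative position information from a
# distance-sensor or iris-recognition result. Conventions are assumptions.
import math

def direction_from_gravity(gx, gy, gz):
    """Angle (degrees) between the gravity vector and the device's long-side axis."""
    # Assumes the long side of the terminal is the device's y axis.
    g_norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    return math.degrees(math.acos(gy / g_norm)) if g_norm else 0.0

def relative_position(distance_cm, eye_offset_xy):
    """Combine the sensed distance to the holder with the eyes' offset from the screen normal."""
    return {"distance_cm": distance_cm, "eye_offset_xy": eye_offset_xy}

print(direction_from_gravity(0.0, 9.5, 2.0))    # device tilted slightly back
print(relative_position(32.0, (-1.5, 4.0)))     # eyes about 32 cm away, above and to the left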
In this embodiment, the image processing unit 52 obtains the point-of-interest information by associating the eye feature data included in the facial feature data with the relative position information and the direction information of the mobile terminal itself; the point-of-interest information represents the position on the mobile terminal on which the eyes of the holding user are focused, and may also be understood as the position of the content being browsed by the holding user's eyes. Specifically, the information processing system may obtain gaze direction information of the holding user's eyes based on the eye feature data, determine the relative positional relationship between the mobile terminal and the holding user based on the relative position information and the direction information of the mobile terminal itself, obtain the focusing range in which the holding user's eyes are focused on the mobile terminal based on the gaze direction information and the relative positional relationship, and generate the point-of-interest information based on the focusing range.
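A minimal sketch of this gaze-to-screen association (Python), assuming a simple intersection of the gaze ray with the display plane; the coordinate conventions and numbers are assumptions for illustration only:

# Hypothetical sketch: intersect the holding user's gaze direction with the
# display plane to obtain point-of-interest information. Coordinate conventions
# are assumptions for illustration only.
def point_of_interest(gaze_dir, eye_pos, screen_size):
    """
    gaze_dir:    (dx, dy, dz) gaze direction in device coordinates, +z toward the screen.
    eye_pos:     (x, y, z) eye position relative to the screen centre, in cm
                 (derived from the relative position and direction information).
    screen_size: (width_cm, height_cm) of the display unit.
    Returns the focusing position on the screen, clamped to the screen bounds.
    """
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None                     # gaze parallel to the screen: no focus point
    t = -ez / dz                        # distance along the gaze ray to the screen plane z = 0
    x, y = ex + t * dx, ey + t * dy     # intersection point on the display plane
    w, h = screen_size
    return (max(-w / 2, min(w / 2, x)), max(-h / 2, min(h / 2, y)))

print(point_of_interest(gaze_dir=(0.05, -0.10, 1.0), eye_pos=(0.0, 4.0, -30.0), screen_size=(7.0, 14.0)))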
In this embodiment, the information generating unit 53 obtains the corresponding second user experience parameter based on a combination of the point-of-interest information and the operation information within a preset time period. For example, within a preset time period t, when the proportion of the range over which the point-of-interest information changes (i.e. the focusing position of the holding user's eyes on the mobile terminal) is greater than a first threshold and the number of operations contained in the operation information is greater than a second threshold, the corresponding second user experience parameter is 0; when the proportion of the range of change of the point-of-interest information is greater than a third threshold and smaller than the first threshold (the third threshold is smaller than the first threshold), and the number of operations contained in the operation information is smaller than the second threshold and greater than a fourth threshold (the fourth threshold is smaller than the second threshold), the corresponding second user experience parameter is 3; when the point-of-interest information does not change, or the proportion of its range of change is smaller than the third threshold, and the number of operations contained in the operation information is smaller than the fourth threshold, the corresponding second user experience parameter is 5. Of course, in other embodiments, the manner in which the information processing system associates the eye feature data, the relative position information of the mobile terminal itself, and the direction information to obtain the point-of-interest information (i.e. the focusing position of the holding user's eyes on the mobile terminal) may follow any image recognition and modeling technology in the prior art, which is not described in detail in this embodiment. For example, in one scenario a user who wants to find a function entry through input operations searches all over the interface for its position, so the proportion of the range of change of the point-of-interest information obtained by the information processing system is greater than the first threshold; correspondingly, the user performs multiple trigger operations while searching for the function entry, so the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold. In this scenario, it takes the user a long time to find the function entry, which indicates that the application provides a poor operation experience at the current operation position and needs to be optimized or improved.
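A minimal sketch of this gaze-plus-operation scoring rule (Python); all threshold values are assumptions chosen only to make the example concrete:

# Hypothetical sketch: obtain the second user experience parameter from the
# proportion of the screen over which the point of interest changed and the
# number of operations within a preset time period. Thresholds are assumptions.
FIRST_THRESHOLD = 0.6    # large gaze-change proportion
THIRD_THRESHOLD = 0.2    # small gaze-change proportion (smaller than the first threshold)
SECOND_THRESHOLD = 10    # many operations
FOURTH_THRESHOLD = 3     # few operations (smaller than the second threshold)

def second_experience_parameter(gaze_change_ratio, operation_count):
    if gaze_change_ratio > FIRST_THRESHOLD and operation_count > SECOND_THRESHOLD:
        return 0   # eyes wandered widely and many operations: poor experience
    if THIRD_THRESHOLD < gaze_change_ratio < FIRST_THRESHOLD and \
            FOURTH_THRESHOLD < operation_count < SECOND_THRESHOLD:
        return 3   # moderate searching: middling experience
    if gaze_change_ratio <= THIRD_THRESHOLD and operation_count < FOURTH_THRESHOLD:
        return 5   # steady gaze and few operations: good experience
    return 3       # neutral fallback for other combinations (assumption)

print(second_experience_parameter(0.75, 14))   # -> 0: user hunted for a function entry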
It should be understood by those skilled in the art that the functions of the processing modules in the information processing system according to the embodiment of the present invention may be understood by referring to the description of the information processing method, and the processing modules in the information processing system according to the embodiment of the present invention may be implemented by analog circuits that implement the functions described in the embodiment of the present invention, or by running software that performs the functions described in the embodiment of the present invention on an intelligent terminal.
EXAMPLE six
An embodiment of the present invention further provides an information processing system, and as shown in fig. 5, the system includes: an acquisition unit 51, an image processing unit 52, and an information generation unit 53; wherein:
the acquiring unit 51 is configured to acquire operation information and image data; the operation information is obtained by detecting a trigger operation on a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane; the acquiring unit 51 is also configured to obtain the position information and the direction information of the mobile terminal itself;
the image processing unit 52 is configured to identify facial feature data in the image data obtained by the obtaining unit 51, and obtain facial expression information based on the facial feature data; it is further configured to associate the eye feature data in the facial feature data with the position information and the direction information of the mobile terminal obtained by the obtaining unit 51 to obtain point-of-interest information; the point-of-interest information represents focusing position information of the eyes on the mobile terminal;
the information generating unit 53 is configured to generate first feedback information based on the facial expression information obtained by the image processing unit 52, the point of interest information, and the operation information obtained by the obtaining unit 51.
The information generating unit 53 is configured to obtain a corresponding third user experience parameter based on the facial expression information and the point-of-interest information within a preset time period, in combination with the operation information within the preset time period, and to generate first feedback information of the operation position corresponding to the operation information based on the third user experience parameter.
In this embodiment, when the mobile terminal activates the first application, a display interface representing the first application is output, and a trigger operation on the display interface is detected to obtain operation information. The operation information includes operation gesture information and operation position information. The operation gesture information includes: a single tap gesture, a double tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotate gesture, a parameter (e.g., volume, brightness) adjustment gesture, and the like; the operation position information is the position at which the operation gesture is performed and may be specific down to a function key. Further, the operation information may also be continuous operation information over a period of time. When the trigger operation on the display interface is detected, the mobile terminal generates a first instruction, and the image acquisition unit of the mobile terminal is enabled based on the first instruction; in other embodiments, the mobile terminal may also enable the image acquisition unit when the first application is activated, or may enable it based on a detected trigger instruction, which is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane; it can be understood that the image acquisition unit can be implemented by a front camera of the mobile terminal.
In this embodiment, the image processing unit 52 analyzes the image data by first performing preprocessing (such as denoising and normalization of pixel position or illumination variation), followed by segmentation, localization, or tracking of the face. Facial feature data is then extracted from the image data, i.e. the pixel data is converted into representations of the shape, motion, color, muscle activity, and spatial structure of the face and its components, and the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system; the expression classifier either contains a plurality of sets of correspondences between facial feature data and expression information, or contains an expression classification model. The facial feature data is input into the expression classifier, and the expression information corresponding to the facial feature data is output; that is, facial expression information is obtained based on the facial feature data. The face analysis and expression recognition described in this embodiment may use any analysis and recognition method in the related art and are not described in detail here.
In this embodiment, the obtaining unit 51 obtains the relative position information and the direction information of the mobile terminal; the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, at least one of the following sensing units is arranged in the mobile terminal: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and the like. The mobile terminal can obtain the direction information through the gravity sensing unit or the acceleration sensing unit; the direction information can be the included angle between the direction of gravity and the long-side direction or the short-side direction of the mobile terminal, i.e. the posture information of the mobile terminal. The mobile terminal may further obtain the relative position information between the mobile terminal and the holder through the distance sensing unit or the iris recognition unit; the distance sensing unit and the iris recognition unit are generally disposed on the same plane as the display unit of the mobile terminal, so that when a user holds the mobile terminal, the distance to the user can be detected through the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes can be recognized through the iris recognition unit.
Further, the image processing unit 52 obtains the point-of-interest information based on the eye feature data included in the facial feature data and the relative position information and the direction information of the mobile terminal; the point-of-interest information represents the position on the mobile terminal on which the eyes of the holding user are focused, and may also be understood as the position of the content being browsed by the holding user's eyes. Specifically, the information processing system may obtain gaze direction information of the holding user's eyes based on the eye feature data, determine the relative positional relationship between the mobile terminal and the holding user based on the relative position information and the direction information of the mobile terminal, obtain the focusing range in which the holding user's eyes are focused on the mobile terminal based on the gaze direction information and the relative positional relationship, and generate the point-of-interest information based on the focusing range.
In this embodiment, the information generating unit 53 obtains a corresponding third user experience parameter based on the facial expression information and the point-of-interest information within a preset time period, in combination with the operation information. For example, within a preset time period t, when the obtained facial expression information is joyful, the point-of-interest information (i.e. the focusing position of the user's eyes on the mobile terminal) does not change or the proportion of its range of change is smaller than a third threshold, and the number of operations contained in the operation information is smaller than a fourth threshold, the corresponding third user experience parameter is 5; or, when the obtained facial expression information is calm, the proportion of the range of change of the point-of-interest information is greater than the third threshold and smaller than a first threshold (the third threshold is smaller than the first threshold), and the number of operations contained in the operation information is smaller than a second threshold and greater than the fourth threshold (the fourth threshold is smaller than the second threshold), the corresponding third user experience parameter is 3; or, when the obtained facial expression information is dissatisfied, the proportion of the range of change of the point-of-interest information is greater than the first threshold, and the number of operations contained in the operation information is greater than the second threshold, the corresponding third user experience parameter is 0. Of course, the manner in which the information processing system obtains the third user experience parameter may also take other forms, which are not described in detail in this embodiment. For example, in one scenario a user who wants to find a function entry through input operations searches all over the interface for its position, so the proportion of the range of change of the point-of-interest information obtained by the information processing system is greater than the first threshold; correspondingly, the user performs multiple trigger operations while searching for the function entry, so the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold; correspondingly, because it takes a long time to find the function entry, the user may show a displeased expression. In this scenario, based on its analysis of the obtained facial expression information, point-of-interest information, and operation information, the information processing system determines that the current user experience is poor, that is, it takes the user a long time to find the function entry, which indicates that the application provides a poor operation experience at the current operation position and needs to be optimized or improved.
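A minimal sketch of this three-signal scoring rule (Python), reusing the illustrative thresholds assumed in the earlier sketches:

# Hypothetical sketch: combine facial expression information, the change in the
# point of interest, and the operation count within a preset time period into
# the third user experience parameter. All threshold values are assumptions.
FIRST_THRESHOLD, THIRD_THRESHOLD = 0.6, 0.2    # gaze-change proportion thresholds
SECOND_THRESHOLD, FOURTH_THRESHOLD = 10, 3     # operation-count thresholds

def third_experience_parameter(expression, gaze_change_ratio, operation_count):
    if expression == "joyful" and gaze_change_ratio < THIRD_THRESHOLD \
            and operation_count < FOURTH_THRESHOLD:
        return 5
    if expression == "calm" and THIRD_THRESHOLD < gaze_change_ratio < FIRST_THRESHOLD \
            and FOURTH_THRESHOLD < operation_count < SECOND_THRESHOLD:
        return 3
    if expression == "dissatisfied" and gaze_change_ratio > FIRST_THRESHOLD \
            and operation_count > SECOND_THRESHOLD:
        return 0
    return 3  # neutral fallback for other combinations (assumption)

# A displeased user whose gaze wandered widely while operating many times
print(third_experience_parameter("dissatisfied", 0.8, 15))  # -> 0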
It should be understood by those skilled in the art that the functions of the processing modules in the information processing system according to the embodiment of the present invention may be understood by referring to the description of the information processing method, and the processing modules in the information processing system according to the embodiment of the present invention may be implemented by analog circuits that implement the functions described in the embodiment of the present invention, or by running software that performs the functions described in the embodiment of the present invention on an intelligent terminal.
In the fourth to sixth embodiments of the present invention, the image processing unit 52 and the information generating unit 53 in the information processing system may, in practical application, be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA) in the system; the obtaining unit 51 in the information processing system may, in practical application, be implemented by a transceiver in the system.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other ways of division in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. An information processing method, characterized in that the method comprises:
obtaining operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane;
identifying facial feature data in the image data, and obtaining facial expression information based on the facial feature data;
obtaining relative position information and direction information of the mobile terminal;
associating the eye feature data in the facial feature data, the relative position information and the direction information to obtain point of interest information; specifically, gaze direction information of the eyes is obtained based on the eye feature data, a relative position relationship between the mobile terminal and the eyes is determined based on the relative position information and the direction information, a focusing range of the eyes focused on the mobile terminal is obtained based on the gaze direction information and the relative position relationship, and the point of interest information is generated based on the focusing range; the point of interest information represents focusing position information of the eyes on the mobile terminal;
generating first feedback information based on the facial expression information, the range of change in the point of interest information, and the operation information.
2. The method of claim 1, further comprising:
and generating second feedback information based on the point of interest information and the operation information.
3. The method of claim 2, wherein generating second feedback information based on the point of interest information and the operation information comprises:
and obtaining a corresponding second user experience parameter based on the point of interest information in a preset time period and the operation information in the preset time period, and generating second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
4. The method of claim 1, wherein the generating first feedback information based on the facial expression information, the point of interest information, and the operation information comprises:
and obtaining a corresponding third user experience parameter based on the facial expression information and the point of interest information in a preset time period, in combination with the operation information in the preset time period, and generating first feedback information of an operation position corresponding to the operation information based on the third user experience parameter.
5. An information processing system, the system comprising: an acquisition unit, an image processing unit and an information generation unit; wherein:
the acquisition unit is used for acquiring operation information and image data; the operation information is obtained by detecting a trigger operation aiming at a display interface of a first application when the first application in the mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and the display unit are on the same plane; the acquisition unit is also used for obtaining the relative position information and the direction information of the mobile terminal;
the image processing unit is used for identifying facial feature data in the image data and obtaining facial expression information based on the facial feature data; and is further configured to associate the eye feature data in the facial feature data with the relative position information and the direction information of the mobile terminal obtained by the acquisition unit to obtain point of interest information; specifically, gaze direction information of the eyes is obtained based on the eye feature data, a relative position relationship between the mobile terminal and the eyes is determined based on the relative position information and the direction information, a focusing range of the eyes focused on the mobile terminal is obtained based on the gaze direction information and the relative position relationship, and the point of interest information is generated based on the focusing range; the point of interest information represents focusing position information of the eyes on the mobile terminal;
the information generating unit is used for generating first feedback information based on the facial expression information obtained by the image processing unit, the change range of the focus point information and the operation information obtained by the obtaining unit.
6. The system of claim 5, wherein the information generating unit is further configured to generate second feedback information based on the point of interest information and the operation information.
7. The system according to claim 6, wherein the information generating unit is configured to obtain a corresponding second user experience parameter based on the point of interest information in a preset time period and the operation information in the preset time period, and generate second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
8. The system according to claim 5, wherein the information generating unit is configured to obtain a corresponding third user experience parameter based on the facial expression information and the point of interest information in a preset time period in combination with the operation information in the preset time period, and generate the first feedback information of the operation position corresponding to the operation information based on the third user experience parameter.
CN201510869366.7A 2015-12-02 2015-12-02 Information processing method and system Active CN106815264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510869366.7A CN106815264B (en) 2015-12-02 2015-12-02 Information processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510869366.7A CN106815264B (en) 2015-12-02 2015-12-02 Information processing method and system

Publications (2)

Publication Number Publication Date
CN106815264A CN106815264A (en) 2017-06-09
CN106815264B true CN106815264B (en) 2020-08-04

Family

ID=59107979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510869366.7A Active CN106815264B (en) 2015-12-02 2015-12-02 Information processing method and system

Country Status (1)

Country Link
CN (1) CN106815264B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108848416A (en) * 2018-06-21 2018-11-20 北京密境和风科技有限公司 The evaluation method and device of audio-video frequency content
US11698674B2 (en) * 2019-09-09 2023-07-11 Apple Inc. Multimodal inputs for computer-generated reality

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462468A (en) * 2014-12-17 2015-03-25 百度在线网络技术(北京)有限公司 Information supply method and device
CN104699769A (en) * 2015-02-28 2015-06-10 北京京东尚科信息技术有限公司 Interacting method based on facial expression recognition and equipment executing method
CN104881350A (en) * 2015-04-30 2015-09-02 百度在线网络技术(北京)有限公司 Method and device for confirming user experience and method and device for assisting in user experience confirmation

Also Published As

Publication number Publication date
CN106815264A (en) 2017-06-09

Similar Documents

Publication Publication Date Title
JP6929366B2 (en) Driver monitoring and response system
EP3284011B1 (en) Two-dimensional infrared depth sensing
US10223838B2 (en) Method and system of mobile-device control with a plurality of fixed-gradient focused digital cameras
US20190188903A1 (en) Method and apparatus for providing virtual companion to a user
CN113015984A (en) Error correction in convolutional neural networks
CN103353935A (en) 3D dynamic gesture identification method for intelligent home system
CN104919396B (en) Shaken hands in head mounted display using body
CN111259751A (en) Video-based human behavior recognition method, device, equipment and storage medium
CN111009031B (en) Face model generation method, model generation method and device
CN106293102A (en) A kind of robot affective interaction method based on user mood change emotion
CN112016367A (en) Emotion recognition system and method and electronic equipment
CN106354264A (en) Real-time man-machine interaction system based on eye tracking and a working method of the real-time man-machine interaction system
Vu et al. Emotion recognition based on human gesture and speech information using RT middleware
CN112632349A (en) Exhibition area indicating method and device, electronic equipment and storage medium
Kumarage et al. Real-time sign language gesture recognition using still-image comparison & motion recognition
US20180199876A1 (en) User Health Monitoring Method, Monitoring Device, and Monitoring Terminal
CN106815264B (en) Information processing method and system
CN107452381B (en) Multimedia voice recognition device and method
EP3200092A1 (en) Method and terminal for implementing image sequencing
US10664689B2 (en) Determining user activity based on eye motion
Delabrida et al. Towards a wearable device for monitoring ecological environments
KR102395410B1 (en) System and method for providing sign language avatar using non-marker
WO2023146963A1 (en) Detecting emotional state of a user based on facial appearance and visual perception information
JP2021026744A (en) Information processing device, image recognition method, and learning model generation method
CN106296722B (en) Information processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant