WO2020211701A1 - Model training method, emotion recognition method, and related apparatus and device - Google Patents

Model training method, emotion recognition method, and related apparatus and device

Info

Publication number
WO2020211701A1
WO2020211701A1 (PCT/CN2020/084216)
Authority
WO
WIPO (PCT)
Prior art keywords
emotional state
touch mode
touch
user
terminal device
Prior art date
Application number
PCT/CN2020/084216
Other languages
English (en)
French (fr)
Inventor
李向东
田艳
王剑平
张艳存
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020211701A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/044Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • This application relates to the field of computer technology, and specifically relates to model training methods, emotion recognition methods, and related devices and equipment.
  • Terminal devices such as smart phones and tablet computers play an increasingly important role in people's daily lives.
  • The user experience that a terminal device provides has become a key criterion by which users judge terminal devices.
  • How to provide users with personalized services and thereby improve the user experience has become a focus of research and development for terminal equipment manufacturers.
  • Some terminal devices have therefore been designed to identify the user's emotional state and to provide personalized services based on the identified state. Whether such devices can offer reasonable personalized services depends mainly on the accuracy of the emotional state recognition.
  • At present, the more common approach recognizes emotions from facial expressions. However, as the ambient light, the relative position between the user's face and the terminal device, and other factors change, the accuracy of facial expression recognition also changes; the method cannot guarantee that the user's facial expression is recognized accurately, which in turn makes the emotional state recognized from that expression inaccurate.
  • In view of this, the embodiments of the present application provide a model training method, an emotion recognition method, and related apparatus and devices, which can accurately recognize the user's emotional state based on a trained emotion recognition model, so that the terminal device can provide the user with more reasonable personalized services according to the recognized emotional state.
  • The first aspect of this application provides a model training method, which can be applied to terminal devices and to servers.
  • The touch mode used when the user manipulates the terminal device is obtained, the emotional state corresponding to that touch mode is marked, and the touch mode together with its corresponding emotional state is used as a training sample; a machine learning algorithm then trains a preset classification model with these training samples to obtain an emotion recognition model.
  • The model training method can be aimed at different terminal devices: for each device, it uses the touch modes of that device's user when controlling the device, together with the emotional states corresponding to those touch modes, to train an emotion recognition model specifically suited to that device.
  • Applying the emotion recognition model trained for a terminal device on that device therefore ensures that the model can accurately determine the user's emotional state from the touch mode used when the user manipulates the device.
  • In one implementation, the reference time interval may be determined according to the trigger time corresponding to the touch mode; the operation data content generated by the user operating the terminal device within that reference time interval is then obtained, such as text content or voice content input into the terminal device. By analyzing the operation data content obtained in the reference time interval, the emotional state corresponding to the operation data is determined and taken as the emotional state corresponding to the touch mode.
  • In another implementation, a preset emotional state mapping relationship table can be called, in which the correspondence between touch modes and emotional states is recorded; the emotional state corresponding to the touch mode is then looked up in that table.
  • Determining the emotional state corresponding to the touch mode from a generated emotional state mapping table effectively ensures that the emotional state determined for the touch mode is objective and reasonable.
  • In another implementation, the touch data generated while the user controls the terminal device may be collected within a preset time period and clustered to generate touch data sets, and the touch mode corresponding to each touch data set is determined. The touch data set containing the most touch data is taken as the target touch data set, the touch mode corresponding to the target touch data set is taken as the target touch mode, the emotional state corresponding to the target touch mode is marked, and the target touch mode together with its corresponding emotional state is used as a training sample.
  • This selects, from the multiple touch modes used by the user during that period of time, the touch mode that best characterizes the user's current emotional state, that is, the target touch mode; using the target touch mode and the user's current emotional state as a training sample effectively ensures that the correspondence between touch mode and emotional state is accurate and reasonable.
  • The touch data mentioned in the above third implementation manner includes screen capacitance value change data and coordinate value change data. Since most touch screens currently used in touch screen devices are capacitive screens, using screen capacitance change data and coordinate value change data as the touch data ensures that the method provided by the embodiments of this application can be widely applied in daily work and life.
  • After the emotion recognition model is trained, the touch mode used when the user subsequently manipulates the terminal device may further be obtained as an optimized touch mode, the emotional state corresponding to the optimized touch mode is marked, and the optimized touch mode together with its corresponding emotional state is used as an optimized training sample, so that the optimized training samples can later be used to optimize the emotion recognition model.
  • Over time, the touch modes the user employs to touch the terminal device may change. To ensure that the emotion recognition model can always accurately recognize the user's emotional state from the user's touch mode, the terminal device can, after the model is trained, continue to collect the touch modes used when the user controls the device and the corresponding emotional states as optimized training samples; when the emotion recognition model can no longer accurately identify the user's emotional state, the optimized training samples can be used to further optimize and train the model, ensuring that it always retains good model performance.
  • User feedback information for the emotion recognition model can also be obtained, and when the feedback information indicates that the performance of the emotion recognition model does not meet the user's needs, the optimized training samples obtained in the fifth implementation manner are used to optimize the training of the emotion recognition model.
  • Since the emotion recognition model in this application serves the user of the terminal device, the user experience can be taken as one of the most important criteria for measuring its performance; when the performance of the emotion recognition model does not meet the user's needs, that is, when the user considers the emotional state recognized by the model insufficiently accurate, the optimized training samples obtained in the fifth implementation manner can be used to optimize the training of the model so that it meets the user's needs and improves the user experience.
  • Alternatively, when the terminal device satisfies any one or more of three conditions, namely being in a charging state, having remaining power greater than a preset power, or having been in an idle state for longer than a preset time, the optimized training samples obtained in the fifth implementation manner are used to optimize the training of the emotion recognition model.
  • Performing the optimization training only when the terminal device meets one or more of the above conditions ensures that the user experience is not affected.
  • The second aspect of the present application provides an emotion recognition method, which is usually applied to a terminal device.
  • The terminal device obtains the touch mode used when the user manipulates it, and uses the emotion recognition model running on it to determine the emotional state corresponding to that touch mode as the user's current emotional state; the emotion recognition model is obtained by the terminal device through the model training method provided in the first aspect.
  • Because the emotion recognition method uses an emotion recognition model to determine the user's emotional state specifically from the touch mode used when the user controls the terminal device, the accuracy of the determined emotional state can be ensured; moreover, the method requires no additional external equipment in the process of determining the user's emotional state, and thus truly achieves the purpose of improving the user experience.
  • When displaying the desktop interface, the terminal device can switch the display style of the desktop interface according to the user's current emotional state identified by the emotion recognition model. In this way, the terminal device directly changes the user's visual experience by changing the display style of its desktop interface, adjusting to or matching the user's emotional state through visual perception and thereby improving the user experience.
  • When an application is opened, the terminal device can recommend relevant content through that application according to the user's current emotional state identified by the emotion recognition model, for example relevant music content, video content, or text content.
  • Recommending relevant content for the user through the corresponding application adjusts the user's emotional state in real time from multiple angles and improves the user experience.
  • the third aspect of the present application provides a model training device, which includes:
  • the training sample acquisition module is used to acquire the touch mode when the user manipulates the terminal device, mark the emotional state corresponding to the touch mode; use the touch mode and the emotional state corresponding to the touch mode as training samples;
  • the model training module is used to train the classification model with the training samples using a machine learning algorithm to obtain an emotion recognition model; the emotion recognition model takes the touch mode used when the user controls the terminal device as input and the emotional state corresponding to the touch mode as output.
  • the training sample acquisition module is specifically configured to:
  • determine the reference time interval according to the trigger time corresponding to the touch mode; obtain the operation data content generated by the user operating the terminal device within the reference time interval; and determine the user's emotional state according to the operation data content as the emotional state corresponding to the touch mode.
  • the training sample acquisition module is specifically configured to:
  • call a preset emotional state mapping relationship table, in which the correspondence between touch modes and emotional states is recorded, and search the table for the emotional state corresponding to the touch mode.
  • the training sample acquisition module is specifically configured to:
  • collect the touch data generated by the user operating the terminal device within a preset time period; cluster the touch data to generate touch data sets and determine the touch mode corresponding to each set; take the set containing the most touch data as the target touch data set and its corresponding touch mode as the target touch mode; mark the emotional state corresponding to the target touch mode; and use the target touch mode and the emotional state corresponding to the target touch mode as training samples.
  • the touch data includes: screen capacitance value change data and coordinate value change data.
  • the device further includes:
  • the optimized training sample acquisition module is used to acquire the touch mode when the user manipulates the terminal device as an optimized touch mode, mark the emotional state corresponding to the optimized touch mode, and use the optimized touch mode and its corresponding emotional state as an optimized training sample; the optimized training sample is used to perform optimization training on the emotion recognition model.
  • the device further includes:
  • the feedback information acquisition module is used to acquire user feedback information for the emotion recognition model; the feedback information is used to characterize whether the performance of the emotion recognition model meets user needs;
  • the first optimization training module is configured to use the optimized training sample to perform optimization training on the emotion recognition model when the feedback information indicates that the performance of the emotion recognition model does not meet the user requirements.
  • the device further includes:
  • the second optimization training module is used to perform optimization training on the emotion recognition model with the optimized training sample when the terminal device is in a charging state, and/or when the remaining power of the terminal device is higher than a preset power, and/or when the terminal device has been in an idle state for longer than a preset time.
  • a fourth aspect of the present application provides an emotion recognition device, which includes:
  • the touch mode acquisition module is used to obtain the touch mode when the user manipulates the terminal device
  • the emotional state recognition module is configured to use the emotional recognition model to determine the emotional state corresponding to the touch mode as the current emotional state of the user; the emotional recognition model is trained by executing the model training method described in the first aspect.
  • the device further includes:
  • the display style switching module is configured to switch the display style of the desktop interface according to the current emotional state of the user when the terminal device displays the desktop interface.
  • the device further includes:
  • the content recommendation module is configured to recommend related content through the application according to the current emotional state of the user when the terminal device starts the application.
  • a fifth aspect of the present application provides a server, which includes a processor and a memory:
  • the memory is used to store program code and transmit the program code to the processor
  • the processor is configured to execute the model training method described in the first aspect above according to instructions in the program code.
  • a sixth aspect of the present application provides a terminal device, the terminal device including a processor and a memory:
  • the memory is used to store program code and transmit the program code to the processor
  • the processor is configured to execute the model training method described in the first aspect and/or execute the emotion recognition method described in the second aspect according to instructions in the program code.
  • The seventh aspect of the present application provides a computer-readable storage medium, including instructions which, when run on a computer, cause the computer to execute the model training method described in the first aspect and/or the emotion recognition method described in the second aspect.
  • FIG. 1 is a schematic diagram of application scenarios of the model training method and the emotion recognition method provided by the embodiments of the application;
  • FIG. 2 is a schematic flowchart of a model training method provided by an embodiment of this application.
  • FIG. 3 is a schematic flowchart of an emotion recognition method provided by an embodiment of this application.
  • FIG. 4 is a schematic structural diagram of a model training device provided by an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of an emotion recognition device provided by an embodiment of this application.
  • FIG. 6 is a schematic structural diagram of a server provided by an embodiment of the application.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
  • FIG. 8 is a block diagram of the software structure of an electronic device provided by an embodiment of the application.
  • In order to further improve the user experience that terminal equipment brings to users and to provide users with more intimate and personalized services, some terminal equipment manufacturers have developed functions for terminal devices that identify the user's emotional state. The methods currently applied to terminal devices to recognize emotional states commonly include the following three:
  • The facial expression recognition method uses the camera on the terminal device to collect the user's facial expression and then analyzes it to determine the user's emotional state. Because the lighting differs between scenes and the relative position between the user's face and the terminal device is unstable, this method cannot guarantee that the user's facial expression is accurately recognized under all conditions; accordingly, when the accuracy of facial expression recognition is low, the accuracy of the recognized emotional state cannot be guaranteed either.
  • The voice recognition method uses the terminal device to collect the voice content input by the user and analyzes that content to determine the user's emotional state. This method requires the user to actively input voice content expressing an emotional state before the terminal device can determine the emotional state from it; in most cases the user will not actively inform the terminal device of his or her own emotional state, so this method has low value in practical applications.
  • In the third method, the terminal device collects human physiological signals, such as heart rate, body temperature and blood pressure, through additional measuring devices or sensors, and then analyzes the collected signals to determine the user's emotional state. This method requires additional external devices, and using external devices is usually cumbersome for users; it therefore degrades the user experience from another angle and does not truly achieve the goal of improving it.
  • In view of this, this application takes a different approach. Based on the increasingly high screen-to-body ratio of terminal devices and the fact that users frequently interact with the terminal device through the touch screen (touch pad, TP), it exploits the characteristic that the touch modes used by a user manipulating the terminal device in the same emotional state follow similar patterns, and trains a model that determines the user's current emotional state from the touch mode used when the user controls the terminal device, so that the terminal device can use the model to recognize the user's emotional state and provide reasonable personalized services.
  • In the model training phase, the terminal device obtains the touch mode used when the user manipulates the terminal device, marks the emotional state corresponding to the touch mode, and uses the touch mode and its corresponding emotional state as training samples; a machine learning algorithm then trains the classification model with these training samples to obtain the emotion recognition model.
  • In the emotion recognition phase, the terminal device obtains the touch mode used when the user manipulates the terminal device, uses the emotion recognition model trained through the above model training method to determine the emotional state corresponding to the obtained touch mode, and takes that emotional state as the user's current emotional state.
  • The model training method is tailored to different terminal devices: for each device, it uses the touch modes of that device's user when controlling the device and the emotional states corresponding to those touch modes, and applies a machine learning algorithm to train an emotion recognition model specifically suited to that device. Applying the trained emotion recognition model on the terminal device thus ensures that the model can accurately determine the user's emotional state from the touch mode used when the user of that device manipulates it.
  • The method provided in the embodiments of the present application can therefore use the emotion recognition model to determine the user's emotional state from the touch mode used when the user controls the terminal device and ensure the accuracy of the determined emotional state; in addition, it requires no additional external devices in the process of determining the user's emotional state, truly achieving the purpose of improving the user experience.
  • The model training method and the emotion recognition method can be applied to terminal devices (also referred to as electronic devices) equipped with touch screens and to servers; the terminal devices can specifically be smart phones, tablet computers, computers, personal digital assistants (PDA), and the like, and the server can be an application server or a web server.
  • the following takes a terminal device as an execution subject as an example to introduce the application scenarios of the model training method and the emotion recognition method provided in the embodiments of the present application.
  • FIG. 1 is a schematic diagram of an application scenario of the model training method and the emotion recognition method provided by an embodiment of the application.
  • The application scenario includes a terminal device 101, which executes the model training method provided in the embodiment of the application to train the emotion recognition model and executes the emotion recognition method provided in the embodiment of the application to recognize the user's emotional state.
  • In the model training phase, the terminal device 101 obtains the touch mode used when the user manipulates it.
  • The touch mode may specifically include tapping operations and sliding operations with different strengths and/or different frequencies; the emotional state corresponding to the obtained touch mode is marked, and the acquired touch mode and its corresponding emotional state are used as a training sample.
  • the terminal device 101 uses a machine learning algorithm to train the pre-built classification model in the terminal device 101 using the obtained training sample, thereby obtaining an emotion recognition model.
  • In the emotion recognition phase, the terminal device 101 executes the emotion recognition method provided in the embodiment of this application and uses the emotion recognition model trained in the model training stage to recognize the user's emotional state: specifically, the terminal device 101 obtains the touch mode used when the user manipulates it and uses the emotion recognition model to determine the emotional state corresponding to the acquired touch mode as the user's current emotional state.
  • Since the emotion recognition model running on the terminal device 101 is trained on the basis of the touch modes of the device's own user when manipulating it and the corresponding emotional states, the emotion recognition model is specifically applicable to the terminal device 101. Using the model trained in the model training stage to determine the user's current emotional state from the touch mode when the user manipulates the device can therefore effectively ensure the accuracy of the determined emotional state.
  • The application scenario shown in FIG. 1 is only an example; in practical applications, the model training method and emotion recognition method provided in the embodiments of the present application can also be applied to other application scenarios, and the application scenarios of the provided model training method and emotion recognition method are not specifically limited here.
  • FIG. 2 is a schematic flowchart of a model training method provided by an embodiment of the application. As shown in Figure 2, the model training method includes the following steps:
  • Step 201 Obtain the touch mode when the user manipulates the terminal device, mark the emotional state corresponding to the touch mode; use the touch mode and the emotional state corresponding to the touch mode as training samples.
  • the terminal device When the user manipulates itself, the terminal device obtains the touch mode when the user touches the touch screen.
  • This touch mode can also be understood as a touch operation.
  • The touch operation can specifically be a single-step touch operation initiated by the user on the touch screen, such as a click operation or a sliding operation under different forces, or it can be a continuous touch operation initiated by the user on the touch screen, such as continuous click operations at different frequencies or continuous sliding operations at different frequencies.
  • Other touch operations used by the user can also be regarded as touch modes in this application, and the touch modes in this application are not specifically limited here.
  • the terminal device marks the acquired emotional state corresponding to the touch mode, and the emotional state is the emotional state when the user initiates the touch mode; the touch mode and the emotional state corresponding to the touch mode are used as training samples.
  • The touch mode usually needs to be determined on the basis of the touch data generated when the user touches the touch screen. For capacitive screens, the touch data usually includes screen capacitance value change data and screen coordinate value change data.
  • The screen capacitance value change data can characterize the force with which the user clicks or slides on the touch screen and the contact area between the user and the touch screen; the harder the user clicks or slides, the greater the change in the screen capacitance value.
  • The screen coordinate value change data is in fact determined from the screen capacitance value change data; it can represent the click position when the user taps the touch screen and, by recording the continuously changing position coordinates, the sliding direction and sliding distance when the user slides on the touch screen.
  • When the user touches the touch screen of the terminal device, the bottom-layer driver of the terminal device reports the screen capacitance value change data and its corresponding position coordinates to the processor of the terminal device through the input subsystem.
  • For other types of touch screens, the user touching the touch screen will correspondingly generate other touch data; for example, on a resistive screen, touching the screen generates screen resistance value change data and screen coordinate value change data. These data can likewise reflect the user's current touch mode, and the specific type of touch data is not limited here.
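  • For illustration only, the following Python sketch shows one way such capacitance-change and coordinate-change data could be mapped to a coarse touch mode; the thresholds, field names, and labels are assumptions introduced for the example, not values disclosed in this application.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the application does not specify concrete values.
HEAVY_PRESS_CAP_DELTA = 80      # capacitance change above which a tap counts as "heavy"
SWIPE_DISTANCE_PX = 30          # coordinate change above which a touch counts as a swipe

@dataclass
class TouchSample:
    cap_delta: float            # screen capacitance value change reported by the driver
    dx: float                   # horizontal coordinate change during the touch
    dy: float                   # vertical coordinate change during the touch

def touch_mode(sample: TouchSample) -> str:
    """Map one touch sample to a coarse touch mode label."""
    distance = (sample.dx ** 2 + sample.dy ** 2) ** 0.5
    if distance >= SWIPE_DISTANCE_PX:
        return "long swipe" if distance >= 4 * SWIPE_DISTANCE_PX else "short swipe"
    return "heavy click" if sample.cap_delta >= HEAVY_PRESS_CAP_DELTA else "light click"

print(touch_mode(TouchSample(cap_delta=95, dx=2, dy=1)))   # -> heavy click
```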
  • In specific implementation, the terminal device can collect the touch data generated by the user operating the terminal device within a preset time period, cluster the collected touch data to generate touch data sets, and determine the touch mode corresponding to each touch data set; the touch data set that includes the most touch data is taken as the target touch data set and the touch mode corresponding to it as the target touch mode, the emotional state corresponding to the target touch mode is marked, and finally the target touch mode and the emotional state corresponding to it are used as training samples.
  • The user usually manipulates the terminal device many times within the preset time period, so the terminal device can collect a large amount of touch data. Touch data with similar characteristics are clustered together, for example screen capacitance change data with similar change amplitudes, screen capacitance change data with similar click positions, or screen coordinate change data with similar sliding trajectories, thereby obtaining a number of touch data sets. Each touch data set is then marked with a corresponding touch mode according to the type of touch data it contains: a set composed of touch data whose change amplitude exceeds a preset amplitude threshold can be marked as heavy clicking, a set composed of touch data whose change frequency exceeds a preset frequency threshold can be marked as frequent clicking, a set composed of screen coordinate value change data whose change frequency exceeds a preset frequency threshold can be marked as frequent sliding, and so on.
  • The touch data set that includes the most touch data is taken as the target touch data set, and the touch mode corresponding to it is taken as the target touch mode. The emotional state corresponding to the target touch mode is determined according to the operation data content collected by the terminal device within the preset time period that can characterize the user's emotional state, and/or according to the correspondence between touch modes and emotional states recorded in the emotional state mapping table; finally, the target touch mode and its corresponding emotional state are used as a training sample.
  • In this way, the categories of the collected target touch modes determine the touch modes that the emotion recognition model can recognize, and the emotional states that the emotion recognition model can determine are based on the emotional states corresponding to the respective target touch modes.
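  • A minimal sketch of the clustering step is shown below; the features, the cluster count, and the sample values are illustrative assumptions, and scikit-learn's k-means merely stands in for whatever clustering the device actually uses.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

# Each row: [capacitance change, tap frequency in the window, swipe distance] (assumed features).
samples = np.array([
    [90, 5, 2], [85, 6, 3], [88, 5, 1],      # heavy, frequent taps
    [20, 1, 3], [25, 1, 2],                  # occasional light taps
    [30, 2, 120], [28, 2, 110],              # long swipes
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(samples)

# The cluster containing the most touch data becomes the target touch data set.
target_cluster, _ = Counter(labels).most_common(1)[0]
target_set = samples[labels == target_cluster]
print("target touch data set:", target_set.tolist())
```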
  • this application provides the following two realization methods:
  • In the first method, the terminal device determines the reference time interval according to the trigger time corresponding to the touch mode, obtains the operation data content generated by the user operating the terminal device within the reference time interval, and then determines the user's emotional state according to the operation data content as the emotional state corresponding to the touch mode.
  • the terminal device may determine the trigger time corresponding to the touch mode, use the trigger time as the center point, and determine the reference time interval according to the preset reference time interval length; in addition, the terminal device may also determine the trigger time corresponding to the touch mode As the starting point or ending point, the reference time interval is determined according to the preset reference time interval length.
  • Of course, the terminal device may also use other methods to determine the reference time interval according to the trigger time corresponding to the touch mode, and the way of determining the reference time interval is not restricted here.
  • the length of the reference time interval can be set according to actual requirements, and the length of the reference time interval is not specifically limited here.
  • the terminal device obtains the operation data content generated by the user operating the terminal device during the reference time interval.
  • the operation data content is related data content generated by the user controlling the terminal device.
  • The operation data content may specifically be text content the user inputs into the terminal device within the reference time interval, voice content the user inputs into the terminal device within the reference time interval, or other operation data content generated by the user through an application on the terminal device; the type of operation data content is not restricted here.
  • The terminal device can analyze the operation data content to determine the emotional state corresponding to it: when the operation data content is text input by the user, the terminal device can perform semantic analysis on the text content to determine the emotional state; when the operation data content is voice content input by the user, the terminal device can determine the corresponding emotional state by performing speech recognition and analysis on the voice content; when the operation data content takes other forms, the terminal device can use other methods to determine the corresponding emotional state, and the manner of determining the emotional state corresponding to the operation data content is not limited here. Finally, the emotional state corresponding to the operation data content is used as the emotional state corresponding to the touch mode.
  • It should be noted that the preset time period can also be used directly as the reference time interval, and the emotional state corresponding to the operation data content generated by the user controlling the terminal device within that preset time period is then determined as the emotional state corresponding to the target touch mode.
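  • As a sketch only, the following Python code illustrates building a reference time interval around a trigger time and labelling the emotion from operation data content falling inside it; the interval length, the log format, and the keyword-based `analyze_sentiment` placeholder are assumptions, not part of this application.

```python
from datetime import datetime, timedelta

REFERENCE_WINDOW = timedelta(minutes=5)   # assumed reference-interval length

def reference_interval(trigger_time: datetime, mode: str = "center"):
    """Build the reference time interval from the trigger time of the touch mode."""
    if mode == "center":
        return trigger_time - REFERENCE_WINDOW / 2, trigger_time + REFERENCE_WINDOW / 2
    if mode == "start":
        return trigger_time, trigger_time + REFERENCE_WINDOW
    return trigger_time - REFERENCE_WINDOW, trigger_time      # trigger time as end point

def analyze_sentiment(text: str) -> str:
    # Placeholder keyword rule; a real system would use a proper text/voice analysis model.
    return "irritated" if any(w in text for w in ("annoying", "ugh", "hate")) else "calm"

def label_emotion(operation_log, start, end):
    """Pick the emotion suggested by operation data content inside the interval."""
    texts = [text for ts, text in operation_log if start <= ts <= end]
    return analyze_sentiment(" ".join(texts)) if texts else None

log = [(datetime(2020, 4, 10, 12, 1), "ugh, this keeps failing")]
start, end = reference_interval(datetime(2020, 4, 10, 12, 2))
print(label_emotion(log, start, end))   # -> irritated
```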
  • the terminal device needs to obtain the user's permission before obtaining the operation data content.
  • The terminal device can obtain the operation data content generated by the user controlling the terminal device, and mark the corresponding emotional state for the touch mode based on that content, only when the user allows the terminal device to obtain it; after obtaining the operation data content, the terminal device also needs to store it encrypted to ensure the user's data privacy and security.
  • In the second method, the terminal device calls a preset emotional state mapping relationship table, in which the correspondence between touch modes and emotional states is recorded, and then searches the table for the emotional state corresponding to the touch mode.
  • the terminal device After acquiring the touch mode when the user manipulates the terminal device, the terminal device can call the emotional state mapping relationship table preset by itself, and then search for the emotional state corresponding to the acquired touch mode in the emotional state mapping relationship table .
  • the emotional state corresponding to the target touch mode can be found in the emotional state mapping relationship table.
  • The correspondence between touch mode and emotional state determined in this way can further be used to optimize and update the emotional state mapping table, continuously enriching the mapping relationships recorded in it.
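  • A minimal sketch of such a mapping table is shown below; the specific touch-mode labels and emotional states are assumed for illustration and are not correspondences disclosed in this application.

```python
# Illustrative emotional state mapping relationship table (assumed entries).
emotion_map = {
    "heavy click": "irritated",
    "frequent click": "anxious",
    "light click": "calm",
    "long swipe": "relaxed",
}

def lookup_emotion(touch_mode):
    """Search the table for the emotional state corresponding to the touch mode."""
    return emotion_map.get(touch_mode)

def update_mapping(touch_mode, emotion):
    """Enrich the table with a correspondence confirmed elsewhere (e.g. by the first method)."""
    emotion_map[touch_mode] = emotion

update_mapping("frequent swipe", "impatient")
print(lookup_emotion("heavy click"))   # -> irritated
```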
  • The first method or the second method described above can be used alone to mark the emotional state corresponding to the touch mode, or the two methods can be combined: when the emotional state corresponding to the touch mode cannot be accurately determined using the first method, the second method can be used to determine it, or when it cannot be determined using the second method, the first method can be used; the emotional state corresponding to the touch mode can also be determined jointly from the emotional states determined by the two methods.
  • The emotional states a user frequently expresses are largely specific to that user, as are the touch modes the user employs on the terminal device in a given emotional state. Collecting training samples in the above way ensures that most of the touch modes included in the collected training samples are those the user frequently uses, and that the corresponding emotional states are those the user frequently expresses; accordingly, an emotion recognition model trained on these training samples can more sensitively determine the user's frequent emotional states from the touch modes the user habitually uses when touching the terminal device.
  • Step 202: Use a machine learning algorithm to train the classification model with the training samples to obtain an emotion recognition model; the emotion recognition model takes the touch mode used when the user controls the terminal device as input and the emotional state corresponding to the touch mode as output.
  • In specific implementation, the terminal device can use the machine learning algorithm to train the classification model preset in the terminal device with the obtained training samples so as to continuously optimize the model parameters of the classification model; after the classification model satisfies the training end condition, the emotion recognition model is generated according to the model structure and model parameters of the classification model.
  • Specifically, the terminal device can input the touch mode in a training sample into the classification model, which analyzes the touch mode and outputs an emotional state corresponding to it; a loss function is constructed from the emotional state output by the classification model and the emotional state corresponding to the touch mode in the training sample, and the model parameters of the classification model are then adjusted according to the loss function to optimize the classification model.
  • the emotion recognition model can be generated according to the model structure and model parameters of the current classification model.
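  • The application does not specify a particular classification model, so the following sketch uses a generic scikit-learn classifier purely as a stand-in; the feature encoding, sample values, and emotion labels are assumptions made for the example.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training samples: a touch mode (as a small feature dict) and the marked emotional state.
X = [
    {"mode": "heavy click", "freq": 5},
    {"mode": "light click", "freq": 1},
    {"mode": "frequent swipe", "freq": 6},
    {"mode": "long swipe", "freq": 1},
]
y = ["irritated", "calm", "anxious", "relaxed"]

# Encode the touch-mode features and fit the stand-in classification model.
emotion_model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression(max_iter=1000))
emotion_model.fit(X, y)

print(emotion_model.predict([{"mode": "heavy click", "freq": 4}]))  # e.g. ['irritated']
```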
  • When judging whether the classification model satisfies the training end condition, a first model can be verified with test samples.
  • A test sample is similar to a training sample and includes a touch mode and the emotional state corresponding to that touch mode.
  • The first model is the model obtained by performing a first round of training optimization on the classification model with multiple training samples. Specifically, the terminal device inputs the touch mode in a test sample into the first model and uses the first model to process the touch mode to obtain a corresponding emotional state; the prediction accuracy rate is then calculated from the emotional states corresponding to the touch modes in the test samples and the emotional states output by the first model, and when the prediction accuracy rate is greater than a preset threshold, the model performance of the first model is considered to meet the demand, and the emotion recognition model can be generated according to the model parameters and model structure of the first model.
  • preset threshold may be set according to actual conditions, and the preset threshold is not specifically limited here.
  • test samples can be used to verify multiple classification models obtained through multiple rounds of training.
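  • The accuracy check can be expressed in a few lines, as sketched below; the threshold value is an assumption, and `model` is any classifier with a `predict` method such as the stand-in model trained above.

```python
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.9   # assumed preset threshold

def training_finished(model, test_modes, test_emotions) -> bool:
    """Verify a candidate model on test samples and check the training end condition."""
    predicted = model.predict(test_modes)
    return accuracy_score(test_emotions, predicted) > ACCURACY_THRESHOLD

# If training_finished(...) returns True, the emotion recognition model is generated from the
# current model structure and parameters; otherwise another training round is performed.
```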
  • The terminal device can also determine whether the classification model meets the training end condition according to the user's feedback information. Specifically, the terminal device may prompt the user to test and use the classification model being trained and to provide feedback on it. If the user feedback indicates that the current performance of the classification model still cannot satisfy the user's current needs, the terminal device needs to continue optimizing the classification model with training samples; conversely, if the feedback indicates that the current performance of the classification model is good and basically meets the user's current needs, the terminal device can generate the emotion recognition model according to the model structure and model parameters of the classification model.
  • In practical applications, the touch modes the user employs on the terminal device may change as the period of use grows. Therefore, after the emotion recognition model is trained, the terminal device can continue to collect optimized training samples and use them to further optimize and train the emotion recognition model, improving its performance so that it can more accurately determine the user's emotional state from the user's touch mode.
  • Specifically, the terminal device can continue to obtain the touch mode used when the user manipulates the terminal device as an optimized touch mode and mark the emotional state corresponding to the optimized touch mode, the marking being done in the manner described above; the optimized touch mode and the emotional state corresponding to it are used as an optimized training sample, and the optimized training samples are used for optimization training of the emotion recognition model.
  • In specific implementation, the terminal device may initiate optimization training of the emotion recognition model in response to user feedback information: the terminal device obtains user feedback information for the emotion recognition model, the feedback information being used to characterize whether the performance of the emotion recognition model meets the user's needs, and when the obtained feedback information indicates that the performance of the emotion recognition model does not meet the user's needs, the optimized training samples are used to optimize the training of the emotion recognition model.
  • the terminal device may periodically initiate feedback information acquisition operations.
  • For example, the terminal device may periodically display a feedback information acquisition interface for the emotion recognition model and obtain the user's feedback through that interface; of course, the terminal device may also obtain feedback information in other ways, and the way of obtaining feedback information is not limited here.
  • After the terminal device obtains the feedback information, if the feedback information indicates that the current performance of the emotion recognition model does not meet the user's needs, optimized training samples are obtained and used to further optimize the training of the emotion recognition model; conversely, if the feedback information indicates that the current performance of the emotion recognition model already meets the user's needs, no further optimization training is performed on the emotion recognition model for the time being.
  • In addition, the terminal device may directly use the optimized training samples to optimize the training of the emotion recognition model when it is in a charging state, and/or when its remaining power is higher than a preset power, and/or when it has been in an idle state for longer than a preset time.
  • Optimization training of the emotion recognition model consumes the power of the terminal device, and the training process may have a certain impact on other functions of the device, for example the running speed of applications on it. In order to ensure that the user's normal use of the terminal device is not affected while the emotion recognition model is still optimized and trained in time, the terminal device can use the optimized training samples to optimize the training of the emotion recognition model when it is in a charging state, or when its remaining power is higher than the preset power, or when its idle time exceeds the preset time.
  • The idle state here specifically refers to the state of the terminal device when the user is not using it; the terminal device may also perform the optimization training only when it simultaneously satisfies being in a charging state, having remaining power higher than the preset power, and having an idle state duration exceeding the preset time.
  • The preset power can be set according to actual needs, and its value is not specifically limited here; the preset time can likewise be set according to actual needs, and its value is not specifically limited here.
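  • The trigger check amounts to a simple predicate over the device state, as in the following sketch; the preset power and preset time values are assumptions, since the application leaves them unspecified.

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    charging: bool
    battery_percent: int
    idle_seconds: int

# Assumed values for the preset power and preset idle time.
MIN_BATTERY_PERCENT = 60
MIN_IDLE_SECONDS = 10 * 60

def may_optimize(state: DeviceState) -> bool:
    """Allow optimization training when any of the three conditions holds (use `and` to require all)."""
    return (state.charging
            or state.battery_percent > MIN_BATTERY_PERCENT
            or state.idle_seconds > MIN_IDLE_SECONDS)

print(may_optimize(DeviceState(charging=False, battery_percent=80, idle_seconds=0)))  # True
```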
  • Of course, the timing of the optimization training of the emotion recognition model can also be determined according to other conditions: for example, the optimization training can be performed when the number of optimized training samples reaches a preset number, or an optimization training period can be set and the emotion recognition model optimized and trained according to that period; the method of determining the timing of the optimization training is not limited here.
  • The model training method provided in the above embodiments of the present application is tailored to different terminal devices: for each device, it trains, with a machine learning algorithm, an emotion recognition model specifically applicable to that device on the basis of the touch modes used when the device's user controls it and the corresponding emotional states. Applying the emotion recognition model trained for a terminal device on that device thus ensures that the model can accurately determine the user's emotional state from the touch mode used when the user controls the device.
  • an emotion recognition model with better model performance can be trained.
  • In order to make the role the foregoing emotion recognition model plays in practical applications clearer, this application further provides an emotion recognition method.
  • the following examples will specifically introduce the emotion recognition method provided in the present application.
  • FIG. 3 is a schematic flowchart of an emotion recognition method provided by an embodiment of the application. As shown in Figure 3, the emotion recognition method includes the following steps:
  • Step 301 Obtain the touch mode when the user manipulates the terminal device.
  • This touch mode can also be understood as a touch operation.
  • The touch operation can specifically be a single-step touch operation initiated by the user on the touch screen, such as a click operation or a sliding operation under different forces, or a continuous touch operation initiated by the user on the touch screen, such as continuous click operations at different frequencies or continuous sliding operations at different frequencies.
  • touch methods in this application are not specifically limited here.
  • The above touch mode is determined based on the touch data obtained by the terminal device; that is, when the user manipulates the terminal device, the terminal device obtains the touch data generated by the user touching the touch screen and then determines the touch mode from the acquired touch data.
  • For capacitive screens, the touch data usually includes screen capacitance change data and screen coordinate value change data.
  • The screen capacitance change data can characterize the force with which the user clicks or slides on the touch screen and the contact area between the user and the touch screen, while the screen coordinate value change data is in fact determined from the screen capacitance value change data.
  • the screen coordinate value change data can characterize the click position when the user clicks on the touch screen, and the sliding direction and distance when the user slides the touch screen.
  • After acquiring the screen capacitance value change data and the screen coordinate value change data, the terminal device can determine the user's current touch mode from them: according to the change amplitude of the screen capacitance value change data it can be determined whether the current touch mode is a heavy click or a light click; according to the change frequency of the screen capacitance value change data it can be determined whether the current touch mode is frequent clicking; according to the sliding track represented by the screen coordinate value change data it can be determined whether the current touch mode is large-range sliding or small-range sliding; and according to the change frequency of the screen coordinate value change data it can be determined whether the current touch mode is frequent sliding.
  • the terminal device can also determine other touch methods according to the touch data. The above touch methods are only examples, and the touch methods are not specifically limited here.
  • For other types of touch screens, the user touching the touch screen will correspondingly generate other touch data; for example, on a resistive screen, touching the screen generates screen resistance value change data and screen coordinate value change data. The user's current touch mode can likewise be determined from these data, and the specific type of touch data is not limited here.
  • Step 302 Use the emotion recognition model to determine the emotional state corresponding to the touch mode as the user's current emotional state; the emotion recognition model is trained by executing the model training method shown in FIG. 2.
  • After obtaining the touch mode, the terminal device inputs it into the emotion recognition model running on the device; the emotion recognition model analyzes the obtained touch mode and outputs the corresponding emotional state as the user's current emotional state.
  • the above emotion recognition model is a model trained by the model training method shown in Figure 2.
  • The model is trained on the basis of the touch modes used when the user manipulates the terminal device and the emotional states corresponding to those touch modes.
  • the emotion recognition model can accurately determine the user's emotional state according to the touch mode when the user controls the terminal device.
  • the emotional state that can be recognized by the emotion recognition model depends on the training sample used when training the emotion recognition model; and the touch mode included in the training sample is the touch mode when the user manipulates the terminal device.
  • the emotional state included in the sample is the emotional state of the user when using the terminal device, that is, the training sample is completely generated based on the touch mode of the user of the terminal device and the emotional state shown by the user.
  • Therefore, the emotion recognition model trained with these training samples can accurately determine the user's current emotional state from the touch mode used when the user controls the terminal device; that is, it can sensitively recognize the corresponding emotional state from the touch modes the user habitually uses.
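  • Put together, the recognition step is a single forward pass, as in the sketch below; `emotion_model` refers to a classifier like the stand-in trained earlier, and `derive_touch_mode` is a placeholder for the capacitance/coordinate analysis described above, so the feature names are assumptions.

```python
def derive_touch_mode(raw_touch_data) -> dict:
    """Placeholder: in practice this comes from the driver-reported touch data."""
    return {"mode": raw_touch_data["mode"], "freq": raw_touch_data["freq"]}

def recognize_emotion(emotion_model, raw_touch_data) -> str:
    """Steps 301/302 in miniature: derive the touch mode, then ask the model for the emotion."""
    mode_features = derive_touch_mode(raw_touch_data)
    return emotion_model.predict([mode_features])[0]

# Example (assuming the sklearn pipeline sketched in step 202):
# print(recognize_emotion(emotion_model, {"mode": "heavy click", "freq": 4}))
```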
  • the terminal device can provide further personalized services for the user according to the recognized current emotional state of the user, so as to improve the user experience that the terminal device brings to the user.
  • the terminal device can switch the display style of the desktop interface while displaying the desktop interface itself; for example, switch the display theme of the desktop interface, display wallpaper, display font, etc.
  • For example, if the emotion recognition model determines that the emotional state corresponding to the user's current touch mode is irritability, and the interface currently displayed by the terminal device is the desktop interface, the terminal device can accordingly switch the desktop wallpaper to a brighter and more pleasant picture, or change the display theme and/or display font, so as to bring the user a pleasant visual experience; a minimal mapping sketch follows below.
  • the terminal device can also change other display styles on the desktop interface according to the user's current emotional state, and there is no restriction on the display styles that can be changed here.
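As a rough illustration of this display-style switching, the mapping table and the functions `set_wallpaper`, `set_theme`, and `set_font` below are hypothetical placeholders, not a real platform API; the concrete emotion-to-style pairs are likewise assumptions.

```python
# Minimal sketch, assuming a hypothetical theming interface on the terminal device.
EMOTION_TO_STYLE = {
    "irritable": {"wallpaper": "bright_landscape.png", "theme": "light",   "font": "rounded"},
    "sad":       {"wallpaper": "warm_sunrise.png",     "theme": "warm",    "font": "soft"},
    "calm":      {"wallpaper": "default.png",          "theme": "default", "font": "default"},
}

def switch_desktop_style(emotional_state: str, ui) -> None:
    """Switch wallpaper, theme and font of the desktop interface for the given emotion."""
    style = EMOTION_TO_STYLE.get(emotional_state, EMOTION_TO_STYLE["calm"])
    ui.set_wallpaper(style["wallpaper"])   # hypothetical UI interface
    ui.set_theme(style["theme"])
    ui.set_font(style["font"])
```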
  • the terminal device may recommend related content to the user through the application program when the terminal device itself starts the application program.
  • For example, assuming that the application currently opened on the terminal device is a music player and the emotion recognition model determines, according to the user's touch mode, that the user's current emotional state is sad, the music player can recommend some cheerful music to relieve the user's current low mood; or, assuming that the application currently opened on the terminal device is a video player, correspondingly, if the emotion recognition model determines that the user's current emotional state is sad according to the user's touch mode, the video player can recommend some funny videos for the user to adjust the user's current sad mood.
  • the terminal device may also recommend relevant text content to the user through other applications according to the user's current emotional state, for example, recommend relevant articles, jokes, etc. for the user.
  • In practical applications, the terminal device can also adopt other methods according to the actual situation to provide reasonable personalized services based on the user's current emotional state, for example, recommending operations that can help relieve the user's emotion; the personalized services that the terminal device can provide are not specifically limited here.
  • the terminal device uses the emotion recognition model trained for itself to determine the current emotional state of the user according to the touch mode when the user manipulates itself.
  • this method can use the emotion recognition model to specifically determine the user's emotional state according to the touch mode when the user controls the terminal device, and ensure the accuracy of the determined emotional state;
  • the method does not require any additional external devices in the process of determining the emotional state of the user, and it achieves the purpose of improving user experience in a true sense.
  • this application also provides a corresponding model training device, so that the aforementioned model training method can be applied and realized in practice.
  • Fig. 4 is a schematic structural diagram of a model training device provided by an embodiment of the application; as shown in Fig. 4, the model training device includes:
  • the training sample acquisition module 401 is used to acquire the touch mode when the user manipulates the terminal device, mark the emotional state corresponding to the touch mode; use the touch mode and the emotional state corresponding to the touch mode as training samples ;
  • the model training module 402 is configured to use a machine learning algorithm to train a classification model using the training samples to obtain an emotion recognition model; the emotion recognition model uses the touch mode when the user controls the terminal device as input, and the The emotional state corresponding to the touch mode is output.
  • the training sample acquisition module 401 can be specifically used to perform the method in step 201.
  • the model training module 402 can be specifically used to perform the method in step 202; please refer to the description of step 202 in the method embodiment shown in FIG. 2 for details, which will not be repeated here. A brief training sketch follows below.
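To make the training step concrete, here is a minimal sketch. The patent does not prescribe a specific classification model or machine learning algorithm; the decision-tree classifier and the toy training samples below are assumptions chosen purely for illustration.

```python
# Minimal training sketch; classifier choice and sample data are assumptions.
from sklearn.tree import DecisionTreeClassifier

TOUCH_MODES = ["light_click", "heavy_click", "frequent_click",
               "small_range_slide", "large_range_slide", "frequent_slide"]

def encode(touch_mode: str) -> list:
    return [1.0 if touch_mode == m else 0.0 for m in TOUCH_MODES]

# Training samples: (touch mode when the user manipulated the device, marked emotional state)
training_samples = [
    ("light_click",       "calm"),
    ("heavy_click",       "irritable"),
    ("frequent_click",    "irritable"),
    ("small_range_slide", "calm"),
    ("frequent_slide",    "anxious"),
]

X = [encode(mode) for mode, _ in training_samples]
y = [emotion for _, emotion in training_samples]

emotion_model = DecisionTreeClassifier().fit(X, y)        # the trained emotion recognition model
print(emotion_model.predict([encode("heavy_click")])[0])  # -> "irritable"
```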
  • In one possible implementation, the training sample acquisition module 401 is specifically configured to: determine a reference time interval according to the trigger time corresponding to the touch mode; obtain the content of the operation data generated by the user operating the terminal device within the reference time interval; and determine the emotional state of the user according to the content of the operation data, as the emotional state corresponding to the touch mode (a labeling sketch follows below).
  • For the specific working process of the training sample acquisition module 401, reference may be made to the description of the related content of determining the emotional state corresponding to the touch mode in the embodiment shown in FIG. 2.
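The following is a minimal sketch of labeling an emotional state from the content the user entered around the trigger time of the touch. The keyword lists, the returned emotion names, and the 30-second reference window are assumptions for illustration only.

```python
# Minimal labeling sketch; keyword sets, labels and the window length are assumptions.
from typing import List, Tuple

NEGATIVE_WORDS = {"annoyed", "angry", "terrible", "hate"}
POSITIVE_WORDS = {"great", "happy", "nice", "love"}

def label_emotion(touch_trigger_time: float,
                  operation_data: List[Tuple[float, str]],
                  window_seconds: float = 30.0) -> str:
    """operation_data: (timestamp, text the user typed or sent) pairs."""
    lo, hi = touch_trigger_time - window_seconds, touch_trigger_time + window_seconds
    words = [
        w.lower()
        for ts, text in operation_data
        if lo <= ts <= hi
        for w in text.split()
    ]
    neg = sum(w in NEGATIVE_WORDS for w in words)
    pos = sum(w in POSITIVE_WORDS for w in words)
    if neg > pos:
        return "irritable"
    if pos > neg:
        return "happy"
    return "calm"
```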
  • In one possible implementation, the training sample acquisition module 401 is specifically configured to: call a preset emotional state mapping relationship table, in which the correspondence between touch modes and emotional states is recorded; and look up the emotional state mapping relationship table to determine the emotional state corresponding to the touch mode (a lookup sketch follows below).
  • For the specific working process of the training sample acquisition module 401, reference may be made to the description of the related content of determining the emotional state corresponding to the touch mode in the embodiment shown in FIG. 2.
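A minimal sketch of such a preset mapping relationship table is given below. The concrete touch-mode-to-emotion pairs are assumptions for illustration; the patent only requires that such a correspondence table exists.

```python
# Minimal sketch of the preset emotional state mapping relationship table (entries assumed).
EMOTION_MAPPING_TABLE = {
    "light_click":       "calm",
    "heavy_click":       "irritable",
    "frequent_click":    "anxious",
    "small_range_slide": "calm",
    "large_range_slide": "excited",
    "frequent_slide":    "anxious",
}

def lookup_emotion(touch_mode: str) -> str:
    """Look up the emotional state corresponding to a touch mode; default to 'calm'."""
    return EMOTION_MAPPING_TABLE.get(touch_mode, "calm")
```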
  • In one possible implementation, the training sample acquisition module 401 is specifically configured to: collect, within a preset time period, the touch data generated by the user manipulating the terminal device; perform clustering processing on the touch data to generate touch data sets, and determine the touch mode corresponding to each touch data set; take the touch data set including the most touch data as the target touch data set, take the touch mode corresponding to the target touch data set as the target touch mode, and mark the emotional state corresponding to the target touch mode; and use the target touch mode and the emotional state corresponding to the target touch mode as a training sample (see the clustering sketch below).
  • For the specific working process of the training sample acquisition module 401, reference may be made to the description of the related content of determining the emotional state corresponding to the touch mode in the embodiment shown in FIG. 2.
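The sketch below uses k-means as one possible clustering method for the touch data collected in the preset time period and then selects the largest cluster. The feature layout, the number of clusters, and the way a touch mode is attached to each event are assumptions for illustration.

```python
# Minimal clustering sketch; k-means, n_clusters and the feature layout are assumptions.
from collections import Counter
from sklearn.cluster import KMeans
import numpy as np

def select_target_touch_mode(feature_vectors: np.ndarray,
                             touch_modes: list,
                             n_clusters: int = 3) -> str:
    """feature_vectors: one row per touch event collected in the preset time period;
    touch_modes: the touch mode determined for each event."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feature_vectors)
    # The cluster containing the most touch data is the target touch data set
    target_cluster = Counter(labels).most_common(1)[0][0]
    # The touch mode most common inside that cluster is taken as the target touch mode
    modes_in_cluster = [m for m, l in zip(touch_modes, labels) if l == target_cluster]
    return Counter(modes_in_cluster).most_common(1)[0][0]
```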
  • the touch data includes: screen capacitance value change data and coordinate value change data.
  • the device further includes:
  • the optimized training sample acquisition module is used to acquire the touch mode when the user manipulates the terminal device as an optimized touch mode, mark the emotional state corresponding to the optimized touch mode, and use the optimized touch mode and the emotional state corresponding to the optimized touch mode as an optimized training sample; the optimized training sample is used to perform optimization training on the emotion recognition model.
  • the optimized training sample acquisition module can refer to the description of the related content of acquiring optimized training samples in the embodiment shown in FIG. 2.
  • the device further includes:
  • the feedback information acquisition module is used to acquire user feedback information for the emotion recognition model; the feedback information is used to characterize whether the performance of the emotion recognition model meets user needs;
  • the first optimization training module is configured to use the optimized training sample to perform optimization training on the emotion recognition model when the feedback information indicates that the performance of the emotion recognition model does not meet the user requirements.
  • the feedback information acquisition module and the first optimization training module may specifically refer to the description of related content for optimization training of the emotion recognition model in the embodiment shown in FIG. 2.
  • the device further includes:
  • the second optimization training module is used to perform optimization training on the emotion recognition model by using the optimized training sample when the terminal device is in a charging state, and/or when the remaining power of the terminal device is higher than a preset power level, and/or when the duration for which the terminal device has been in an idle state exceeds a preset duration (a condition-check sketch follows below).
  • For the specific working process of the second optimization training module, reference may be made to the description of the related content of optimization training of the emotion recognition model in the embodiment shown in FIG. 2.
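As a rough illustration of gating the optimization training on the device state, the sketch below checks the three conditions named above. The DeviceStatus fields and the threshold values are assumptions, not parameters defined by the patent.

```python
# Minimal sketch of gating optimization training on device state; thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class DeviceStatus:
    charging: bool
    battery_percent: float
    idle_seconds: float

def may_run_optimization_training(status: DeviceStatus,
                                  min_battery: float = 50.0,
                                  min_idle_seconds: float = 600.0) -> bool:
    """Allow retraining when any of the three conditions from the embodiment holds."""
    return (status.charging
            or status.battery_percent > min_battery
            or status.idle_seconds > min_idle_seconds)

# Usage: only call the optimization training routine when this returns True.
if may_run_optimization_training(DeviceStatus(charging=False, battery_percent=80, idle_seconds=30)):
    pass  # retrain the emotion recognition model with the optimized training samples
```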
  • the model training apparatus provided in the above embodiments of the present application will target different terminal devices, correspondingly based on the touch mode when the user of the terminal device controls the terminal device and the emotional state corresponding to the touch mode, using machine learning algorithms to train Targetedly applicable to the emotion recognition model of the terminal device; in this way, applying the emotion recognition model trained for it on the terminal device can ensure that the emotion recognition model can accurately control the terminal device according to the user of the terminal device The touch mode at the time, to determine the emotional state of the user.
  • the present application also provides a corresponding emotion recognition device, so that the above emotion recognition method can be applied and realized in practice.
  • FIG. 5 is a schematic structural diagram of an emotion recognition device provided by an embodiment of the application; as shown in FIG. 5, the emotion recognition device includes:
  • the touch mode acquisition module 501 is used to obtain the touch mode when the user manipulates the terminal device;
  • the emotional state recognition module 502 is configured to use the emotional recognition model to determine the emotional state corresponding to the touch mode as the current emotional state of the user; the emotional recognition model is trained by executing the model training method described in FIG. 2.
  • the touch mode acquisition module 501 can be specifically used to execute the method in step 301.
  • the emotional state recognition module 502 can be specifically used to execute the method in step 302; please refer to the description of step 302 in the method embodiment shown in FIG. 3 for details, which will not be repeated here.
  • the device further includes:
  • the display style switching module is configured to switch the display style of the desktop interface according to the current emotional state of the user when the terminal device displays the desktop interface.
  • the display style switching module may refer to the description of related content about switching the display style of the desktop interface in the embodiment shown in FIG. 3.
  • the device further includes:
  • the content recommendation module is configured to recommend related content through the application according to the current emotional state of the user when the terminal device starts the application.
  • the content recommendation module may refer to the description of recommending related content through the application program in the embodiment shown in FIG. 3 for details.
  • the terminal device uses the emotion recognition model trained for itself to determine the current emotional state of the user according to the touch mode when the user manipulates itself.
  • the device can use the emotion recognition model to determine the user's emotional state in a targeted manner according to the touch mode when the user manipulates the terminal device, ensuring the accuracy of the determined emotional state; and the device does not require any additional external equipment in the process of determining the user's emotional state, so the purpose of improving user experience is achieved in a real sense.
  • FIG. 6 is a schematic diagram of the structure of a server for training models provided by an embodiment of this application.
  • the server 600 may vary considerably due to different configurations or performance, and may include one or more central processing units (CPU) 622 (for example, one or more processors), memory 632, and one or more storage media 630 (for example, one or more mass storage devices) for storing application programs 642 or data 644. The memory 632 and the storage medium 630 may be short-term storage or persistent storage.
  • the program stored in the storage medium 630 may include one or more modules (not shown in the figure), and each module may include a series of command operations on the server.
  • the central processing unit 622 may be configured to communicate with the storage medium 630, and execute a series of instruction operations in the storage medium 630 on the server 600.
  • the server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input and output interfaces 658, and/or one or more operating systems 641, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
  • the steps performed by the server in the foregoing embodiment may be based on the server structure shown in FIG. 6.
  • the CPU 622 is used to execute the following steps:
  • obtain the touch mode when the user manipulates the terminal device, mark the emotional state corresponding to the touch mode, and use the touch mode and the emotional state corresponding to the touch mode as training samples;
  • use a machine learning algorithm to train the classification model with the training samples to obtain an emotion recognition model; the emotion recognition model takes the touch mode when the user controls the terminal device as input and the emotional state corresponding to the touch mode as output.
  • the CPU 622 may also execute method steps of any specific implementation manner of the model training method in the embodiment of the present application.
  • In this implementation, the server needs to communicate with the terminal device to obtain training samples from the terminal device. It should be understood that training samples from different terminal devices should be tagged with the identification of the corresponding terminal device, so that the CPU 622 of the server can use the training samples from the same terminal device to train the emotion recognition model applicable to that terminal device using the model training method provided in the embodiments of the application (a grouping sketch follows below).
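The following sketch illustrates one way the server could group uploaded samples by terminal device identifier and train one model per device. The sample tuple layout and the use of a decision-tree classifier are assumptions for illustration.

```python
# Minimal per-device training sketch; sample structure and classifier choice are assumptions.
from collections import defaultdict
from sklearn.tree import DecisionTreeClassifier

def train_per_device(samples):
    """samples: iterable of (device_id, feature_vector, emotional_state)."""
    by_device = defaultdict(list)
    for device_id, features, emotion in samples:
        by_device[device_id].append((features, emotion))

    models = {}
    for device_id, device_samples in by_device.items():
        X = [f for f, _ in device_samples]
        y = [e for _, e in device_samples]
        models[device_id] = DecisionTreeClassifier().fit(X, y)
    return models   # one emotion recognition model per terminal device
```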
  • the embodiment of the application also provides another electronic device for training models and recognizing emotions (the electronic device may be the terminal device described above), which is used to execute the model training method provided by the embodiments of the application to train an emotion recognition model applicable to the electronic device itself; and/or, to execute the emotion recognition method provided in the embodiments of the present application, and use the trained emotion recognition model to recognize the user's current emotional state according to the touch mode when the user manipulates the electronic device.
  • FIG. 7 shows a schematic structural diagram of the above-mentioned electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include pressure sensor 180A, gyroscope sensor 180B, air pressure sensor 180C, magnetic sensor 180D, acceleration sensor 180E, distance sensor 180F, proximity light sensor 180G, fingerprint sensor 180H, temperature sensor 180J, touch sensor 180K, ambient light Sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering and amplifying the received electromagnetic waves, and then transmitting them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A also called a “speaker” is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the electronic device 100 answers a call or voice message, it can receive the voice by bringing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals.
  • the user can approach the microphone 170C through the mouth to make a sound, and input the sound signal to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch location but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
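The following is a minimal sketch of dispatching different operation instructions by touch intensity, as described for the short message icon above. The threshold value and the returned action names are assumptions for illustration only.

```python
# Minimal sketch of pressure-threshold dispatch; threshold and action names are assumptions.
FIRST_PRESSURE_THRESHOLD = 0.5   # hypothetical normalized pressure value

def handle_message_icon_touch(pressure: float) -> str:
    """Same touch position, different operation depending on touch intensity."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"       # light press: view the short message
    return "create_new_short_message"     # firm press: create a new short message
```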
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • That is, the gyro sensor 180B may be used to determine the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes).
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • When the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D, and then set features such as automatic unlocking of the flip cover based on the detected opening and closing state of the holster or of the flip cover.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and used in applications such as horizontal and vertical screen switching, pedometers and so on.
  • The distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 executes to reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 caused by low temperature; in still other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • Touch sensor 180K also called “touch device”.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can transmit the detected touch operation to the application processor to determine the touch mode.
  • the display screen 194 can provide visual output related to the touch operation.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • Acting on touch operations in different areas of the display screen 194, the motor 191 can also correspond to different vibration feedback effects.
  • Different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to exemplify the software structure of the electronic device 100.
  • FIG. 8 is a software structure block diagram of an electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Communication between layers through software interface.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include video, image, audio, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text and controls that display pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, prompt sounds, electronic devices vibrate, and indicator lights flash.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function library that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps of touch operations, etc.).
  • the original input events are stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer, and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation, and the control corresponding to the click operation is the control of the camera application icon as an example, the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
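To connect this input path back to the touch modes used elsewhere in this application, the sketch below turns raw input events (touch coordinates and time stamps, as produced by the kernel layer) into simple per-window statistics. The event tuple layout is an assumption for illustration; it is not the Android input event format.

```python
# Minimal sketch of deriving touch statistics from raw input events; event layout assumed.
from typing import List, Tuple

RawEvent = Tuple[float, float, float]   # (timestamp, x, y)

def window_features(events: List[RawEvent]) -> dict:
    if len(events) < 2:
        return {"duration": 0.0, "distance": 0.0, "rate": 0.0}
    duration = events[-1][0] - events[0][0]
    distance = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (_, x1, y1), (_, x2, y2) in zip(events, events[1:])
    )
    return {
        "duration": duration,
        "distance": distance,
        "rate": len(events) / duration if duration > 0 else 0.0,
    }
```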
  • the camera 193 captures still images or videos.
  • the embodiment of the present application also provides a computer-readable storage medium for storing program code, the program code is used to execute any one of the model training methods described in the foregoing embodiments, and/or the emotion recognition method Any one of the implementations.
  • the embodiments of the present application also provide a computer program product including instructions, which when run on a computer, cause the computer to execute any one of the implementations of the model training methods described in the foregoing embodiments, and/or the emotion recognition method Any one of the implementations.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

The embodiments of this application disclose a model training method and an emotion recognition method. Based on the touch mode used when a user manipulates a terminal device and the emotional state corresponding to the touch mode, this application uses machine learning technology to train a classification model to obtain an emotion recognition model; further, in practical applications, the emotion recognition model can be used to determine the user's current emotional state according to the touch mode used when the user manipulates the terminal device. In this way, the classification model is trained with the touch mode used when a user manipulates a particular terminal device and the corresponding emotional state, so that an emotion recognition model applicable to that terminal device is obtained; accordingly, when the terminal device applies the emotion recognition model to recognize the user's emotional state, the emotion recognition model can accurately recognize the user's emotional state in a targeted manner according to the touch mode used when the user manipulates the terminal device.

Description

Model training method, emotion recognition method, and related apparatus and device
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on April 17, 2019, with application number 201910309245.5 and the invention title "Model training method, emotion recognition method, and related apparatus and device", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a model training method, an emotion recognition method, and related apparatus and devices.
Background
Nowadays, terminal devices such as smartphones and tablet computers play an increasingly important role in people's daily lives. The user experience brought by a terminal device has become a key factor by which users measure the terminal device, and how to provide users with personalized services and improve user experience has become a research and development focus of terminal device manufacturers.
At present, some terminal devices have been developed that, starting from recognizing the user's emotional state, can provide the user with personalized services according to the recognized emotional state; whether reasonable personalized services can be provided mainly depends on the accuracy of the recognition of the user's emotional state. In the currently common method of recognizing emotions based on facial expressions, the accuracy of facial expression recognition changes with factors such as ambient light and the relative position between the user's face and the terminal device; that is, this method cannot guarantee accurate recognition of the user's facial expressions, which in turn makes the emotional state recognized based on facial expressions inaccurate.
In addition, the prior art also includes a method of determining the user's emotional state based on physiological signals of the human body. This method uses additional measurement devices to measure the user's physiological signals, such as heart rate, body temperature, and blood pressure, and then determines the user's emotional state by analyzing and processing these physiological signals. This method requires additional external devices in the process of implementation, and the use of external devices is rather cumbersome for the user; overall, this method does not truly achieve the purpose of improving user experience.
Summary of the Invention
The embodiments of this application provide a model training method, an emotion recognition method, and related apparatus and devices, which can accurately recognize the user's emotional state based on the trained emotion recognition model, so that the terminal device can provide the user with more reasonable personalized services according to the recognized emotional state.
In view of this, the first aspect of this application provides a model training method, which can be applied to a terminal device and a server. In this method, the touch mode used when a user manipulates the terminal device is obtained, the emotional state corresponding to the touch mode is marked, and the touch mode and the emotional state corresponding to the touch mode are used as a training sample; then, machine learning technology (MLT) is adopted to train a preset classification model with the above training sample, so as to obtain an emotion recognition model applicable to the terminal device, and the emotion recognition model can determine the emotional state corresponding to the touch mode according to the touch mode used when the user manipulates the terminal device.
The above model training method can, for different terminal devices, use a machine learning algorithm to train an emotion recognition model applicable to each terminal device in a targeted manner, based on the touch mode used when the user of the terminal device manipulates the terminal device and the emotional state corresponding to the touch mode; in this way, applying on a terminal device the emotion recognition model trained for it can ensure that the emotion recognition model can accurately determine the user's emotional state according to the touch mode used when the user of the terminal device manipulates the terminal device.
In a first implementation of the first aspect of the embodiments of this application, when determining the emotional state corresponding to a certain touch mode, a reference time interval can first be determined according to the trigger time corresponding to the touch mode; then, the content of the operation data generated by the user operating the terminal device within the reference time interval, such as the text content and voice content the user inputs to the terminal device, is obtained; further, by analyzing the content of the operation data obtained within the reference time interval, the emotional state corresponding to the content of the operation data is determined as the emotional state corresponding to the touch mode.
In this way, using the content of the operation data generated when the user manipulates the terminal device to determine the emotional state corresponding to the touch mode can ensure that the determined emotional state is reasonable and accurate, and thus ensure that the determined correspondence between the touch mode and the emotional state is reasonable and accurate.
In a second implementation of the first aspect of the embodiments of this application, when determining the emotional state corresponding to a certain touch mode, a preset emotional state mapping relationship table can be called, in which the correspondence between touch modes and emotional states is recorded; then, the emotional state corresponding to the touch mode is looked up in the emotional state mapping relationship table.
At present, relevant experiments have studied the correspondence between the touch mode used when users touch terminal devices and the users' emotional states, and have produced survey results that can reflect this correspondence. Generating an emotional state mapping table accordingly based on these survey results and determining the emotional state corresponding to the touch mode based on this mapping table can effectively ensure that the emotional state determined for the touch mode is objective and reasonable.
In a third implementation of the first aspect of the embodiments of this application, when obtaining a training sample, the touch data generated by the user manipulating the terminal device can first be collected within a preset time period; then, clustering processing is performed on the touch data to generate touch data sets, and the touch mode corresponding to each touch data set is determined; next, the touch data set including the most touch data is taken as the target touch data set, the touch mode corresponding to the target touch data set is taken as the target touch mode, the emotional state corresponding to the target touch mode is then marked, and the target touch mode and its corresponding emotional state are used as a training sample.
Usually, a user will use a variety of different touch modes to manipulate the terminal device within a period of time, while the user's emotional state may not change much during that period. Therefore, this implementation is needed to select, from the various touch modes used by the user in that period, the touch mode that best characterizes the user's current emotional state, i.e., the above target touch mode, and then use the target touch mode and the user's current emotional state as a training sample, which can effectively ensure that the correspondence between the touch mode and the emotional state is accurate and reasonable.
In a fourth implementation of the first aspect of the embodiments of this application, the touch data mentioned in the third implementation includes screen capacitance value change data and coordinate value change data. Since the touch screens used in most touch-screen devices at present are capacitive screens, using screen capacitance value change data and coordinate value change data as touch data can ensure that the method provided in the embodiments of this application can be widely applied in daily work and life.
In a fifth implementation of the first aspect of the embodiments of this application, after the emotion recognition model is obtained through training, the touch mode used when the user subsequently manipulates the terminal device can further be obtained as an optimized touch mode, the emotional state corresponding to the optimized touch mode is marked, and the optimized touch mode and its corresponding emotional state are used as an optimized training sample, so that the optimized training sample can subsequently be used to perform optimization training on the emotion recognition model.
As the duration of use increases, the touch mode used when the user touches the terminal device may also change. In order to ensure that the emotion recognition model can always accurately recognize the user's emotional state according to the user's touch mode, after the emotion recognition model is obtained through training, the touch mode used when the user manipulates the terminal device and the corresponding emotional state can be continuously collected as optimized training samples; then, when the emotion recognition model cannot accurately recognize the user's emotional state, the optimized training samples can be used to further optimize the emotion recognition model through training, so as to ensure that it always has good model performance.
In a sixth implementation of the first aspect of the embodiments of this application, feedback information from the user regarding the emotion recognition model can be obtained, and when the feedback information indicates that the performance of the emotion recognition model does not meet the user's needs, the optimized training samples obtained in the fifth implementation are used to perform optimization training on the emotion recognition model.
Since the emotion recognition model in this application is oriented to the user of the terminal device, the user's experience can be regarded as one of the most important criteria for measuring the performance of the emotion recognition model. When the user feeds back that the performance of the emotion recognition model can no longer meet the user's own needs, that is, when the user considers that the emotional state recognized by the emotion recognition model is not accurate enough, the optimized training samples obtained in the fifth implementation can be used to perform optimization training on the emotion recognition model, so that it meets the user's needs and improves the user experience.
In a seventh implementation of the first aspect of the embodiments of this application, the terminal device may use the optimized training samples obtained in the fifth implementation to perform optimization training on the emotion recognition model when any one or more of the following three conditions are met: the terminal device is in a charging state, the remaining power is higher than a preset power level, and the duration of being in an idle state exceeds a preset duration.
Since optimization training of the emotion recognition model usually consumes a large amount of power and may have a certain impact on other functions of the terminal device, in order to ensure that the user's normal use of the terminal device is not affected, the terminal device may perform optimization training on the emotion recognition model when any one or more of the above three conditions are met, thereby safeguarding the user experience.
The second aspect of this application provides an emotion recognition method, which is usually applied to a terminal device. In this method, the terminal device obtains the touch mode used when the user manipulates the terminal device, and uses the emotion recognition model running on the terminal device to determine the emotional state corresponding to the touch mode as the user's current emotional state; the emotion recognition model is trained for the terminal device using the model training method provided in the above first aspect.
This emotion recognition method uses the emotion recognition model to determine the user's emotional state in a targeted manner according to the touch mode used when the user manipulates the terminal device, which can ensure the accuracy of the determined emotional state; moreover, this method does not require any additional external devices in the process of determining the user's emotional state, and truly achieves the purpose of improving user experience.
In a first implementation of the second aspect of the embodiments of this application, when the terminal device itself displays the desktop interface, the terminal device can switch the display style of the desktop interface according to the user's current emotional state recognized by the emotion recognition model. In this way, by changing the display style of its desktop interface, the terminal device directly changes the user's visual experience, and adjusts or matches the user's emotional state in terms of visual perception, thereby improving the user experience.
In a second implementation of the second aspect of the embodiments of this application, when the terminal device itself has an application open, the terminal device can recommend related content through the application according to the user's current emotional state recognized by the emotion recognition model, for example, recommend related music content, video content, text content, and so on. In this way, in combination with the user's emotional state, related content is recommended to the user through the corresponding application, and the user's emotional state is adjusted in real time from multiple angles, thereby improving the user experience.
本申请第三方面提供了一种模型训练装置,所述装置包括:
训练样本获取模块,用于获取用户操控终端设备时的触控方式,标记所述触控方式对应的情绪状态;将所述触控方式以及所述触控方式对应的情绪状态,作为训练样本;
模型训练模块,用于采用机器学习算法,利用所述训练样本对分类模型进行训练,得到情绪识别模型;所述情绪识别模型以用户操控所述终端设备时的触控方式为输入,以该触控方式对应的情绪状态为输出。
在本申请实施例第三方面的第一种实现方式中,所述训练样本获取模块具体用于:
根据所述触控方式对应的触发时间,确定参考时间区间;
获取所述参考时间区间内用户操作所述终端设备生成的操作数据内容;
根据所述操作数据内容确定用户的情绪状态,作为所述触控方式对应的情绪状态。
在本申请实施例第三方面的第二种实现方式中,所述训练样本获取模块具体用于:
调用预置的情绪状态映射关系表;所述情绪状态映射关系表中记录有触控方式与情绪状态之间的对应关系;
查找所述情绪状态映射关系表,确定所述触控方式对应的情绪状态。
在本申请实施例第三方面的第三种实现方式中,所述训练样本获取模块具体用于:
在预设时间段内,采集用户操控所述终端设备产生的触控数据;
对所述触控数据做聚类处理生成触控数据集合,确定所述触控数据集合对应的触控方式;
将包括触控数据最多的触控数据集合作为目标触控数据集合,将所述目标触控数据集合对应的触控方式作为目标触控方式;标记所述目标触控方式对应的情绪状态;
将所述目标触控方式以及所述目标触控方式对应的情绪状态,作为训练样本。
在本申请实施例第三方面的第四种实现方式中,所述触控数据包括:屏幕电容值变化数据及坐标值变化数据。
在本申请实施例第三方面的第五种实现方式中,所述装置还包括:
优化训练样本获取模块,用于获取用户操控所述终端设备时的触控方式,作为优化触控方式;标记所述优化触控方式对应的情绪状态;将所述优化触控方式和所述优化触控方式对应的情绪状态,作为优化训练样本;所述优化训练样本用于对所述情绪识别模型进行优化训练。
在本申请实施例第三方面的第六种实现方式中,所述装置还包括:
反馈信息获取模块,用于获取用户针对所述情绪识别模型的反馈信息;所述反馈信息用于表征所述情绪识别模型的性能是否满足用户需求;
第一优化训练模块,用于在所述反馈信息表征所述情绪识别模型的性能不满足用户需求时,利用所述优化训练样本对所述情绪识别模型进行优化训练。
在本申请实施例第三方面的第七种实现方式中,所述装置还包括:
第二优化训练模块，用于在所述终端设备处于充电状态时，和/或，在所述终端设备的剩余电量高于预设电量时，和/或，在所述终端设备处于空闲状态的时长超过预设时长时，利用所述优化训练样本对所述情绪识别模型进行优化训练。
本申请第四方面提供了一种情绪识别装置,所述装置包括:
触控方式获取模块,用于获取用户操控终端设备时的触控方式;
情绪状态识别模块,用于利用情绪识别模型确定所述触控方式对应的情绪状态,作为用户当前的情绪状态;所述情绪识别模型是执行第一方面所述的模型训练方法训练得到的。
在本申请实施例第四方面的第一种实现方式中,所述装置还包括:
显示样式切换模块,用于在所述终端设备显示桌面界面的情况下,根据所述用户当前的情绪状态,切换桌面界面的显示样式。
在本申请实施例第四方面的第二种实现方式中,所述装置还包括:
内容推荐模块,用于在所述终端设备开启应用程序的情况下,根据所述用户当前的情绪状态,通过所述应用程序推荐相关内容。
本申请第五方面提供了一种服务器,所述服务器包括处理器以及存储器:
所述存储器用于存储程序代码,并将所述程序代码传输给所述处理器;
所述处理器用于根据所述程序代码中的指令执行上述第一方面所述的模型训练方法。
本申请第六方面提供了一种终端设备,所述终端设备包括处理器以及存储器:
所述存储器用于存储程序代码,并将所述程序代码传输给所述处理器;
所述处理器用于根据所述程序代码中的指令执行第一方面所述的模型训练方法,和/或,执行第二方面所述的情绪识别方法。
本申请第七方面提供了一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行第一方面所述的模型训练方法,和/或执行第二方面所述的情绪识别方法。
附图说明
图1为本申请实施例提供的模型训练方法和情绪识别方法的应用场景示意图;
图2为本申请实施例提供的一种模型训练方法的流程示意图;
图3为本申请实施例提供的一种情绪识别方法的流程示意图;
图4为本申请实施例提供的一种模型训练装置的结构示意图;
图5为本申请实施例提供的一种情绪识别装置的结构示意图;
图6为本申请实施例提供的一种服务器的结构示意图;
图7为本申请实施例提供的一种电子设备的结构示意图;
图8为本申请实施例提供的一种电子设备的软件结构框图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四” 等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
为了进一步提高终端设备为使用者带来的用户体验,为用户提供更贴心、更个性化的服务,目前已有一些终端设备厂商从识别用户情绪状态的角度出发,开发终端设备识别用户情绪状态的功能,目前较为常见的应用于终端设备识别情绪状态的方法包括以下三种:
面部表情识别法,利用终端设备上的摄像装置采集用户的面部表情,进而通过对用户的面部表情进行分析处理,确定用户的情绪状态;由于在不同的场景下光线不同,以及用户面部与终端设备之间的相对位置不稳定,该方法无法保证在各种情况下均准确识别用户面部表情,相应地,在用户面部表情识别准确度较低的情况下,也无法保证所识别的用户情绪状态的准确度。
语音识别法,利用终端设备采集用户输入的语音内容,通过对语音内容进行分析处理,确定用户的情绪状态;这种方法需要用户主动向终端设备语音输入表达情绪状态的语音内容,进而终端设备才能够相应地确定用户的情绪状态;而在多数情况下,用户并不会主动告知终端设备自身的情绪状态,可见,这种方法在实际应用中的应用价值较低。
人体生理信号识别法,终端设备通过额外的测量装置或传感器采集人体生理信号,如心率、体温、血压等,进而,终端设备对所采集的人体生理信号进行分析处理,确定用户的情绪状态;这种方法在实现的过程中需要借助额外的外接设备,而外接设备的使用对于用户来说通常较为累赘,即会从另一个方面为用户带来不好的用户体验,并未真正达到提高用户体验的目的。
为了使终端设备能够准确地识别用户的情绪状态，保证从真正意义上提高终端设备所带来的用户体验，本申请另辟蹊径，基于当前终端设备的屏占比越来越高，且用户频繁地通过触摸屏（touch panel，TP）与终端设备进行交互这一现象，利用用户在相同情绪状态下操控终端设备的触控方式具有相似的规律这一特点，训练出一种能够基于用户操控终端设备时的触控方式，确定用户当前情绪状态的模型，进而使得终端设备利用该模型识别用户的情绪状态，为用户提供合理的个性化服务。
具体的,在本申请实施例提供的模型训练方法中,终端设备获取用户操控终端设备时的触控方式,并标记触控方式对应的情绪状态,将触控方式及其对应的情绪状态作为训练样本;进而采用机器学习算法,利用上述训练样本对分类模型进行训练得到情绪识别模型。相应地,在本申请实施例提供的情绪识别方法中,终端设备获取用户操控该终端设备时的触控方式,进而利用通过上述模型训练方法训练得到的情绪识别模型,确定所获取的触控方式对应的情绪状态,将该情绪状态作为用户当前的情绪状态。
需要说明的是，本申请实施例提供的模型训练方法会针对不同的终端设备，相应地基于该终端设备的使用者操控该终端设备时的触控方式以及触控方式对应的情绪状态，采用机器学习算法训练出有针对性地适用于该终端设备的情绪识别模型；如此，在终端设备上应用针对其训练得到的情绪识别模型，可以保证该情绪识别模型能够准确地根据该终端设备的使用者操控该终端设备时的触控方式，确定使用者的情绪状态。
相比现有技术中常用的几种情绪识别方法,本申请实施例提供的方法能够利用情绪识别模型,有针对性地根据用户操控终端设备时的触控方式,确定用户的情绪状态,保证所确定的情绪状态的准确性;并且,本申请实施例提供的方法在确定用户情绪状态的过程中,无需任何额外的外接设备,真正意义上实现了提高用户体验的目的。
应理解，本申请实施例提供的模型训练方法以及情绪识别方法可以应用于配置有触摸屏的终端设备（也可以称为电子设备）以及服务器；其中，终端设备具体可以为智能手机、平板电脑、计算机、个人数字助理（Personal Digital Assistant，PDA）等；服务器具体可以为应用服务器，也可以为Web服务器。
为了便于理解本申请实施例提供的技术方案,下面以终端设备作为执行主体为例,对本申请实施例提供的模型训练方法及情绪识别方法的应用场景进行介绍。
参见图1,图1为本申请实施例提供的模型训练方法和情绪识别方法的应用场景示意图。如图1所示,该应用场景中包括终端设备101,该终端设备101既用于执行本申请实施例提供的模型训练方法训练情绪识别模型,又用于执行本申请实施例提供的情绪识别方法对用户的情绪状态进行识别。
在模型训练阶段,终端设备101获取用户操控自身时的触控方式,该触控方式具体可以包括:不同力度和/或不同频率的点击操作、滑动操作等;标记所获取的触控方式对应的情绪状态;进而,将所获取的触控方式以及其对应的情绪状态作为训练样本。终端设备101获取到训练样本后,采用机器学习算法,利用所获取的训练样本对终端设备101内预先构建的分类模型进行训练,从而得到情绪识别模型。
在模型应用阶段,终端设备101执行本申请实施例提供的情绪识别方法,利用在模型训练阶段训练得到的情绪识别模型,识别用户的情绪状态;具体的,终端设备101获取用户操控自身时的触控方式,利用情绪识别模型确定所获取的触控方式对应的情绪状态,作为用户当前的情绪状态。
需要说明的是,在模型训练阶段,终端设备101是基于自身的使用者操控自身时的触控方式以及其对应的情绪状态,训练得到的情绪识别模型,该情绪识别模型是针对性适用于终端设备101的;相应地,在模型应用阶段,终端设备101利用在模型训练阶段训练得到的情绪识别模型,根据用户操控自身时的触控方式确定用户当前的情绪状态,能够有效地保证所确定出的情绪状态的准确性。
应理解,上述图1所示的应用场景仅为一种示例,在实际应用中,本申请实施例提供的模型训练方法和情绪识别方法还可以应用于其他应用场景,在此不对本申请实施例提供的模型训练方法和情绪识别方法的应用场景做具体限定。
下面先通过实施例对本申请提供的模型训练方法进行介绍。
参见图2,图2为本申请实施例提供的一种模型训练方法的流程示意图。如图2所示,该模型训练方法包括以下步骤:
步骤201：获取用户操控终端设备时的触控方式，标记所述触控方式对应的情绪状态；将所述触控方式以及所述触控方式对应的情绪状态，作为训练样本。
终端设备在用户操控自身时,获取用户触控触摸屏时的触控方式,该触控方式也可以理解为触控操作,触控操作具体可以为用户针对触摸屏发起的单步触控操作,如不同力度下的点击操作、不同力度下的滑动操作等,也可以为用户针对触摸屏发起的连续触控操作,如不同频率的连续点击操作、不同频率的连续滑动操作等,当然,用户触控触摸屏时所采用的其他触控操作也可视为本申请中的触控方式,在此不对本申请中的触控方式做具体限定。
进而,终端设备标记所获取的触控方式对应的情绪状态,该情绪状态即为用户发起该触控方式时的情绪状态;将该触控方式和触控方式对应的情绪状态作为训练样本。
应理解,为了保证基于训练样本训练得到的情绪识别模型具备较好的模型性能,通常需要获取大量的训练样本;当然,为了减少终端设备的数据处理量,也可以根据实际需求减少所获取的训练样本的数量,在此不对所获取的训练样本的数量做具体限定。
需要说明的是,触控方式通常需要基于用户触控触摸屏时产生的触控数据来确定;对于电容屏来说,触控数据通常包括屏幕电容值变化数据和屏幕坐标值变化数据,其中,屏幕电容值变化数据能够表征用户点击或滑动触摸屏时的力度,以及用户点击或滑动触摸屏时与触摸屏之间的接触面积,用户点击或滑动触摸屏的力度越大,屏幕电容值的变化幅度越大,用户点击或滑动触摸屏时与触摸屏的接触面积越大,发生变化的屏幕电容值越多;屏幕坐标值变化数据实际上也是根据屏幕电容值变化数据确定的,屏幕坐标值变化数据能够表征用户点击触摸屏时的点击位置,以及用户滑动触摸屏时的滑动方向和滑动距离;在用户触控终端设备的触摸屏时,终端设备的底层驱动会通过输入(input)子系统向终端设备的处理器上报屏幕电容值变化数据及其对应的位置坐标,通过记录连续变化的位置坐标即可确定滑动方向和滑动距离。
应理解,对于其他类型的触摸屏来说,用户触控触摸屏将相应地产生其他触控数据,例如,对于电阻屏来说,用户触控电阻屏会相应地产生屏幕电阻值变化数据和屏幕坐标值变化数据,这些数据均能相应地反映用户当前的触控方式,在此不对触控数据的具体类型做任何限定。
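为便于理解，下面给出一段示意性的Python代码，演示如何根据触摸屏上报的连续坐标点估算一次滑动的方向与距离；其中的函数名与数据结构均为便于说明而假设的，并非对实际底层驱动接口或实现方式的限定。

```python
import math

def summarize_swipe(points):
    """根据一次滑动中连续上报的屏幕坐标序列，估算滑动方向与滑动距离。

    points：[(x, y), ...]，按上报先后排序的坐标点（假设的输入格式）。
    返回 (方向角度/度, 滑动总距离/像素)；坐标点不足两个时视为点击，返回None。
    """
    if len(points) < 2:
        return None

    # 滑动距离：相邻坐标点之间距离的累加
    distance = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )

    # 滑动方向：以起点指向终点的向量角度近似表示
    (x0, y0), (xn, yn) = points[0], points[-1]
    direction = math.degrees(math.atan2(yn - y0, xn - x0))
    return direction, distance
```

例如，summarize_swipe([(0, 0), (30, 0), (60, 0)]) 返回 (0.0, 60.0)，表示一次沿水平方向、总长60像素的滑动。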
具体基于触控数据确定触控方式、构建训练样本时,终端设备可以在预设时间段内采集用户操控终端设备产生的触控数据;对所采集的触控数据做聚类处理生成触控数据集合,并确定触控数据集合对应的触控方式;将包括触控数据最多的触控数据集合作为目标触控数据集合,将该目标触控数据集合对应的触控方式作为目标触控方式,进而标记该目标触控方式对应的情绪状态;最终,将目标触控方式以及目标触控方式对应的情绪状态作为训练样本。
具体的，在预设时间段内用户通常会多次操控终端设备，相应地，终端设备可以采集到多个触控数据；将具有相似特征的触控数据聚类起来，例如，可以将变化幅度相似的屏幕电容值变化数据聚类到一起，将对应的点击位置相似的屏幕电容值变化数据聚类到一起，将所表征的滑动轨迹相似的屏幕坐标值变化数据聚类到一起，等等，由此得到若干个触控数据集合；进而，根据各个触控集合中触控数据的类型，相应地标记每个触控集合对应的触控方式，例如，对于由变化幅度均超过预设幅度阈值的触控数据组成的触控数据集合，可以标记其对应的触控方式为重度点击，对于由变化频率均超过预设频率阈值的触控数据组成的触控数据集合，可以标记其对应的触控方式为频繁点击，对于由变化频率均超过预设频率阈值的屏幕坐标值变化数据组成的触控数据集合，可以标记其对应的触控方式为频繁滑动，等等。
进而,确定包括有最多的触控数据的触控数据集合为目标触控数据集合,并相应地将该目标触控数据集合对应的触控方式作为目标触控方式;根据预设时间段内终端设备采集的能够表征用户的情绪状态的操作数据内容,和/或,根据情绪状态映射表中记录的触控方式与情绪状态之间的对应关系,确定目标触控方式对应的情绪状态;最终,将目标触控方式及其对应的情绪状态,作为训练样本。
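下面用一段示意性的Python代码说明“分组—标记触控方式—取最大集合”这一训练样本构建流程；为简化说明，这里用基于阈值的规则分组代替真正的聚类算法（如k-means），其中的阈值与字段名均为假设值，并非本申请限定的实现方式。

```python
from collections import defaultdict

# 假设性的阈值，仅用于示意分组后标记触控方式的思路
HEAVY_PRESS_THRESHOLD = 0.6   # 归一化后的电容变化幅度
FREQUENT_THRESHOLD = 3.0      # 触发频率，次/秒

def build_training_sample(touch_events, emotion_label):
    """touch_events：[{"amplitude": 电容变化幅度, "rate": 触发频率, "is_swipe": 是否滑动}, ...]
    emotion_label：按下文两种方法之一标记出的情绪状态。
    将预设时间段内的触控数据分组，取包含数据最多的集合对应的触控方式作为目标触控方式。
    """
    if not touch_events:
        return None

    clusters = defaultdict(list)
    for event in touch_events:
        if event["is_swipe"]:
            mode = "频繁滑动" if event["rate"] > FREQUENT_THRESHOLD else "普通滑动"
        elif event["rate"] > FREQUENT_THRESHOLD:
            mode = "频繁点击"
        elif event["amplitude"] > HEAVY_PRESS_THRESHOLD:
            mode = "重度点击"
        else:
            mode = "轻度点击"
        clusters[mode].append(event)

    # 包括触控数据最多的触控数据集合即目标触控数据集合
    target_mode = max(clusters, key=lambda m: len(clusters[m]))
    return {"触控方式": target_mode, "情绪状态": emotion_label}
```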
应理解,在采集训练样本的过程中,通常可以通过上述方式采集到很多的目标触控方式以及对应的情绪状态,相应地,在训练情绪识别模型时,可以基于所采集的目标触控方式的类别确定情绪识别模型所能识别的触控方式,基于各目标触控方式对应的情绪状态确定情绪识别模型所能确定的情绪状态。
针对标记触控方式对应的情绪状态的实现方法,本申请提供了以下两种实现方法:
第一种方法,终端设备根据触控方式对应的触发时间,确定参考时间区间;获取该参考时间区间内用户操作终端设备生成的操作数据内容;进而,根据该操作数据内容确定用户的情绪状态,作为该触控方式对应的情绪状态。
具体的,终端设备可以确定触控方式对应的触发时间,将该触发时间作为中心点,按照预设的参考时间区间长度确定参考时间区间;此外,终端设备也可以将触控方式对应的触发时间作为起始点或终止点,按照预设的参考时间区间长度确定参考时间区间,当然,终端设备还可以采用其他方式,根据触控方式对应的触发时间确定参考时间区间,在此不对确定参考时间区间的方式做任何限定。
应理解,上述参考时间区间长度可以根据实际需求设定,在此不对该参考时间区间长度做具体限定。
确定出参考时间区间后,终端设备获取用户在该参考时间区间内操控终端设备产生的操作数据内容,该操作数据内容是用户操控该终端设备产生的相关数据内容,该操作数据内容具体可以为用户在参考时间区间内输入该终端设备的文字内容,也可以为用户在参考时间区间内输入该终端设备的语音内容,还可以为用户通过终端设备上的应用程序产生的其他操作数据内容,在此不对该操作数据内容的类型做任何限定。
获取到操作数据内容后，终端设备可以相应地通过对该操作数据内容进行分析处理，确定该操作数据内容对应的情绪状态；例如，当操作数据内容为用户输入的文字内容时，终端设备可以通过对该文字内容进行语义分析确定其对应的情绪状态；当操作数据内容为用户输入的语音内容时，终端设备可以通过对该语音内容进行语音识别分析确定其对应的情绪状态；当操作数据内容为其他形式的数据内容时，终端设备也可以相应地采用其他方式确定其对应的情绪状态，在此也不对确定操作数据内容对应的情绪状态的方式做任何限定。最终，将操作数据内容对应的情绪状态作为触控方式对应的情绪状态。
应理解，当通过对预设时间段内的触控数据进行聚类处理确定目标触控方式时，可以直接将该预设时间段作为参考时间区间，进而根据该预设时间段内用户操控终端设备产生的操作数据内容，确定操作数据内容对应的情绪状态，作为目标触控方式对应的情绪状态。
需要说明的是,终端设备在获取操作数据内容之前,需要获得用户的许可权限,只有在用户允许终端设备获取操作数据内容的情况下,终端设备才可获取用户操控终端设备产生的操作数据内容,并基于所获取的操作数据内容为触控方式标记对应的情绪状态;并且,终端设备在获取到操作数据内容后,还需要加密存储所获取的操作数据内容,以保障用户的数据隐私安全。
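以第一种方法为例，下面的Python代码示意了“根据触发时间确定参考时间区间、再由区间内的操作数据内容标记情绪状态”的过程；其中以触发时间为中心取区间、以关键词匹配代替真正的语义分析，均是为便于说明而做的假设，并非本申请限定的实现方式。

```python
import datetime

# 假设性的情绪关键词表，实际可替换为任意文本情感分析算法
EMOTION_KEYWORDS = {
    "高兴": ["开心", "太棒了", "哈哈"],
    "烦躁": ["烦死了", "气死", "讨厌"],
    "难过": ["难受", "想哭", "唉"],
}

def label_emotion_by_content(trigger_time, operations, half_window_seconds=60):
    """trigger_time：触控方式对应的触发时间（datetime对象）。
    operations：[(时间戳, 文字内容), ...]，用户操作终端设备生成的操作数据内容。
    以触发时间为中心确定参考时间区间，分析区间内的文字内容并返回对应的情绪状态。
    """
    start = trigger_time - datetime.timedelta(seconds=half_window_seconds)
    end = trigger_time + datetime.timedelta(seconds=half_window_seconds)
    texts = [text for ts, text in operations if start <= ts <= end]

    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(keyword in text for text in texts for keyword in keywords):
            return emotion
    return None  # 区间内无法判断时，可回退到第二种方法（情绪状态映射关系表）
```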
第二种方法,终端设备调用预置的情绪状态映射关系表;该情绪状态映射关系表中记录有触控方式与情绪状态之间的对应关系;进而,在该情绪状态映射关系表中查找触控方式对应的情绪状态。
目前已有相关研究调研发现,用户触控终端设备时的触控方式与用户的情绪状态之间存在一定的映射关系,并且已经生成了一些能够反映这种映射关系的调研结果,通过整理这些已有的调研结果相应地生成情绪状态映射关系表,利用该情绪状态映射关系表记录各种触控方式与情绪状态之间的对应关系。
在获取到用户操控终端设备时的触控方式后,终端设备可以调用自身预置的该情绪状态映射关系表,进而,在该情绪状态映射关系表中查找所获取的触控方式对应的情绪状态。
应理解,当通过对预设时间段内的触控数据进行聚类处理确定目标触控方式时,可以在该情绪状态映射关系表中查找目标触控方式对应的情绪状态。
需要说明的是,采用上述第一种方法,根据用户操作终端设备产生的操作数据内容,为触控方式标记出其对应的情绪状态后,还可以进一步利用如此确定的触控方式与情绪状态之间的对应关系,对上述情绪状态映射表进行优化更新处理,以不断地丰富情绪状态映射表中记录的映射关系。
需要说明的是,在实际应用中,可以单独采用上述第一种方法或第二种方法标记触控方式对应的情绪状态,也可以将上述第一种方法与第二种方法结合起来标记触控方式对应的情绪状态,即,可以在采用第一种方法无法准确地确定触控方式对应的情绪状态时,采用第二种方法确定触控方式对应的情绪状态,也可以在采用第二种方法无法准确地确定触控方式对应的情绪状态时,采用第一种方法确定触控方式对应的情绪状态,还可以根据采用这两种方法分别确定出的情绪状态,确定触控方式对应的情绪状态。
应理解,在实际应用中,除了可以采用上述两种方法为触控方式标记情绪状态的方式外,还可以根据实际需求,选择其他的方法确定触控方式所对应的情绪状态,在此不对标记情绪状态的方法做任何限定。
需要说明的是,对于同一个用户来说,其经常表现的情绪状态基本上是特定的,在特定的情绪状态下触控终端设备所采用的触控方式也是特定的;基于上述方法采集训练样本,可以保证所采集到的训练样本中包括的触控方式大多数为用户经常采用的触控方式,触控方式对应的情绪状态也多数属于用户经常表现的情绪状态,相应地,可以保证基于这些训练样本训练得到的情绪识别模型,能更敏感地根据用户触控终端设备时经常采用的触控方式,确定用户经常表现的情绪状态。
步骤202：采用机器学习算法，利用所述训练样本对分类模型进行训练，得到情绪识别模型；所述情绪识别模型以用户操控所述终端设备时的触控方式为输入，以该触控方式对应的情绪状态为输出。
获取到用于训练情绪识别模型的训练样本后,终端设备可以采用机器学习算法,利用所获取的训练样本对预置在终端设备内的分类模型进行训练,以对该分类模型的模型参数进行不断地优化,待该分类模型满足训练结束条件后,根据该分类模型的模型结构和模型参数生成情绪识别模型。
具体训练情绪识别模型时,终端设备可以将训练样本中的触控方式输入分类模型,该分类模型通过对该触控方式进行分析处理,输出该触控方式对应的情绪状态,根据该分类模型输出的情绪状态和训练样本中触控方式对应的情绪状态构建损失函数,进而,根据该损失函数对分类模型中的模型参数进行调整,从而实现对分类模型的优化,当分类模型满足训练结束条件时,可以根据当前分类模型的模型结构和模型参数生成情绪识别模型。
具体判断分类模型是否满足训练结束条件时,可以利用测试样本对第一模型进行验证,测试样本与训练样本相类似,其中包括触控方式以及触控方式对应的情绪状态,该第一模型是利用多个训练样本对分类模型进行第一轮训练优化得到的模型;具体的,终端设备将测试样本中的触控方式输入该第一模型,利用该第一模型对触控方式进行相应地处理,得到该触控方式对应的情绪状态;进而,根据测试样本中触控方式对应的情绪状态和该第一模型输出的情绪状态计算预测准确率,当该预测准确率大于预设阈值时,即可认为第一模型的模型性能已能够满足需求,则可以根据该第一模型的模型参数以及模型结构,生成情绪识别模型。
应理解,上述预设阈值可以根据实际情况进行设定,在此不对该预设阈值做具体限定。
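下面给出一段示意性的Python训练代码，以随机森林分类器为例说明“用训练样本训练分类模型、再用测试样本验证预测准确率是否达到预设阈值”的流程；分类模型的具体类型、特征设计和阈值取值都只是假设，并非本申请限定的实现方式。

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def train_emotion_model(train_features, train_labels,
                        test_features, test_labels,
                        accuracy_threshold=0.9):
    """train_features：触控方式的特征向量列表（如[按压幅度, 触发频率, 滑动距离]）。
    train_labels：各训练样本对应的情绪状态标签。
    在测试样本上的预测准确率不低于预设阈值时，认为满足训练结束条件。
    """
    model = RandomForestClassifier(n_estimators=100)
    model.fit(train_features, train_labels)

    predictions = model.predict(test_features)
    accuracy = accuracy_score(test_labels, predictions)
    if accuracy >= accuracy_threshold:
        return model   # 模型性能满足需求，作为情绪识别模型使用
    return None        # 否则继续采集训练样本并继续训练
```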
此外,判断分类模型是否满足训练结束条件时,还可以根据经多轮训练得到的多个模型,确定是否继续对分类模型进行训练,以获得模型性能最优的情绪识别模型。具体的,可以利用测试样本分别对经多轮训练得到的多个分类模型进行验证,若判断经各轮训练得到的模型的预测准确率之间的差距较小,则认为分类模型的性能已经没有提升空间,可以选取预测准确率最高的分类模型,根据该分类模型的模型参数和模型结构确定情绪识别模型;若经各轮训练得到的分类模型的预测准确率之间具有较大的差距,则认为该分类模型的性能还有提升的空间,可继续对该分类模型进行训练,直到获得模型性能最稳定且最优的情绪识别模型。
此外,终端设备还可以根据用户的反馈信息,确定分类模型是否满足训练结束条件。具体的,终端设备可以提示用户对正在训练的分类模型进行测试使用,并相应地反馈针对该分类模型的反馈信息,若用户针对该分类模型的反馈信息表征该分类模型目前的性能仍无法满足用户当前需求,则终端设备需要利用训练样本,对该分类模型继续进行优化训练;反之,若用户针对该分类模型的反馈信息表征该分类模型目前的性能已较好,基本满足用户当前需求,则终端设备可以根据该分类模型的模型结构和模型参数,生成情绪识别模型。
需要说明的是,用户触控终端设备的触控方式随着使用时间的增加,可能会发生改变,因此,在训练得到情绪识别模型后,终端设备还可以继续采集优化训练样本,并利用所采集的优化训练样本对情绪识别模型做进一步优化训练,以优化情绪识别模型的模型性能,使其能够更准确地根据用户的触控方式确定用户的情绪状态。
具体的，在得到情绪识别模型之后，终端设备可以继续获取用户操控终端设备时的触控方式，将其作为优化触控方式；并标记优化触控方式对应的情绪状态，具体标记情绪状态的方法可以参见步骤201中的相关描述，将优化触控方式以及优化触控方式对应的情绪状态作为优化训练样本，该优化训练样本用于对情绪识别模型做优化训练。
在一种可能的实现方式中,终端设备可以响应于用户的反馈信息,发起对情绪识别模型的优化训练。即,终端设备可以获取用户针对该情绪识别模型的反馈信息,该反馈信息用于表征该情绪识别模型的性能是否满足用户需求;在所获取的反馈信息表征情绪识别模型的性能不满足用户需求时,利用优化训练样本对该情绪识别模型进行优化训练。
具体的,终端设备可以定期发起反馈信息获取操作,例如,终端设备可以定期显示情绪识别模型反馈信息获取界面,以通过该界面获取用户针对情绪识别模型的反馈信息;当然,终端设备也可以通过其他方式获取反馈信息,在此不对反馈信息的获取方式做任何限定。
终端设备获取到反馈信息后,若确定反馈信息表征情绪识别模型当前的性能不满足用户的需求,则相应地获取优化训练样本,对该情绪识别模型做进一步优化训练;反之,若确定反馈信息表征情绪识别模型当前的性能已满足用户的需求,则暂时不对该情绪识别模型做进一步优化训练。
在另一种可能的实现方式中，终端设备可以直接在其自身处于充电状态时，和/或，在其自身的剩余电量高于预设电量时，和/或，在其自身处于空闲状态的时长超过预设时长时，利用优化训练样本对情绪识别模型进行优化训练。
对情绪识别模型进行优化训练时需要耗费终端设备的电量，并且优化训练的过程可能会对终端设备的其他功能造成一定的影响，例如，影响终端设备上应用程序的运行速度；为了保证在不影响用户使用终端设备的情况下，对情绪识别模型及时地进行优化训练，终端设备可以在自身处于充电状态时，利用优化训练样本对该情绪识别模型进行优化训练；或者，终端设备可以在其剩余电量高于预设电量时，利用优化训练样本对该情绪识别模型进行优化训练；或者，终端设备可以在其处于空闲状态的时长超过预设时长的情况下，利用优化训练样本对情绪识别模型进行优化训练，此处的空闲状态具体是指用户不使用终端设备时终端设备所处的状态；再或者，终端设备可以在满足自身处于充电状态、剩余电量高于预设电量以及空闲状态时长超过预设时长中任意两个条件或三个条件时，利用优化训练样本对情绪识别模型进行优化训练。
应理解,预设电量可以根据实际需求进行设定,在此不对预设电量的数值做具体限定;预设时长也可以根据实际需求进行设定,在此也不对预设时长的数值做具体限定。
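该实现方式可以示意为一个简单的条件判断，下面的Python片段中device为假设性的设备状态对象，阈值参数也仅为示例取值。

```python
def should_retrain(device, min_battery=0.5, min_idle_seconds=600):
    """判断终端设备当前是否适合对情绪识别模型进行优化训练。
    device：包含is_charging（是否处于充电状态）、battery_level（剩余电量比例）、
    idle_seconds（空闲时长/秒）等字段的假设性对象。
    三项条件可按需组合，此处以"满足任意一项即可"为例。
    """
    return (device.is_charging
            or device.battery_level >= min_battery
            or device.idle_seconds >= min_idle_seconds)
```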
应理解,在实际应用中,除了可以采用上述两种实现方式确定优化训练情绪识别模型的时机外,还可以根据其他条件确定优化训练情绪识别模型的时机,例如,在优化训练样本达到预设数量时即可对情绪识别模型进行优化训练,又例如,可以设置优化训练周期,按照该优化训练周期对情绪识别模型进行优化训练,在此不对确定优化训练情绪识别模型的时机的方式做任何限定。
上述本申请实施例提供的模型训练方法会针对不同的终端设备，相应地基于该终端设备的使用者操控该终端设备时的触控方式以及触控方式对应的情绪状态，采用机器学习算法训练出有针对性地适用于该终端设备的情绪识别模型；如此，在终端设备上应用针对其训练得到的情绪识别模型，可以保证该情绪识别模型能够准确地根据该终端设备的使用者操控该终端设备时的触控方式，确定使用者的情绪状态。
基于上述实施例提供的模型训练方法,可以训练得到具备较好的模型性能的情绪识别模型,基于该情绪识别模型,本申请进一步提供了一种情绪识别方法,以便更清楚地了解上述情绪识别模型在实际应用中所起的作用。下面通过实施例对本申请提供的情绪识别方法做具体介绍。
参见图3,图3为本申请实施例提供的情绪识别方法的流程示意图。如图3所示,该情绪识别方法包括以下步骤:
步骤301:获取用户操控终端设备时的触控方式。
用户操控终端设备时,终端设备会相应地获取用户的触控方式,该触控方式也可以理解为触控操作,触控操作具体可以为用户针对触摸屏发起的单步触控操作,如不同力度下的点击操作、不同力度下的滑动操作等,也可以为用户针对触摸屏发起的连续触控操作,如不同频率的连续点击操作、不同频率的连续滑动操作等,当然,用户触控触摸屏时所采用的其他触控操作也可视为本申请中的触控方式,在此不对本申请中的触控方式做具体限定。
需要说明的是,通常情况下,上述触控方式是基于终端设备所获取的触控数据确定的,即在用户操控终端设备时,终端设备会获取到用户触控触摸屏产生的触控数据,进而,基于所获取的触控数据确定触控方式。
对于电容屏来说,触控数据通常包括屏幕电容值变化数据和屏幕坐标值变化数据,其中,屏幕电容值变化数据能够表征用户点击或滑动触摸屏时的力度,以及用户点击或滑动触摸屏时与触摸屏之间的接触面积;屏幕坐标值变化数据实际上也是根据屏幕电容值变化数据确定的,屏幕坐标值变化数据能够表征用户点击触摸屏时的点击位置,以及用户滑动触摸屏时的滑动方向和滑动距离。
相应地,终端设备获取到屏幕电容值变化数据和屏幕坐标值变化数据后,即可根据其确定用户当前触控终端设备的触控方式;例如,根据屏幕电容值变化数据的变化幅度,可以确定用户当前的触控方式为重度点击或是轻度点击,根据屏幕电容值变化数据的变化频率,可以确定用户当前的触控方式是否为频繁点击,根据屏幕坐标值变化数据所表征的滑动轨迹,可以确定用户当前的触控方式为大范围滑动或是小范围滑动,根据屏幕坐标值变化数据的变化频率,可以确定用户当前的触控方式是否为频繁滑动。当然,终端设备还可以根据触控数据相应地确定出其他触控方式,上述触控方式仅为示例,在此不对触控方式做具体限定。
应理解,对于其他类型的触摸屏来说,用户触控触摸屏将相应地产生其他触控数据,例如,对于电阻屏来说,用户触控电阻屏会相应地产生屏幕电阻值变化数据和屏幕坐标值变化数据,根据这些数据均可相应地确定用户当前的触控方式,在此也不对触控数据的具体类型做任何限定。
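在模型应用阶段，终端设备需要把采集到的触控数据整理成与训练阶段一致的输入特征，下面的Python代码是一个假设性的特征提取草图，其中的字段名与特征设计均为示例，并非本申请限定的实现方式。

```python
def extract_touch_features(events):
    """把一段时间内采集到的触控数据转换为情绪识别模型的输入特征向量。
    events：[{"amplitude": 电容变化幅度, "timestamp": 时间戳/秒, "swipe_distance": 滑动距离/像素}, ...]
    返回 [平均按压幅度, 触控频率, 平均滑动距离]，需与训练阶段使用的特征保持一致。
    """
    if not events:
        return [0.0, 0.0, 0.0]

    duration = max(e["timestamp"] for e in events) - min(e["timestamp"] for e in events)
    rate = len(events) / duration if duration > 0 else float(len(events))
    mean_amplitude = sum(e["amplitude"] for e in events) / len(events)
    mean_swipe = sum(e["swipe_distance"] for e in events) / len(events)
    return [mean_amplitude, rate, mean_swipe]
```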
步骤302：利用情绪识别模型确定所述触控方式对应的情绪状态，作为用户当前的情绪状态；所述情绪识别模型是执行图2所示的模型训练方法训练得到的。
终端设备获取到触控方式后,将所获取的触控方式输入至终端设备中运行的情绪识别模型,利用该情绪识别模型对所获取的触控方式进行分析处理,进而输出该触控方式对应的情绪状态,作为用户当前的情绪状态。
需要说明的是,上述情绪识别模型即为经图2所示的模型训练方法训练得到的模型,该模型在训练过程中,基于用户操控该终端设备时的触控数据和触控数据对应的情绪状态,训练得到有针对性地适用于该终端设备的情绪识别模型,该情绪识别模型能够准确地根据用户操控终端设备时的触控方式,确定用户的情绪状态。
应理解,情绪识别模型所能识别的情绪状态,取决于训练该情绪识别模型时所采用的训练样本;而训练样本中所包括的触控方式是用户操控该终端设备时的触控方式,训练样本中所包括的情绪状态是用户使用该终端设备时的情绪状态,即该训练样本完全是基于该终端设备的用户的触控方式和其表现出的情绪状态生成的。相应地,利用该训练样本训练得到的情绪识别模型,能够准确地根据该用户操控终端设备时的触控方式,确定用户当前的情绪状态,即利用该训练样本训练得到的情绪识别模型,能够敏感地根据该用户惯用的触控方式,识别出其所对应的情绪状态。
利用情绪识别模型识别出用户当前的情绪状态后,终端设备即可相应地根据所识别出的用户当前的情绪状态,为用户进一步提供个性化服务,以提高终端设备为用户带来的用户体验。
在一种可能的实现方式中,终端设备可以在其自身显示桌面界面的情况下,切换桌面界面的显示样式;例如,切换桌面界面的显示主题、显示壁纸、显示字体等。
例如，当终端设备获取到用户的触控方式为频繁地滑动触摸屏时，将该触控方式输入情绪识别模型，情绪识别模型可能确定该触控方式对应的情绪状态为烦躁；此时，若终端设备显示的界面为桌面界面，终端设备则可以相应地将桌面的壁纸切换为较为明亮、令人愉悦的图片，或者，终端设备也可以更换显示主题和/或显示字体，以为用户带来愉悦的观感体验。
当然,终端设备也可以根据用户当前的情绪状态,对桌面界面上其他的显示样式进行更改,在此不对所能更改的显示样式做任何限定。
在另一种可能的实现方式中,终端设备可以在自身开启应用程序的情况下,通过该应用程序为用户推荐相关内容。
例如,假设终端设备当前开启的应用程序为音乐播放程序,相应地,若情绪识别模型根据用户的触控方式,确定用户当前的情绪状态为低落,则该音乐播放程序可以为用户推荐一些欢快的音乐,以缓解用户当前低落的情绪;或者,假设终端设备当前开启的应用程序为视频播放程序,相应地,若情绪识别模型根据用户的触控方式,确定用户当前的情绪状态为难过,则该视频播放程序可以为用户推荐一些搞笑的视频,以调节用户当前难过的情绪。当然,终端设备还可以通过其他应用程序,根据用户当前的情绪状态,相应地为用户推荐相关文字内容,例如,为用户推荐相关文章、笑话等。
在此不对能够根据用户情绪状态推荐相关内容的应用程序做任何限定,也不对应用程序所推荐的相关内容做具体限定。
应理解,除了上述两种可能的实现方式外,终端设备还可以根据实际情况,相应地采取其他方式,根据用户当前的情绪状态为其提供合理的个性化服务,例如,推荐用户进行相关可以缓解情绪的操作等等,在此不对终端设备所能提供的个性化服务做具体限定。
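上述两种个性化服务可以示意为如下的分发逻辑；其中context及其set_wallpaper、recommend等接口均为假设性的示例接口，仅用于说明"根据识别出的情绪状态选择相应服务"的思路。

```python
def handle_emotion(emotion, context):
    """根据情绪识别模型输出的情绪状态，为用户提供相应的个性化服务。
    context.foreground 表示当前前台界面或应用（假设性字段）。
    """
    if context.foreground == "桌面界面":
        if emotion in ("烦躁", "难过"):
            context.set_wallpaper("明亮愉悦的壁纸")      # 切换桌面界面的显示样式
    elif context.foreground == "音乐播放程序":
        if emotion in ("低落", "难过"):
            context.recommend("欢快的音乐")              # 通过应用程序推荐相关内容
    elif context.foreground == "视频播放程序":
        if emotion == "难过":
            context.recommend("搞笑的视频")
```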
在本申请实施例提供的情绪识别方法中,终端设备利用针对自身训练得到的情绪识别模型,根据用户操控自身时的触控方式,确定用户当前的情绪状态。相比现有技术中常用的情绪识别方法,该方法能够利用情绪识别模型,有针对性地根据用户操控终端设备时的触控方式确定用户的情绪状态,保证所确定的情绪状态的准确性;并且,该方法在确定用户情绪状态的过程中,无需任何额外的外接设备,真正意义上实现了提高用户体验的目的。
针对上文描述的模型训练方法,本申请还提供了对应的模型训练装置,以使上述模型训练方法在实际中的应用以及实现。
参见图4,图4为本申请实施例提供的一种模型训练装置的结构示意图;如图4所示,该模型训练装置包括:
训练样本获取模块401,用于获取用户操控终端设备时的触控方式,标记所述触控方式对应的情绪状态;将所述触控方式以及所述触控方式对应的情绪状态,作为训练样本;
模型训练模块402,用于采用机器学习算法,利用所述训练样本对分类模型进行训练,得到情绪识别模型;所述情绪识别模型以用户操控所述终端设备时的触控方式为输入,以该触控方式对应的情绪状态为输出。
具体实现时,训练样本获取模块401具体可以用于执行步骤201中的方法,具体请参考图2所示的方法实施例中对步骤201部分的描述;模型训练模块402具体可以用于执行步骤202中的方法,具体请参考图2所示的方法实施例中对步骤202部分的描述,此处不再赘述。
可选的,所述训练样本获取模块401具体用于:
根据所述触控方式对应的触发时间,确定参考时间区间;
获取所述参考时间区间内用户操作所述终端设备生成的操作数据内容;
根据所述操作数据内容确定用户的情绪状态,作为所述触控方式对应的情绪状态。
具体实现时,训练样本获取模块401可以参考图2所示的实施例中关于确定触控方式对应的情绪状态的相关内容的描述。
可选的,所述训练样本获取模块401具体用于:
调用预置的情绪状态映射关系表;所述情绪状态映射关系表中记录有触控方式与情绪状态之间的对应关系;
查找所述情绪状态映射关系表,确定所述触控方式对应的情绪状态。
具体实现时,训练样本获取模块401可以参考图2所示的实施例中关于确定触控方式对应的情绪状态的相关内容的描述。
可选的,所述训练样本获取模块401具体用于:
在预设时间段内,采集用户操控所述终端设备产生的触控数据;
对所述触控数据做聚类处理生成触控数据集合,确定所述触控数据集合对应的触控方式;
将包括触控数据最多的触控数据集合作为目标触控数据集合,将所述目标触控数据集合对应的触控方式作为目标触控方式;标记所述目标触控方式对应的情绪状态;
将所述目标触控方式以及所述目标触控方式对应的情绪状态,作为训练样本。
具体实现时,训练样本获取模块401可以参考图2所示的实施例中关于确定触控方式对应的情绪状态的相关内容的描述。
可选的,所述触控数据包括:屏幕电容值变化数据及坐标值变化数据。
可选的,所述装置还包括:
优化训练样本获取模块,用于获取用户操控所述终端设备时的触控方式,作为优化触控方式;标记所述优化触控方式对应的情绪状态;将所述优化触控方式和所述优化触控方式对应的情绪状态,作为优化训练样本;所述优化训练样本用于对所述情绪识别模型进行优化训练。
具体实现时,优化训练样本获取模块可以参考图2所示的实施例中关于获取优化训练样本的相关内容的描述。
可选的,所述装置还包括:
反馈信息获取模块,用于获取用户针对所述情绪识别模型的反馈信息;所述反馈信息用于表征所述情绪识别模型的性能是否满足用户需求;
第一优化训练模块,用于在所述反馈信息表征所述情绪识别模型的性能不满足用户需求时,利用所述优化训练样本对所述情绪识别模型进行优化训练。
具体实现时,反馈信息获取模块以及第一优化训练模块具体可以参考图2所示的实施例中关于对情绪识别模型进行优化训练的相关内容的描述。
可选的,所述装置还包括:
第二优化训练模块,用于在所述终端设备处于充电状态时,和/或,在所述终端设备的剩余电量高于预设电量时,和/或,在所述终端设备处于空闲状态的时长超过预设时长时,利用所述优化训练样本对所述情绪识别模型进行优化训练。
具体实现时，第二优化训练模块具体可以参考图2所示的实施例中关于对情绪识别模型进行优化训练的相关内容的描述。
上述本申请实施例提供的模型训练装置会针对不同的终端设备,相应地基于该终端设备的使用者操控该终端设备时的触控方式以及触控方式对应的情绪状态,采用机器学习算法训练出有针对性地适用于该终端设备的情绪识别模型;如此,在终端设备上应用针对其训练得到的情绪识别模型,可以保证该情绪识别模型能够准确地根据该终端设备的使用者操控该终端设备时的触控方式,确定使用者的情绪状态。
针对上文描述的情绪识别方法,本申请还提供了对应的情绪识别装置,以使上述情绪识别方法在实际中的应用以及实现。
参见图5,图5为本申请实施例提供的一种情绪识别装置的结构示意图;如图5所示,该情绪识别装置包括:
触控方式获取模块501,用于获取用户操控终端设备时的触控方式;
情绪状态识别模块502，用于利用情绪识别模型确定所述触控方式对应的情绪状态，作为用户当前的情绪状态；所述情绪识别模型是执行图2所述的模型训练方法训练得到的。
具体实现时,触控方式获取模块501具体可以用于执行步骤301中的方法,具体请参考图3所示的方法实施例中对步骤301部分的描述;情绪状态识别模块502具体可以用于执行步骤302中的方法,具体请参考图3所示的方法实施例中对步骤302部分的描述,此处不再赘述。
可选的,所述装置还包括:
显示样式切换模块,用于在所述终端设备显示桌面界面的情况下,根据所述用户当前的情绪状态,切换桌面界面的显示样式。
具体实现时,显示样式切换模块具体可以参考图3所示的实施例中关于切换桌面界面显示样式的相关内容的描述。
可选的,所述装置还包括:
内容推荐模块,用于在所述终端设备开启应用程序的情况下,根据所述用户当前的情绪状态,通过所述应用程序推荐相关内容。
具体实现时,内容推荐模块具体可以参考图3所示的实施例中关于通过应用程序推荐相关内容的描述。
在本申请实施例提供的情绪识别装置中,终端设备利用针对自身训练得到的情绪识别模型,根据用户操控自身时的触控方式,确定用户当前的情绪状态。该装置能够利用情绪识别模型,有针对性地根据用户操控终端设备时的触控方式确定用户的情绪状态,保证所确定的情绪状态的准确性;并且,该装置在确定用户情绪状态的过程中,无需任何额外的外接设备,真正意义上实现了提高用户体验的目的。
本申请还提供了一种用于训练模型的服务器;参见图6,图6是本申请实施例提供的一种用于训练模型的服务器结构示意图,该服务器600可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上中央处理器(central processing units,CPU)622(例如,一个或一个以上处理器)和存储器632,一个或一个以上存储应用程序642或数据644的存储介质630(例如一个或一个以上海量存储设备)。其中,存储器632和存储介质630可以是短暂存储或持久存储。存储在存储介质630的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对服务器中的一系列指令操作。更进一步地,中央处理器622可以设置为与存储介质630通信,在服务器600上执行存储介质630中的一系列指令操作。
服务器600还可以包括一个或一个以上电源626,一个或一个以上有线或无线网络接口650,一个或一个以上输入输出接口658,和/或,一个或一个以上操作系统641,例如Windows ServerTM,Mac OS XTM,UnixTM, LinuxTM,FreeBSDTM等等。
上述实施例中由服务器所执行的步骤可以基于该图6所示的服务器结构。
其中,CPU 622用于执行如下步骤:
获取用户操控终端设备时的触控方式,标记所述触控方式对应的情绪状态;将所述触控方式以及所述触控方式对应的情绪状态,作为训练样本;
采用机器学习算法，利用所述训练样本对分类模型进行训练，得到情绪识别模型；所述情绪识别模型以用户操控所述终端设备时的触控方式为输入，以该触控方式对应的情绪状态为输出。
可选的,CPU622还可以执行本申请实施例中模型训练方法任一具体实现方式的方法步骤。
需要说明的是,采用图6所示的服务器训练情绪识别模型时,服务器需要与终端设备进行通讯,以从终端设备处获取训练样本,应理解,来自不同的终端设备的训练样本应该相应地配置其对应的终端设备的标识,以便服务器的CPU622可以利用来自同一终端设备的训练样本,采用本申请实施例提供的模型训练方法训练适用于该终端设备的情绪识别模型。
本申请实施例还提供了另一种用于训练模型以及识别情绪的电子设备(该电子设备可以为上文所述的终端设备),用于执行本申请实施例提供的模型训练方法,训练适用于自身的情绪识别模型;和/或,执行本申请实施例提供的情绪识别方法,利用所训练的情绪识别模型,根据用户操控自身的触控方式,相应地识别用户当前的情绪状态。
图7示出了上述电子设备100的结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器，用于存储指令和数据。在一些实施例中，处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据，可从所述存储器中直接调用。避免了重复存取，减少了处理器110的等待时间，因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线，包括一根串行数据线（serial data line，SDA）和一根串行时钟线（serial clock line，SCL）。在一些实施例中，处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K，充电器，闪光灯，摄像头193等。例如：处理器110可以通过I2C接口耦合触摸传感器180K，使处理器110与触摸传感器180K通过I2C总线接口通信，实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口，具体可以是Mini USB接口，Micro USB接口，USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电，也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机，通过耳机播放音频。该接口还可以用于连接其他电子设备，例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT), 全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system ,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏（liquid crystal display，LCD），有机发光二极管（organic light-emitting diode，OLED），有源矩阵有机发光二极体或主动矩阵有机发光二极体（active-matrix organic light emitting diode，AMOLED），柔性发光二极管（flex light-emitting diode，FLED），Miniled，MicroLed，Micro-oLed，量子点发光二极管（quantum dot light emitting diodes，QLED）等。在一些实施例中，电子设备100可以包括1个或N个显示屏194，N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP 用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件（charge coupled device，CCD）或互补金属氧化物半导体（complementary metal-oxide-semiconductor，CMOS）光电晶体管。感光元件把光信号转换成电信号，之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB，YUV等格式的图像信号。在一些实施例中，电子设备100可以包括1个或N个摄像头193，N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触控操作。触摸传感器可以将检测到的触控操作传递给应用处理器,以确定触控方式。可以通过显示屏194提供与触控操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195，或从SIM卡接口195拔出，实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口，N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡，Micro SIM卡，SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同，也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互，实现通话以及数据通信等功能。在一些实施例中，电子设备100采用eSIM，即：嵌入式SIM卡。eSIM卡可以嵌在电子设备100中，不能和电子设备100分离。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图8是本发明实施例的电子设备100的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图8所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图8所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理，堆栈管理，线程管理，安全和异常的管理，以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合捕获拍照场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头193捕获静态图像或视频。
本申请实施例还提供一种计算机可读存储介质,用于存储程序代码,该程序代码用于执行前述各个实施例所述的模型训练方法中的任意一种实施方式,和/或情绪识别方法中的任意一种实施方式。
本申请实施例还提供一种包括指令的计算机程序产品,当其在计算机上运行时,使得计算机执行前述各个实施例所述的模型训练方法中的任意一种实施方式,和/或情绪识别方法中的任意一种实施方式。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(英文全称:Read-Only Memory,英文缩写:ROM)、随机存取存储器(英文全称:Random Access Memory,英文缩写:RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (24)

  1. 一种模型训练方法,其特征在于,所述方法包括:
    获取用户操控终端设备时的触控方式,标记所述触控方式对应的情绪状态;将所述触控方式以及所述触控方式对应的情绪状态,作为训练样本;
    采用机器学习算法,利用所述训练样本对分类模型进行训练,得到情绪识别模型;所述情绪识别模型以用户操控所述终端设备时的触控方式为输入,以该触控方式对应的情绪状态为输出。
  2. 根据权利要求1所述的方法,其特征在于,所述标记所述触控方式对应的情绪状态,包括:
    根据所述触控方式对应的触发时间,确定参考时间区间;
    获取所述参考时间区间内用户操作所述终端设备生成的操作数据内容;
    根据所述操作数据内容确定用户的情绪状态,作为所述触控方式对应的情绪状态。
  3. 根据权利要求1所述的方法,其特征在于,所述标记所述触控方式对应的情绪状态,包括:
    调用预置的情绪状态映射关系表;所述情绪状态映射关系表中记录有触控方式与情绪状态之间的对应关系;
    查找所述情绪状态映射关系表,确定所述触控方式对应的情绪状态。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,所述获取用户操控终端设备时的触控方式,标记所述触控方式对应的情绪状态;将所述触控方式以及所述触控方式对应的情绪状态,作为训练样本,包括:
    在预设时间段内,采集用户操控所述终端设备产生的触控数据;
    对所述触控数据做聚类处理生成触控数据集合,确定所述触控数据集合对应的触控方式;
    将包括触控数据最多的触控数据集合作为目标触控数据集合,将所述目标触控数据集合对应的触控方式作为目标触控方式;标记所述目标触控方式对应的情绪状态;
    将所述目标触控方式以及所述目标触控方式对应的情绪状态,作为训练样本。
  5. 根据权利要求4所述的方法,其特征在于,所述触控数据包括:屏幕电容值变化数据及坐标值变化数据。
  6. 根据权利要求1所述的方法,其特征在于,在得到所述得到情绪识别模型之后,所述方法还包括:
    获取用户操控所述终端设备时的触控方式,作为优化触控方式;标记所述优化触控方式对应的情绪状态;将所述优化触控方式和所述优化触控方式对应的情绪状态,作为优化训练样本;所述优化训练样本用于对所述情绪识别模型进行优化训练。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    获取用户针对所述情绪识别模型的反馈信息;所述反馈信息用于表征所述情绪识别模型的性能是否满足用户需求;
    在所述反馈信息表征所述情绪识别模型的性能不满足用户需求时，利用所述优化训练样本对所述情绪识别模型进行优化训练。
  8. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    在所述终端设备处于充电状态时,和/或,在所述终端设备的剩余电量高于预设电量时,和/或,在所述终端设备处于空闲状态的时长超过预设时长时,利用所述优化训练样本对所述情绪识别模型进行优化训练。
  9. 一种情绪识别方法,其特征在于,所述方法包括:
    获取用户操控终端设备时的触控方式;
    利用情绪识别模型确定所述触控方式对应的情绪状态,作为用户当前的情绪状态;所述情绪识别模型是执行权利要求1至8任一项所述的模型训练方法训练得到的。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    在所述终端设备显示桌面界面的情况下,根据所述用户当前的情绪状态,切换所述桌面界面的显示样式。
  11. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    在所述终端设备开启应用程序的情况下,根据所述用户当前的情绪状态,通过所述应用程序推荐相关内容。
  12. 一种模型训练装置,其特征在于,所述装置包括:
    训练样本获取模块,用于获取用户操控终端设备时的触控方式,标记所述触控方式对应的情绪状态;将所述触控方式以及所述触控方式对应的情绪状态,作为训练样本;
    模型训练模块,用于采用机器学习算法,利用所述训练样本对分类模型进行训练,得到情绪识别模型;所述情绪识别模型以用户操控所述终端设备时的触控方式为输入,以该触控方式对应的情绪状态为输出。
  13. 根据权利要求12所述的装置,其特征在于,所述训练样本获取模块具体用于:
    根据所述触控方式对应的触发时间,确定参考时间区间;
    获取所述参考时间区间内用户操作所述终端设备生成的操作数据内容;
    根据所述操作数据内容确定用户的情绪状态,作为所述触控方式对应的情绪状态。
  14. 根据权利要求12所述的装置,其特征在于,所述训练样本获取模块具体用于:
    调用预置的情绪状态映射关系表;所述情绪状态映射关系表中记录有触控方式与情绪状态之间的对应关系;
    查找所述情绪状态映射关系表,确定所述触控方式对应的情绪状态。
  15. 根据权利要求12至14任一项所述的装置,其特征在于,所述训练样本获取模块具体用于:
    在预设时间段内,采集用户操控所述终端设备产生的触控数据;
    对所述触控数据做聚类处理生成触控数据集合,确定所述触控数据集合对应的触控方式;
    将包括触控数据最多的触控数据集合作为目标触控数据集合,将所述目标触控数据集合对应的触控方式作为目标触控方式;标记所述目标触控方式对应的情绪状态;
    将所述目标触控方式以及所述目标触控方式对应的情绪状态,作为训练样本。
  16. 根据权利要求15所述的装置，其特征在于，所述触控数据包括：屏幕电容值变化数据及坐标值变化数据。
  17. 根据权利要求12所述的装置,其特征在于,所述装置还包括:
    优化训练样本获取模块,用于获取用户操控所述终端设备时的触控方式,作为优化触控方式;标记所述优化触控方式对应的情绪状态;将所述优化触控方式和所述优化触控方式对应的情绪状态,作为优化训练样本;所述优化训练样本用于对所述情绪识别模型进行优化训练。
  18. 根据权利要求17所述的装置,其特征在于,所述装置还包括:
    反馈信息获取模块,用于获取用户针对所述情绪识别模型的反馈信息;所述反馈信息用于表征所述情绪识别模型的性能是否满足用户需求;
    第一优化训练模块,用于在所述反馈信息表征所述情绪识别模型的性能不满足用户需求时,利用所述优化训练样本对所述情绪识别模型进行优化训练。
  19. 根据权利要求17所述的装置,其特征在于,所述装置还包括:
    第二优化训练模块,用于在所述终端设备处于充电状态时,和/或,在所述终端设备的剩余电量高于预设电量时,和/或,在所述终端设备处于空闲状态的时长超过预设时长时,利用所述优化训练样本对所述情绪识别模型进行优化训练。
  20. 一种情绪识别装置,其特征在于,所述装置包括:
    触控方式获取模块,用于获取用户操控终端设备时的触控方式;
    情绪状态识别模块,用于利用情绪识别模型确定所述触控方式对应的情绪状态,作为用户当前的情绪状态;所述情绪识别模型是执行权利要求1至8任一项所述的模型训练方法训练得到的。
  21. 根据权利要求20所述的装置,其特征在于,所述装置还包括:
    显示样式切换模块,用于在所述终端设备显示桌面界面的情况下,根据所述用户当前的情绪状态,切换桌面界面的显示样式。
  22. 根据权利要求20所述的装置,其特征在于,所述装置还包括:
    内容推荐模块,用于在所述终端设备开启应用程序的情况下,根据所述用户当前的情绪状态,通过所述应用程序推荐相关内容。
  23. 一种电子设备，其特征在于，所述电子设备包括处理器以及存储器：
    所述存储器用于存储程序代码,并将所述程序代码传输给所述处理器;
    所述处理器用于根据所述程序代码中的指令执行权利要求1至8任一项所述的模型训练方法,和/或,执行权利要求9至11任一项所述的情绪识别方法。
  24. 一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行如权利要求1至8任一项所述的模型训练方法,和/或,执行权利要求9至11任一项所述的情绪识别方法。
PCT/CN2020/084216 2019-04-17 2020-04-10 模型训练方法、情绪识别方法及相关装置和设备 WO2020211701A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910309245.5 2019-04-17
CN201910309245.5A CN110134316B (zh) 2019-04-17 2019-04-17 模型训练方法、情绪识别方法及相关装置和设备

Publications (1)

Publication Number Publication Date
WO2020211701A1 true WO2020211701A1 (zh) 2020-10-22

Family

ID=67570305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/084216 WO2020211701A1 (zh) 2019-04-17 2020-04-10 模型训练方法、情绪识别方法及相关装置和设备

Country Status (2)

Country Link
CN (1) CN110134316B (zh)
WO (1) WO2020211701A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744738A (zh) * 2021-09-10 2021-12-03 安徽淘云科技股份有限公司 一种人机交互方法及其相关设备
CN114363049A (zh) * 2021-12-30 2022-04-15 武汉杰创达科技有限公司 基于个性化交互差异的物联设备多id识别方法
CN115496113A (zh) * 2022-11-17 2022-12-20 深圳市中大信通科技有限公司 一种基于智能算法的情绪行为分析方法
EP4202614A1 (en) * 2021-07-07 2023-06-28 Honor Device Co., Ltd. Method for adjusting touch panel sampling rate, and electronic device

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134316B (zh) * 2019-04-17 2021-12-24 华为技术有限公司 模型训练方法、情绪识别方法及相关装置和设备
CN114223139B (zh) * 2019-10-29 2023-11-24 深圳市欢太科技有限公司 界面切换方法、装置、可穿戴电子设备及存储介质
CN111166290A (zh) * 2020-01-06 2020-05-19 华为技术有限公司 一种健康状态检测方法、设备和计算机存储介质
CN111530081B (zh) * 2020-04-17 2023-07-25 成都数字天空科技有限公司 游戏关卡设计方法、装置、存储介质及电子设备
CN111626191B (zh) * 2020-05-26 2023-06-30 深圳地平线机器人科技有限公司 模型生成方法、装置、计算机可读存储介质及电子设备
CN112906555B (zh) * 2021-02-10 2022-08-05 华南师范大学 因人而异地识别表情的人工智能心理机器人和方法
CN113656635B (zh) * 2021-09-03 2024-04-09 咪咕音乐有限公司 视频彩铃合成方法、装置、设备及计算机可读存储介质
CN113791690B (zh) * 2021-09-22 2024-03-29 入微智能科技(南京)有限公司 一种带有实时情绪识别功能的人机交互公共设备
CN116662638B (zh) * 2022-09-06 2024-04-12 荣耀终端有限公司 数据采集方法及相关装置
CN115611393B (zh) * 2022-11-07 2023-04-07 中节能晶和智慧城市科技(浙江)有限公司 一种多端协同的多水厂混凝剂投放方法和系统
CN115457645B (zh) * 2022-11-11 2023-03-24 青岛网信信息科技有限公司 一种基于交互验证的用户情绪分析方法、介质及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105549885A (zh) * 2015-12-10 2016-05-04 重庆邮电大学 滑屏操控中用户情绪的识别方法和装置
CN106055236A (zh) * 2016-05-30 2016-10-26 努比亚技术有限公司 一种内容推送方法及终端
US20170160813A1 (en) * 2015-12-07 2017-06-08 Sri International Vpa with integrated object recognition and facial expression recognition
CN107608956A (zh) * 2017-09-05 2018-01-19 广东石油化工学院 一种基于cnn‑grnn的读者情绪分布预测算法
CN108073336A (zh) * 2016-11-18 2018-05-25 香港中文大学 基于触摸的用户情绪检测系统和方法
CN110134316A (zh) * 2019-04-17 2019-08-16 华为技术有限公司 模型训练方法、情绪识别方法及相关装置和设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103926997A (zh) * 2013-01-11 2014-07-16 北京三星通信技术研究有限公司 一种基于用户的输入确定情绪信息的方法和终端
US10127927B2 (en) * 2014-07-28 2018-11-13 Sony Interactive Entertainment Inc. Emotional speech processing
CN106528538A (zh) * 2016-12-07 2017-03-22 竹间智能科技(上海)有限公司 智能识别情绪的方法及装置
CN111459290B (zh) * 2018-01-26 2023-09-19 上海智臻智能网络科技股份有限公司 交互意图确定方法及装置、计算机设备及存储介质
CN108334583B (zh) * 2018-01-26 2021-07-09 上海智臻智能网络科技股份有限公司 情感交互方法及装置、计算机可读存储介质、计算机设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170160813A1 (en) * 2015-12-07 2017-06-08 Sri International Vpa with integrated object recognition and facial expression recognition
CN105549885A (zh) * 2015-12-10 2016-05-04 重庆邮电大学 滑屏操控中用户情绪的识别方法和装置
CN106055236A (zh) * 2016-05-30 2016-10-26 努比亚技术有限公司 一种内容推送方法及终端
CN108073336A (zh) * 2016-11-18 2018-05-25 香港中文大学 基于触摸的用户情绪检测系统和方法
CN107608956A (zh) * 2017-09-05 2018-01-19 广东石油化工学院 一种基于cnn‑grnn的读者情绪分布预测算法
CN110134316A (zh) * 2019-04-17 2019-08-16 华为技术有限公司 模型训练方法、情绪识别方法及相关装置和设备

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4202614A1 (en) * 2021-07-07 2023-06-28 Honor Device Co., Ltd. Method for adjusting touch panel sampling rate, and electronic device
CN113744738A (zh) * 2021-09-10 2021-12-03 安徽淘云科技股份有限公司 一种人机交互方法及其相关设备
CN113744738B (zh) * 2021-09-10 2024-03-19 安徽淘云科技股份有限公司 一种人机交互方法及其相关设备
CN114363049A (zh) * 2021-12-30 2022-04-15 武汉杰创达科技有限公司 基于个性化交互差异的物联设备多id识别方法
CN115496113A (zh) * 2022-11-17 2022-12-20 深圳市中大信通科技有限公司 一种基于智能算法的情绪行为分析方法
CN115496113B (zh) * 2022-11-17 2023-04-07 深圳市中大信通科技有限公司 一种基于智能算法的情绪行为分析方法

Also Published As

Publication number Publication date
CN110134316B (zh) 2021-12-24
CN110134316A (zh) 2019-08-16

Similar Documents

Publication Publication Date Title
WO2020211701A1 (zh) 模型训练方法、情绪识别方法及相关装置和设备
KR102470275B1 (ko) 음성 제어 방법 및 전자 장치
WO2021052263A1 (zh) 语音助手显示方法及装置
CN110910872B (zh) 语音交互方法及装置
WO2020259452A1 (zh) 一种移动终端的全屏显示方法及设备
EP3893129A1 (en) Recommendation method based on user exercise state, and electronic device
CN113645351B (zh) 应用界面交互方法、电子设备和计算机可读存储介质
WO2021258814A1 (zh) 视频合成方法、装置、电子设备及存储介质
WO2021052139A1 (zh) 手势输入方法及电子设备
WO2022042766A1 (zh) 信息显示方法、终端设备及计算机可读存储介质
WO2021082815A1 (zh) 一种显示要素的显示方法和电子设备
WO2023273543A1 (zh) 一种文件夹管理方法及装置
WO2022007707A1 (zh) 家居设备控制方法、终端设备及计算机可读存储介质
WO2020062014A1 (zh) 一种向输入框中输入信息的方法及电子设备
CN110058729B (zh) 调节触摸检测的灵敏度的方法和电子设备
WO2023029916A1 (zh) 批注展示方法、装置、终端设备及可读存储介质
CN114995715B (zh) 悬浮球的控制方法和相关装置
WO2022242412A1 (zh) 杀应用的方法及相关设备
WO2022078116A1 (zh) 笔刷效果图生成方法、图像编辑方法、设备和存储介质
WO2022007757A1 (zh) 跨设备声纹注册方法、电子设备及存储介质
CN116450026B (zh) 用于识别触控操作的方法和系统
CN115359156B (zh) 音频播放方法、装置、设备和存储介质
WO2021129453A1 (zh) 一种截屏方法及相关设备
WO2024012346A1 (zh) 任务迁移的方法、电子设备和系统
WO2022042774A1 (zh) 头像显示方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20790861

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20790861

Country of ref document: EP

Kind code of ref document: A1