WO2021081768A1 - Interface switching method and device, wearable electronic device, and storage medium - Google Patents

Interface switching method and device, wearable electronic device, and storage medium

Info

Publication number
WO2021081768A1
WO2021081768A1 · PCT/CN2019/114076 · CN2019114076W
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
electronic device
user interface
wearable electronic
scene
Prior art date
Application number
PCT/CN2019/114076
Other languages
English (en)
French (fr)
Inventor
陈�田
Original Assignee
深圳市欢太科技有限公司
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市欢太科技有限公司, Oppo广东移动通信有限公司 filed Critical 深圳市欢太科技有限公司
Priority to CN201980099239.XA priority Critical patent/CN114223139B/zh
Priority to PCT/CN2019/114076 priority patent/WO2021081768A1/zh
Publication of WO2021081768A1 publication Critical patent/WO2021081768A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • This application relates to the technical field of mobile terminals, and more specifically, to an interface switching method, device, wearable electronic equipment, and storage medium.
  • Wearable electronic devices, such as smart watches, smart glasses, and smart bracelets, have become one of the most commonly used consumer electronic products in people's daily lives. They have been favored by more and more consumers because they are convenient to wear and can provide users with more user-friendly services. However, a traditional wearable electronic device is cumbersome to operate when the user interface is switched, which is inconvenient for the user.
  • This application proposes an interface switching method and device, a wearable electronic device, and a storage medium.
  • In a first aspect, an embodiment of the present application provides an interface switching method applied to a wearable electronic device, the wearable electronic device including a sensor for collecting behavior data. The method includes: acquiring the behavior data collected by the sensor; performing feature extraction on the behavior data to obtain behavior features; inputting the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, the preset model being pre-trained to output a recognition result of the behavior scene according to the input behavior features; and switching the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
  • In a second aspect, an embodiment of the present application provides an interface switching device applied to a wearable electronic device, the wearable electronic device including a sensor for collecting behavior data. The device includes a data acquisition module, a feature acquisition module, a scene recognition module, and an interface switching module. The data acquisition module is used to acquire the behavior data collected by the sensor; the feature acquisition module is used to perform feature extraction on the behavior data to obtain behavior features; the scene recognition module is used to input the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, the preset model being pre-trained to output a recognition result of the behavior scene according to the input behavior features; and the interface switching module is used to switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
  • In a third aspect, an embodiment of the present application provides a wearable electronic device, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to execute the interface switching method provided in the above-mentioned first aspect.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium stores program code, and the program code can be invoked by a processor to execute the interface switching method provided in the first aspect.
  • The solution provided in this application acquires the behavior data collected by the sensor of the wearable electronic device, performs feature extraction on the behavior data to obtain behavior features, and inputs the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, the preset model being pre-trained to output a recognition result of the behavior scene according to the input behavior features. The current interface displayed by the wearable electronic device is then switched to the user interface corresponding to the current behavior scene. In this way, the current behavior scene of the wearable electronic device is automatically identified according to the behavior features corresponding to the behavior data collected by the sensor, and the displayed current interface is switched to the user interface corresponding to that scene, reducing user operations in switching user interfaces and improving the user experience.
  • FIG. 1 shows a flowchart of an interface switching method according to an embodiment of the present application.
  • FIG. 2 shows a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 3 shows a schematic diagram of another interface provided by an embodiment of the present application.
  • FIG. 4 shows a schematic diagram of another interface provided by an embodiment of the present application.
  • FIG. 5 shows a flowchart of an interface switching method according to another embodiment of the present application.
  • FIG. 6 shows a flowchart of an interface switching method according to another embodiment of the present application.
  • FIG. 7 shows a flowchart of step S310 in the interface switching method provided by another embodiment of the present application.
  • FIG. 8 shows a flowchart of an interface switching method according to still another embodiment of the present application.
  • FIG. 9 shows a block diagram of an interface switching device according to an embodiment of the present application.
  • FIG. 10 shows a block diagram of a wearable electronic device for executing the interface switching method according to an embodiment of the present application.
  • FIG. 11 shows a storage unit, according to an embodiment of the present application, for storing or carrying program code that implements the interface switching method of the embodiments of the present application.
  • Wearable electronic devices, such as smart watches, are no longer limited to displaying the time; they can also display other information, such as weather information, health information, and prompt information. The styles of a wearable electronic device's user interface (such as the watch dial) therefore differ, and different users have different requirements for the style of the user interface, so a single user interface style cannot meet every user's needs. A wearable electronic device therefore usually stores user interfaces in multiple styles.
  • The inventor proposes the interface switching method, device, wearable electronic device, and storage medium provided by the embodiments of the present application. Based on the behavior features corresponding to the behavior data collected by the sensor, the current behavior scene of the wearable electronic device is automatically recognized, and the displayed current interface is switched to the user interface corresponding to the current behavior scene, reducing the user's operations in switching the user interface and improving the user experience.
  • the specific interface switching method will be described in detail in the subsequent embodiments.
  • FIG. 1 shows a schematic flowchart of an interface switching method provided by an embodiment of the present application.
  • The interface switching method automatically identifies the current behavior scene of the wearable electronic device according to the behavior features corresponding to the behavior data collected by the sensor, and switches the displayed current interface to the user interface corresponding to the current behavior scene, reducing the user's operations in switching the user interface and enhancing the user experience.
  • the interface switching method is applied to the interface switching device 400 as shown in FIG. 9 and the wearable electronic device 100 equipped with the interface switching device 400 (FIG. 10 ).
  • the wearable electronic device may include sensors for collecting behavior data, and the sensors may include acceleration sensors, gyroscope sensors, gravity sensors, heart rate sensors, brain wave sensors, positioning sensors, infrared sensors, etc., which are not limited here.
  • The following will take a wearable electronic device as an example to describe the specific process of this embodiment.
  • the wearable electronic device applied in this embodiment can be a smart watch, a smart bracelet, etc., which is not limited here.
  • the following will elaborate on the process shown in FIG. 1, and the interface switching method may specifically include the following steps:
  • Step S110 Obtain behavior data collected by the sensor.
  • Various sensors for collecting behavior data may be provided in the wearable electronic device, such as acceleration sensors, gyroscope sensors, gravity sensors, heart rate sensors, brain wave sensors, positioning sensors, and infrared sensors.
  • behavior data may refer to data used to characterize user behavior.
  • User behaviors may include different types, such as walking, standing still, running, squatting, hand movement, and head shaking; the specific user behavior is not limited. Under different user behaviors, the behavior data collected by these sensors can differ, and the behavior data can reflect the characteristics of certain behavior scenes. For example, the scene of a user walking along the route to a home location can be reflected through positioning data, motion data, biometric data, environmental data, and time data. Therefore, the behavior data collected by the sensor can be used to identify the behavior scene of the wearable electronic device.
  • the wearable electronic device can acquire the behavior data collected by the sensor used to collect behavior data when recognizing the behavior scene.
  • The behavior data acquired by the wearable electronic device may include behavior data collected by multiple sensors. For example, all the behavior data collected by the sensors capable of collecting behavior data can be acquired, or only the behavior data collected by some of the sensors can be acquired; this is not limited here. It is understandable that the more types of behavior data acquired (that is, the more types of sensors collecting behavior data), the more features are available for identifying behavior scenes and the higher the feature dimension, which can improve the accuracy of behavior scene recognition.
  • The wearable electronic device can obtain the behavior data collected by the sensor within a preset time period, so as to subsequently recognize the behavior scene. It is understandable that using the behavior data collected over a period of time makes the subsequently recognized behavior scene more accurate.
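The "preset time period" above amounts to keeping a sliding window of recent sensor samples. The following is a minimal sketch under assumed names (`SensorWindow`, a 5-second window, and simulated accelerometer tuples are all illustrative, not from the patent):

```python
from collections import deque

class SensorWindow:
    """Hypothetical buffer keeping only the most recent `window_s` seconds
    of sensor samples, mirroring the 'preset time period' in the text."""
    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp, reading) pairs

    def push(self, timestamp, reading):
        self.samples.append((timestamp, reading))
        # Drop samples older than the window relative to the newest one.
        while self.samples and timestamp - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def readings(self):
        return [r for _, r in self.samples]

# Simulated accelerometer stream: one (x, y, z) sample every 0.5 s for 10 s.
win = SensorWindow(window_s=5.0)
for i in range(20):
    win.push(i * 0.5, (0.0, 0.0, 9.8 + 0.1 * i))
print(len(win.readings()))  # -> 11 (only the last 5 s of samples remain)
```

A real device would feed this buffer from its sensor interrupt or polling loop and hand the windowed readings to the feature-extraction step.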
  • Step S120 Perform feature extraction on the behavior data to obtain behavior features.
  • After the wearable electronic device acquires the behavior data collected by the sensor, it can perform feature extraction on the behavior data to obtain behavior features, so that behavior scene recognition can subsequently be performed based on the behavior features.
  • The extracted features can include time-series features, frequency-domain features, statistical features, and so on; the specific extracted features are not limited here.
  • The behavior data collected by the sensor may be time-series data (real-time-domain data). The wearable electronic device may perform statistical feature extraction, obtaining the median, average value, maximum value, minimum value, peak value, and so on from the behavior data detected by the sensor, thereby obtaining the statistical features of the time-series data; it may also take the values at a certain point on the time axis and at points a preset time interval before and after it, thereby obtaining time-series features. This is not limited here.
  • The wearable electronic device may also perform a fast Fourier transform on the time-series data to obtain frequency-domain data, then separate the high- and low-frequency signals, calculate the overall energy of the frequency-domain signal, and take at least some coefficients as frequency-domain features. For example, after a fast Fourier transform is performed on the curve of acceleration over time, a curve of acceleration over frequency can be obtained, and the frequency-domain features can be calculated from that curve.
  • The specific method of extracting the behavior features corresponding to the behavior data is not limited here.
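The statistical and FFT-based steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the feature names and the choice of NumPy's real FFT are assumptions:

```python
import numpy as np

def extract_features(signal, fs):
    """Sketch: statistical (time-domain) and frequency-domain features
    from one axis of sensor data, following the steps described above."""
    sig = np.asarray(signal, dtype=float)
    feats = {
        "mean":   sig.mean(),
        "median": float(np.median(sig)),
        "max":    sig.max(),
        "min":    sig.min(),
        "peak_to_peak": sig.max() - sig.min(),
    }
    # Fast Fourier transform -> magnitude spectrum (frequency-domain data).
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    feats["dominant_freq"] = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
    feats["spectral_energy"] = float((spectrum ** 2).sum()) / sig.size
    return feats

# A 2 Hz sine sampled at 50 Hz: the dominant frequency should come out at 2 Hz.
t = np.arange(0, 2, 1 / 50)
f = extract_features(np.sin(2 * np.pi * 2 * t), fs=50)
print(round(f["dominant_freq"], 1))  # -> 2.0
```

The resulting feature dictionary (or a flattened vector of it, one set per sensor axis) is what would be fed into the recognition model in step S130.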
  • Step S130 Input the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, the preset model being pre-trained to output the recognition result of the behavior scene according to the input behavior features.
  • The obtained behavior features may be input into the trained preset model to obtain the recognition result that the preset model outputs according to the input behavior features; the recognition result includes the behavior scene, and the recognized behavior scene is used as the current behavior scene.
  • the preset model may be trained from a large number of training samples.
  • The training samples used to train the preset model may include input samples and output samples; the input samples may include behavior features, and the output samples may include the behavior scenes corresponding to those behavior features. Therefore, the trained preset model can output a recognition result according to the input behavior features, the recognition result including the current behavior scene of the wearable electronic device.
  • The recognition result output by the trained preset model may be the scene identifier of the behavior scene, where different behavior scenes can be stored in the form of different scene identifiers, for example, a one-byte integer (i.e., 0-255) identifying each behavior scene.
  • the preset model may include Support Vector Machine (SVM), neural network, decision tree model, etc., which are not limited here.
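The patent leaves the model family open (SVM, neural network, decision tree, etc.), so a minimal stand-in suffices to show the inference step and the one-byte scene identifiers. The nearest-centroid classifier, the scene ids, and the two-dimensional features below are all illustrative assumptions:

```python
class SceneClassifier:
    """Toy stand-in for the 'trained preset model': maps a behavior-feature
    vector to a one-byte scene identifier (0-255), as the text describes.
    A real device might use an SVM, a neural network, or a decision tree."""
    def __init__(self, centroids):
        # centroids: {scene_id: feature vector}, assumed learned offline.
        self.centroids = centroids

    def predict(self, features):
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        # Return the scene id whose centroid is closest to the input features.
        return min(self.centroids, key=lambda sid: dist2(features, self.centroids[sid]))

# Hypothetical scene ids: 0 = sports, 1 = work, 2 = home.
model = SceneClassifier({
    0: [9.0, 150.0],  # high motion energy, high heart rate
    1: [1.0, 75.0],   # low motion, resting heart rate
    2: [2.0, 70.0],
})
print(model.predict([8.5, 140.0]))  # -> 0 (closest to the sports centroid)
```

Whatever the model, the interface-switching logic only needs the scene identifier it returns.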
  • Step S140 Switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
  • After the wearable electronic device recognizes the behavior scene in which it is located, it can determine the user interface corresponding to that behavior scene and switch its current interface to the user interface corresponding to the current behavior scene, realizing automatic switching of the user interface, with the switched-to user interface corresponding to the current behavior scene.
  • the wearable electronic device may pre-store user interfaces corresponding to different behavior scenarios, and the wearable electronic device can determine the user interface to be switched to according to the recognized behavior scenarios. For example, when the recognition result includes the scene identifier of the behavior scene, the user interface corresponding to the scene identifier can be read, and the current interface of the wearable electronic device can be switched to the determined user interface.
  • the user interface may include a dial interface, a home screen interface, a lock screen interface, or an application interface.
  • In this way, the wearable electronic device can automatically switch to any user interface among the dial interface, the home screen interface, the lock screen interface, and the application interface, reducing user operations; the switched-to interface corresponds to the current behavior scene, which satisfies the needs of users.
  • the user interface may be a dial interface
  • Behavior scenes may include a sports scene, a home scene, and a work scene. If the current behavior scene is the sports scene, the current dial can be switched to the dial corresponding to the sports scene; as shown in Figure 2, after the wearable electronic device switches to this dial, the dial corresponding to the sports scene can display sports-related content such as the time, step count, calories burned, and heart rate. If the current behavior scene is the work scene, the current dial can be switched to the dial corresponding to the work scene; as shown in Figure 3, after the switch, the dial corresponding to the work scene can display work-related content such as the date, time, and work schedule. If the current behavior scene is the home scene, the current dial can be switched to the dial corresponding to the home scene, which can display family-related content such as the date, time, weather, and TV program reminders.
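The scene-to-dial mapping described above is essentially a lookup from a recognized scene identifier to a pre-stored interface. A minimal sketch, with illustrative identifiers and dial names that are not from the patent:

```python
# Hypothetical scene identifiers and pre-stored dials (names are illustrative).
SPORTS, WORK, HOME = 0, 1, 2

DIALS = {
    SPORTS: {"name": "sports_dial", "shows": ["time", "steps", "calories", "heart rate"]},
    WORK:   {"name": "work_dial",   "shows": ["date", "time", "work schedule"]},
    HOME:   {"name": "home_dial",   "shows": ["date", "time", "weather", "TV reminders"]},
}

def switch_dial(current_dial, scene_id):
    """Return the dial for the recognized scene; keep the current dial
    if no interface is stored for that scene."""
    target = DIALS.get(scene_id)
    return target["name"] if target else current_dial

print(switch_dial("default_dial", SPORTS))  # -> sports_dial
print(switch_dial("default_dial", 99))      # unknown scene -> default_dial
```

Storing the correspondence as a table like this is also what makes the user-configurable mapping in step S310 straightforward: setting operations simply rewrite the table entries.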
  • In the interface switching method provided by this embodiment, feature extraction is performed on the behavior data to obtain behavior features, the behavior features are input into the trained preset model to obtain the current behavior scene in which the wearable electronic device is located, and the current interface displayed by the wearable electronic device is then switched to the user interface corresponding to the current behavior scene. This realizes automatically identifying the current behavior scene of the wearable electronic device according to the behavior features corresponding to the behavior data collected by the sensor and switching the displayed current interface to the corresponding user interface, meeting the user's needs for the user interface, reducing the user's operations in switching the user interface, and improving the user experience.
  • FIG. 5 shows a schematic flowchart of an interface switching method provided by another embodiment of the present application.
  • This interface switching method can be applied to the above-mentioned wearable electronic device.
  • the wearable electronic device includes a sensor for collecting behavioral data.
  • the process shown in FIG. 5 will be described in detail below.
  • the interface switching method may specifically include the following steps:
  • Step S210 Obtain the behavior data collected by the sensor.
  • Step S220 Perform feature extraction on the behavior data to obtain behavior features.
  • step S210 and step S220 can refer to the content of the foregoing embodiment, which will not be repeated here.
  • Step S230 Perform preprocessing on the behavior feature to obtain the preprocessed behavior feature.
  • the wearable electronic device may also perform preprocessing on the behavior characteristics before inputting the behavior characteristics into the trained preset model.
  • preprocessing the behavior feature may include: performing feature cleaning and feature mining on the behavior feature.
  • feature cleaning includes removing content in behavior features according to preset cleaning rules;
  • feature mining includes mining behavior features to form more dimensional features.
  • Performing feature cleaning on the behavior features may include: removing missing values and abnormal values from the behavior features, for example, removing incomplete data, data of the wrong type, and the like.
  • Feature cleaning can include missing-value processing. For a dimension whose proportion of missing values is less than a preset percentage, the missing values can be fitted based on the dimension's other values. If the proportion of missing values is greater than the preset percentage, the feature is an invalid feature and the dimension is removed. The preset percentage may be 35%, 40%, etc.; the specific preset percentage is not limited here.
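The missing-value rule above can be sketched as follows. The 40% threshold and the mean-based fill are illustrative choices; the patent only requires some preset percentage and some fit from the dimension's other values:

```python
def clean_feature_dimension(values, preset_pct=0.4):
    """Sketch of the missing-value rule: if the fraction of missing entries
    in a dimension exceeds `preset_pct` (e.g. 35%-40%), drop the dimension
    (return None); otherwise fit the missing entries from the dimension's
    other values (here: their mean, one simple choice)."""
    present = [v for v in values if v is not None]
    missing_ratio = 1 - len(present) / len(values)
    if missing_ratio > preset_pct:
        return None  # invalid feature dimension, removed
    fill = sum(present) / len(present)
    return [v if v is not None else fill for v in values]

print(clean_feature_dimension([1.0, None, 3.0, None, None]))  # 3/5 missing -> None
print(clean_feature_dimension([1.0, None, 3.0]))              # -> [1.0, 2.0, 3.0]
```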
  • performing feature mining on behavior features may include: using a boosted tree model to mine behavior features after feature cleaning.
  • Before inputting the behavior features into the boosting tree model, the wearable electronic device can also quantize the numerical features among the behavior features and, after quantization, form the behavior features into a vector. The quantized vector is then input into the boosting tree model, which outputs multi-dimensional feature vectors according to the input vector, so as to obtain the preprocessed behavior features.
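The feature-mining step, where a boosted tree turns a feature vector into a higher-dimensional one, can be illustrated with a toy ensemble of depth-1 trees whose leaf assignments are one-hot encoded. Everything here (the stumps, thresholds, and two-leaf trees) is a simplified assumption; a real system would pass the vector through a trained GBDT in the same way:

```python
def leaf_index(x, threshold):
    """A depth-1 'tree': return which of its two leaves the sample falls in."""
    return 0 if x <= threshold else 1

def mine_features(vec, stumps):
    """Sketch of the feature-mining step: route the quantized feature vector
    through a (toy) tree ensemble and one-hot encode the leaf each tree
    selects, yielding a higher-dimensional feature vector."""
    out = []
    for dim, thr in stumps:
        leaf = leaf_index(vec[dim], thr)
        out.extend([1 if leaf == i else 0 for i in range(2)])
    return out

# Three toy stumps over a 2-dimensional behavior-feature vector.
stumps = [(0, 0.5), (1, 2.0), (0, 1.5)]
print(mine_features([1.0, 3.0], stumps))  # 2 dims in -> 6 dims out
```

The expanded vector is then what step S240 feeds into the preset model.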
  • Step S240 Input the preprocessed behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, the preset model being pre-trained to output the recognition result of the behavior scene according to the input behavior features.
  • Step S250 Switch the current interface displayed by the wearable electronic device to a user interface corresponding to the current behavior scene.
  • step S240 and step S250 can refer to the content of the foregoing embodiment, which will not be repeated here.
  • Step S260 When a switching operation is detected, the user interface displayed by the wearable electronic device is switched to a target user interface corresponding to the switching operation.
  • It is understandable that the wearable electronic device can also detect a switching operation on the user interface it displays; when such a switching operation is detected, the current interface displayed by the wearable electronic device can be switched to the target user interface corresponding to the switching operation, so as to meet the needs of the user.
  • the switching operation on the user interface may be a shaking operation on the wearable electronic device.
  • Detecting the switching operation may include: acquiring the shaking trajectory of the wearable electronic device; and if the shaking trajectory satisfies a preset trajectory condition, determining that the switching operation is detected. It is understandable that a wearable electronic device is usually worn on the user, and when the displayed user interface is automatically switched to the user interface corresponding to the behavior scene, if the user is not satisfied with the automatically switched user interface, the user can switch the user interface by shaking the wearable electronic device according to the set rules, which also makes it convenient for the user to switch the user interface.
  • the wearable electronic device may determine the shaking trajectory of the wearable electronic device according to the angular velocity value detected by the gyroscope sensor and the like.
  • The preset trajectory condition may be a preset judgment condition for determining whether to switch the user interface; for example, the condition may be that the acquired shaking trajectory matches any one of a plurality of preset shaking trajectories.
  • step S260 may include: acquiring a target user interface corresponding to the shaking track; switching the user interface displayed by the wearable electronic device to the target user interface.
  • The wearable electronic device can pre-store the correspondence between shaking trajectories and user interfaces, where different shaking trajectories can correspond to different user interfaces. When the shaking trajectory meets the preset trajectory condition, the wearable electronic device can determine, according to the correspondence, the user interface corresponding to the shaking trajectory. Therefore, by performing shaking operations with different shaking trajectories on the smart wearable device, the user can switch the displayed user interface to the required user interface, which facilitates interface switching and improves the user experience.
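The trajectory-to-interface correspondence is again a simple lookup once the gyroscope data has been classified into a named trajectory. The trajectory names and target interfaces below are illustrative, not from the patent:

```python
# Hypothetical pre-stored correspondence between shaking trajectories and
# user interfaces, as described above (names are illustrative).
TRAJECTORY_TO_UI = {
    "shake_left_right": "work_dial",
    "shake_up_down":    "home_dial",
    "circle":           "sports_dial",
}

def handle_shake(trajectory, current_ui):
    """If the detected trajectory matches a preset one, switch to the
    corresponding target user interface; otherwise keep the current one."""
    return TRAJECTORY_TO_UI.get(trajectory, current_ui)

print(handle_shake("shake_up_down", "sports_dial"))  # -> home_dial
print(handle_shake("wobble", "sports_dial"))         # unrecognized -> unchanged
```

Classifying the raw angular-velocity stream into one of these named trajectories is the harder part in practice and is left open by the text.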
  • Step S270 If the switching operation is detected within a preset time period, and the target user interface corresponds to a target behavior scene, input the behavior features labeled with the target behavior scene into the preset model to perform correction training on the preset model.
  • the wearable electronic device may also determine whether the switching operation is an operation detected within a preset time period.
  • The preset time period can be relatively short, for example, 1 minute to 5 minutes. If the switching operation is detected within the preset time period, it means that the user interface was previously switched incorrectly based on the recognized behavior scene, that is, the previously recognized behavior scene was wrong. Therefore, the target behavior scene corresponding to the target user interface can be determined, and the behavior features (the previously recognized behavior features) labeled with the target behavior scene can be input into the preset model for correction training. That is, the behavior features serve as the input sample and the target behavior scene serves as the output sample, and the preset model is trained so as to correct it and make its output more accurate.
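The correction step above can be sketched with a small stand-in model. The nearest-centroid update, the 0.5 learning rate, and the 120-second window are all illustrative assumptions; the patent only specifies relabeling the recognized features with the target scene and retraining:

```python
class CorrectableModel:
    """Stand-in for the preset model showing the correction step: when the
    user manually switches within the preset window, the recognized behavior
    features are relabeled with the target scene and used to update the
    model (here: nudging that scene's centroid toward the features)."""
    def __init__(self, centroids, lr=0.5):
        self.centroids = {k: list(v) for k, v in centroids.items()}
        self.lr = lr

    def correct(self, features, target_scene, seconds_since_switch, preset_window=120):
        if seconds_since_switch > preset_window:
            return False  # too late: not treated as a misrecognition
        c = self.centroids[target_scene]
        for i, f in enumerate(features):
            c[i] += self.lr * (f - c[i])  # move the centroid toward the sample
        return True

m = CorrectableModel({0: [0.0, 0.0], 1: [10.0, 10.0]})
# User switched to scene 1 only 30 s after the automatic switch -> correct.
print(m.correct([8.0, 8.0], target_scene=1, seconds_since_switch=30))  # -> True
print(m.centroids[1])  # -> [9.0, 9.0]
```

For an SVM or neural network, the same relabeled (features, target scene) pair would instead be added to the training set or used for a fine-tuning step.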
  • The interface switching method provided by this embodiment extracts the behavior features from the acquired behavior data, preprocesses them to obtain the preprocessed behavior features, inputs the preprocessed behavior features into the trained preset model to obtain the current behavior scene in which the wearable electronic device is located, and then switches the displayed current interface to the user interface corresponding to the current behavior scene. When a switching operation is detected, the displayed interface is switched to the target user interface corresponding to the switching operation, and when the switching operation is detected within the preset time period, the preset model is corrected. In this way, preprocessing the behavior features can improve the accuracy of recognition, and correcting the preset model can also improve the accuracy of behavior scene recognition.
  • FIG. 6 shows a schematic flowchart of an interface switching method provided by another embodiment of the present application.
  • This interface switching method can be applied to the above-mentioned wearable electronic device.
  • the wearable electronic device includes a sensor for collecting behavioral data.
  • the process shown in FIG. 6 will be described in detail below.
  • the interface switching method may specifically include the following steps:
  • Step S310 Obtain the corresponding relationship between the behavior scene and the user interface.
  • the wearable electronic device may obtain the corresponding relationship between the behavior scene and the user interface in advance.
  • Different behavior scenes may correspond to different user interfaces, or multiple behavior scenes may correspond to the same user interface; the specific correspondence is not limited here.
  • step S310 may include:
  • Step S311 Display a setting interface, where the setting interface is used to set the corresponding relationship between the behavior scene and the user interface.
  • Displaying the setting interface may include: acquiring the user interfaces of the multiple styles currently existing in the wearable electronic device and the multiple behavior scenes that can be recognized by the preset model; and displaying a setting interface containing a plurality of first options and a plurality of second options, where the first options correspond one-to-one to the user interfaces of the multiple styles, and the second options correspond one-to-one to the multiple behavior scenes.
  • The wearable electronic device can pre-store user interfaces of multiple styles and the multiple behavior scenes that can be recognized by the preset model, and display, according to these user interfaces and behavior scenes, the setting interface including the plurality of first options and the plurality of second options, so that the user can associate a behavior scene with a user interface through the first options and the second options.
  • Step S312 According to the setting operation detected in the setting interface, the corresponding relationship between the behavior scene and the user interface is set, and the corresponding relationship is stored.
  • The wearable electronic device can detect a setting operation in the setting interface and, according to the setting operation, set the correspondence between behavior scenes and user interfaces and store it, so that when switching the user interface, the wearable electronic device can use the correspondence to determine the user interface.
  • In some implementations, the wearable electronic device may first display a first setting interface that includes scene options for selecting a behavior scene. After detecting a touch operation on the scene option corresponding to a behavior scene, it displays a second setting interface that includes interface options for selecting a user interface. After detecting a touch operation on an interface option, the behavior scene corresponding to the operated scene option is associated with the user interface corresponding to the operated interface option, so as to obtain the corresponding relationship between that behavior scene and that user interface.
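The correspondence set on the setting interface can be pictured as a small mapping from behavior scenes to user-interface styles. The sketch below is illustrative only; the class and names (`CorrespondenceStore`, `"sport"`, `"sport_dial"`) are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of storing the scene-to-interface correspondence
# chosen on the setting interface. All names are illustrative.
class CorrespondenceStore:
    def __init__(self):
        self._mapping = {}  # behavior scene -> user-interface style

    def set_correspondence(self, scene, interface_style):
        # Called when a setting operation pairs a second option (scene)
        # with a first option (interface style).
        self._mapping[scene] = interface_style

    def interface_for(self, scene):
        # Returns None when no interface was associated with the scene.
        return self._mapping.get(scene)

store = CorrespondenceStore()
store.set_correspondence("sport", "sport_dial")
store.set_correspondence("work", "work_dial")
```

The stored mapping is later consulted when the recognized behavior scene must be turned into a concrete user interface.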
  • In other implementations, step S310 may also include: receiving the corresponding relationship between the behavior scene and the user interface sent by a server, where the corresponding relationship is generated by a mobile terminal based on a detected operation associating the behavior scene with the user interface and is then sent to the server, and the mobile terminal is associated with the wearable electronic device.
  • It can be understood that in this way the user can associate the behavior scene with the user interface through a mobile phone, tablet, or other device associated with the wearable electronic device, and generate the corresponding relationship between them.
  • Since the screen of the wearable electronic device is small, this implementation makes it convenient for the user to associate the behavior scene with the user interface.
  • Step S320: Obtain the behavior data collected by the sensor.
  • Step S330: Perform feature extraction on the behavior data to obtain behavior features.
  • Step S340: Input the behavior features into the trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features.
  • For steps S320 to S340, reference may be made to the content of the foregoing embodiments, and details are not described here again.
  • In some implementations, the wearable electronic device can also add behavior scenarios based on user operations. The interface switching method may therefore further include: acquiring scene-addition data, where the scene-addition data includes a new behavior scene and its corresponding behavior features; and updating the preset model according to the new behavior scene and its corresponding behavior features. It can be understood that the wearable electronic device can acquire a new behavior scene and the behavior features corresponding to that scene, and then input the behavior features labeled with the scene into the preset model for training, thereby updating the preset model.
  • Further, the interface switching method may also include: according to a detected update operation, associating the new behavior scene with any one of the user interfaces of the multiple styles, and then updating the corresponding relationship. It can be understood that by updating the corresponding relationship between the behavior scene and the user interface, the wearable electronic device can switch the user interface to the one corresponding to the new behavior scene when it recognizes that the current behavior scene is the new behavior scene.
  • Step S350: Switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
  • In some implementations, the wearable electronic device may determine the user interface corresponding to the current behavior scene according to the corresponding relationship obtained in step S310, and switch the displayed current interface to that user interface.
  • In the interface switching method provided by the embodiments of the present application, the corresponding relationship between the behavior scene and the user interface is obtained in advance; the behavior data is then obtained and the behavior features are extracted, and the behavior features are input into the trained preset model to obtain the current behavior scene.
  • The user interface corresponding to the current behavior scene is determined according to the corresponding relationship between the behavior scene and the user interface, and the displayed current interface is switched to that user interface, realizing automatic switching of the user interface.
  • In addition, the corresponding relationship between the behavior scene and the user interface can be freely set by the user to meet the needs of different users.
  • FIG. 8 shows a schematic flowchart of an interface switching method provided by still another embodiment of the present application.
  • This interface switching method can be applied to the above-mentioned wearable electronic device.
  • the wearable electronic device includes a sensor for collecting behavior data.
  • the process shown in FIG. 8 will be described in detail below.
  • The interface switching method may specifically include the following steps:
  • Step S410: Obtain a training data set, where the training data set includes sample behavior features labeled with behavior scenes.
  • In this embodiment of the present application, a training method for the preset model is also provided. It is worth noting that the training of the preset model may be performed in advance based on the obtained training data set, and each subsequent behavior-scene recognition can then use the trained preset model, instead of training the model each time a behavior scene is recognized.
  • Specifically, behavior data in different behavior scenarios may be collected, behavior features corresponding to the behavior data may be extracted as sample behavior features, and the sample behavior features may be labeled with their behavior scenes. In this way, sample behavior features corresponding to multiple behavior scenarios can be obtained.
  • The sample behavior features are the input samples used for training, and the labeled behavior scenes are the output samples used for training. Each set of training data may include one input sample and one output sample.
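The pairing of input samples with output samples described above can be sketched as follows. This is a minimal illustration, assuming feature vectors and scene labels of our own choosing; the function name and the example values are hypothetical.

```python
# Illustrative sketch: each set of training data pairs one input sample
# (a behavior-feature vector) with one output sample (a scene label).
def make_training_set(labeled_features):
    """labeled_features: list of (behavior_feature_vector, scene_label)."""
    inputs = [features for features, _ in labeled_features]
    outputs = [scene for _, scene in labeled_features]
    return inputs, outputs

# Hypothetical samples: e.g. [mean acceleration, mean heart rate].
X, y = make_training_set([([0.9, 120.0], "sport"),
                          ([0.1, 65.0], "home")])
```

`X` and `y` would then be fed to whatever classifier plays the role of the preset model.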
  • Step S420: Input the training data set into a neural network and train the neural network to obtain the trained preset model, where the preset model can determine the behavior scene corresponding to a behavior feature according to that behavior feature.
  • In some implementations, the training data set may be input into the neural network for training, so as to obtain the preset model.
  • the neural network may be a deep neural network, which is not limited here.
  • The training of the initial model based on the training data set is described below.
  • The sample behavior features in a set of data in the training data set are used as the input samples of the neural network, and the behavior scene labeled in that set of data is used as the output sample.
  • The neurons in the input layer are fully connected with the neurons in the hidden layer, and the neurons in the hidden layer are fully connected with the neurons in the output layer, which can effectively extract potential features of different granularities.
  • The number of hidden layers can be more than one, so as to better fit nonlinear relationships and make the trained preset model more accurate. It can be understood that the training process of the preset model may or may not be completed by the wearable electronic device.
  • When the training is not completed by the wearable electronic device, the wearable electronic device can serve only as a direct or indirect user; that is, the wearable electronic device can send the acquired behavior features to the server storing the preset model and obtain the behavior-scene recognition result from the server.
  • In addition, the trained preset model may be stored locally in the wearable electronic device, or it may be stored in a server communicatively connected with the wearable electronic device, which can reduce the storage space occupied by the wearable electronic device and improve its operating efficiency.
  • In addition, new training data may be obtained periodically or from time to time to retrain and update the preset model.
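The fully connected structure described above (input layer to hidden layer to output layer, every neuron connected to every neuron of the next layer) can be sketched as a forward pass. This is a toy illustration only: the weights would come from training, the activation choice is an assumption, and none of the names are from the patent.

```python
# Minimal sketch of a fully connected network's forward pass:
# input layer -> hidden layer -> output layer, as described above.
def dense(inputs, weights, biases):
    # One fully connected layer: each output neuron sees every input.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def relu(vector):
    return [max(0.0, x) for x in vector]

def forward(features, w1, b1, w2, b2):
    hidden = relu(dense(features, w1, b1))  # input -> hidden
    logits = dense(hidden, w2, b2)          # hidden -> output
    # Index of the largest output stands in for the predicted scene id.
    return logits.index(max(logits))
```

A trained preset model would use learned values for `w1`, `b1`, `w2`, `b2` and possibly several hidden layers, as the text notes.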
  • Step S430: Obtain the behavior data collected by the sensor.
  • Step S440: Perform feature extraction on the behavior data to obtain behavior features.
  • Step S450: Input the behavior features into the pre-trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features.
  • Step S460: Switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
  • In some implementations, the interface switching method may further include: determining whether a user interface corresponding to the current behavior scene exists in the wearable electronic device; if a user interface corresponding to the current behavior scene exists, switching the current interface displayed by the wearable electronic device to that user interface; and if no user interface corresponding to the current behavior scene exists, acquiring the usage frequency of each user interface among all user interfaces existing in the wearable electronic device, and, according to the usage frequency of each user interface, switching the current interface displayed by the wearable electronic device to the most frequently used user interface.
  • It can be understood that when the wearable electronic device does not store a user interface corresponding to the identified current behavior scene, it cannot switch the displayed current interface to a user interface corresponding to that scene.
  • In this case, the current interface displayed by the wearable electronic device can be switched to the most frequently used user interface, so that the switched-to user interface meets the user's needs as far as possible.
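The fallback just described can be sketched as a small selection function: look up the recognized scene, and if no interface is stored for it, fall back to the interface with the highest usage frequency. The function and dictionary names are illustrative.

```python
# Sketch of the fallback described above: if no user interface is stored
# for the recognized scene, pick the most frequently used interface.
def choose_interface(current_scene, scene_to_interface, usage_counts):
    interface = scene_to_interface.get(current_scene)
    if interface is not None:
        return interface
    # No interface corresponds to this scene: use the highest usage count.
    return max(usage_counts, key=usage_counts.get)

ui = choose_interface("swim",
                      {"sport": "sport_dial"},
                      {"sport_dial": 12, "home_dial": 30})
```

Here `"swim"` has no stored interface, so the call falls back to the most used watch face.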
  • The interface switching method provided by this embodiment of the application thus provides a training method for the preset model: the initial model is trained with sample behavior features labeled with behavior scenes to obtain the trained preset model, which can then output behavior-scene recognition results according to the behavior features corresponding to the collected behavior data.
  • The wearable electronic device obtains the behavior data, extracts the behavior features, inputs the behavior features into the trained preset model to obtain the current behavior scene, and switches the displayed current interface to the user interface corresponding to the current behavior scene, realizing automatic switching of the user interface, reducing user operations, and improving the user experience.
  • FIG. 9 shows a structural block diagram of an interface switching apparatus 400 provided by an embodiment of the present application.
  • the interface switching device 400 is applied to the above-mentioned wearable electronic device, and the wearable electronic device includes a sensor for collecting behavior data.
  • the interface switching device 400 includes: a data acquisition module 410, a feature acquisition module 420, a scene recognition module 430, and an interface switching module 440.
  • The data acquisition module 410 is used to acquire the behavior data collected by the sensor; the feature acquisition module 420 is used to perform feature extraction on the behavior data to obtain behavior features; the scene recognition module 430 is used to input the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features; and the interface switching module 440 is configured to switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
  • the scene recognition module 430 includes a feature processing unit and a feature input unit.
  • the feature processing unit is used to preprocess the behavior feature to obtain the preprocessed behavior feature;
  • the feature input unit is used to input the preprocessed behavior feature into a trained preset model.
  • the feature processing unit may be specifically used to: perform feature cleaning and feature mining on the behavior feature.
  • In some implementations, the interface switching module 440 may also be used to, after switching the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene, switch the user interface displayed by the wearable electronic device to the target user interface corresponding to a switching operation when such an operation is detected.
  • the interface switching device 400 may further include: a model correction module.
  • The model correction module is configured to, if the switching operation is detected within a preset time period and the target user interface corresponds to a target behavior scene, input the behavior features labeled with the target behavior scene into the preset model to perform correction training on the preset model.
  • the interface switching device 400 may further include: a trajectory acquisition module and an operation acquisition module.
  • the trajectory acquisition module is used to acquire the shaking trajectory of the wearable electronic device; the operation acquisition module is used to determine that a switching operation is detected if the shaking trajectory meets a preset trajectory condition.
  • In some implementations, the interface switching module 440 switching the user interface displayed by the wearable electronic device to the target user interface corresponding to the switching operation includes: acquiring the target user interface corresponding to the shaking trajectory, and switching the displayed user interface to that target user interface.
  • the interface switching device 400 may further include: a corresponding relationship acquisition module.
  • the corresponding relationship acquisition module is configured to acquire the corresponding relationship between the behavior scene and the user interface before the current interface displayed by the wearable electronic device is switched to the user interface corresponding to the current behavior scene.
  • the interface switching module 440 may include: an interface determining unit, configured to determine a user interface corresponding to the current behavior scene according to the corresponding relationship; a switching execution unit, configured to display the wearable electronic device The current interface of is switched to the user interface corresponding to the current behavior scene.
  • In some implementations, the corresponding relationship acquisition module may include: an interface display unit, configured to display a setting interface used to set the corresponding relationship between the behavior scene and the user interface; and a corresponding relationship setting unit, configured to set the corresponding relationship between the behavior scene and the user interface according to the setting operation detected in the setting interface, and to store the corresponding relationship.
  • In some implementations, the interface display unit may be specifically configured to: obtain the user interfaces of multiple styles currently existing in the wearable electronic device and the multiple behavior scenarios recognizable by the preset model; and display a setting interface that includes a plurality of first options and a plurality of second options, where the first options correspond one-to-one to the user interfaces of the multiple styles, and the second options correspond one-to-one to the multiple behavior scenarios.
  • In some implementations, the interface switching device 400 may further include: a data acquisition module and a model update module.
  • The data acquisition module is used to acquire scene-addition data, where the scene-addition data includes a new behavior scene and its corresponding behavior features; the model update module is used to update the preset model according to the new behavior scene and its corresponding behavior features.
  • In some implementations, the interface switching device 400 may further include: a correspondence update module, configured to, after the preset model is updated according to the new behavior scene and its corresponding behavior features, associate the new behavior scene with any one of the user interfaces of the multiple styles according to a detected update operation, and then update the corresponding relationship.
  • In some implementations, the corresponding relationship acquisition module may be specifically configured to: receive the corresponding relationship between the behavior scene and the user interface sent by the server, where the corresponding relationship is generated by the mobile terminal based on a detected operation associating the behavior scene with the user interface and is then sent to the server, and the mobile terminal is associated with the wearable electronic device.
  • In some implementations, the interface switching device 400 may further include: a training data acquisition module, for acquiring a training data set that includes sample behavior features labeled with behavior scenes; and a model training module, for inputting the training data set into a neural network and training the neural network to obtain the trained preset model, where the preset model can determine the behavior scene corresponding to a behavior feature according to that behavior feature.
  • In some implementations, the interface switching module 440 may be specifically configured to: determine whether a user interface corresponding to the current behavior scene exists in the wearable electronic device; and if so, switch the current interface displayed by the wearable electronic device to that user interface.
  • The interface switching module 440 may also be used to: if no user interface corresponding to the current behavior scene exists, obtain the usage frequency of each user interface among all the user interfaces existing in the wearable electronic device, and switch the current interface displayed by the wearable electronic device to the most frequently used user interface according to the usage frequency of each user interface.
  • In some implementations, the behavior data includes at least one of positioning data, exercise data, biometric data, environmental data, and time data; and the user interface includes a watch-face interface, a home-screen interface, a lock-screen interface, or an application interface.
  • the coupling between the modules may be electrical, mechanical or other forms of coupling.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software function modules.
  • The preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features, and the current interface displayed by the wearable electronic device is then switched to the user interface corresponding to the current behavior scene.
  • In this way, the current behavior scene in which the wearable electronic device is located is automatically recognized according to the behavior features corresponding to the behavior data collected by the sensor, and the displayed current interface is switched to the user interface corresponding to the current behavior scene, reducing user operations in switching user interfaces and improving the user experience.
  • the wearable electronic device 100 may be an electronic device capable of running application programs, such as a smart watch, a smart bracelet, or smart glasses.
  • the wearable electronic device 100 in this application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where one or more application programs may be stored in the memory 120 and configured to Executed by one or more processors 110, one or more programs are configured to execute the methods described in the foregoing method embodiments.
  • the processor 110 may include one or more processing cores.
  • The processor 110 uses various interfaces and lines to connect the various parts of the entire wearable electronic device 100, and performs the various functions of the wearable electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120.
  • The processor 110 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • the processor 110 may be integrated with one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used for rendering and drawing of display content; the modem is used for processing wireless communication. It can be understood that the above-mentioned modem may not be integrated into the processor 110, but may be implemented by a communication chip alone.
  • The memory 120 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 120 may be used to store instructions, programs, codes, code sets or instruction sets.
  • The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, and an image playback function), instructions for implementing the following method embodiments, and the like.
  • The data storage area can also store data (such as a phone book, audio and video data, and chat records) created by the terminal 100 during use.
  • FIG. 11 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 800 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium 800 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium.
  • The computer-readable storage medium 800 has storage space for the program code 810 that executes any of the method steps in the above methods. The program code can be read from or written into one or more computer program products.
  • The program code 810 may be compressed, for example, in a suitable form.


Abstract

This application discloses an interface switching method and apparatus, a wearable electronic device, and a storage medium. The interface switching method is applied to a wearable electronic device that includes a sensor for collecting behavior data. The method includes: acquiring the behavior data collected by the sensor; performing feature extraction on the behavior data to obtain behavior features; inputting the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features; and switching the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene. This method enables automatic switching of the watch face of the wearable electronic device and improves the user experience.

Description

Interface switching method and apparatus, wearable electronic device, and storage medium — Technical Field
This application relates to the technical field of mobile terminals, and more specifically, to an interface switching method and apparatus, a wearable electronic device, and a storage medium.
Background
Wearable electronic devices, such as smart watches, smart glasses, and smart bracelets, have become one of the most commonly used consumer electronic products in daily life. Because they are convenient to wear and can provide users with more personalized services, wearable electronic devices are favored by more and more consumers. However, switching the user interface on a traditional wearable electronic device requires cumbersome operations, which is inconvenient for the user.
Summary
In view of the above problems, this application proposes an interface switching method and apparatus, a wearable electronic device, and a storage medium.
In a first aspect, an embodiment of this application provides an interface switching method applied to a wearable electronic device, where the wearable electronic device includes a sensor for collecting behavior data. The method includes: acquiring the behavior data collected by the sensor; performing feature extraction on the behavior data to obtain behavior features; inputting the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features; and switching the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
In a second aspect, an embodiment of this application provides an interface switching apparatus applied to a wearable electronic device, where the wearable electronic device includes a sensor for collecting behavior data. The apparatus includes a data acquisition module, a feature acquisition module, a scene recognition module, and an interface switching module. The data acquisition module is configured to acquire the behavior data collected by the sensor; the feature acquisition module is configured to perform feature extraction on the behavior data to obtain behavior features; the scene recognition module is configured to input the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features; and the interface switching module is configured to switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
In a third aspect, an embodiment of this application provides a wearable electronic device, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the interface switching method provided in the first aspect.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing program code, where the program code can be invoked by a processor to perform the interface switching method provided in the first aspect.
In the solution provided by this application, the behavior data collected by the sensor of the wearable electronic device is acquired, feature extraction is performed on the behavior data to obtain behavior features, and the behavior features are input into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features. The current interface displayed by the wearable electronic device is then switched to the user interface corresponding to the current behavior scene. In this way, the current behavior scene of the wearable electronic device is automatically recognized according to the behavior features corresponding to the behavior data collected by the sensor, and the displayed current interface is switched to the user interface corresponding to the current behavior scene, reducing user operations when switching user interfaces and improving the user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a flowchart of an interface switching method according to an embodiment of this application.
FIG. 2 shows a schematic diagram of an interface provided by an embodiment of this application.
FIG. 3 shows a schematic diagram of another interface provided by an embodiment of this application.
FIG. 4 shows a schematic diagram of still another interface provided by an embodiment of this application.
FIG. 5 shows a flowchart of an interface switching method according to another embodiment of this application.
FIG. 6 shows a flowchart of an interface switching method according to still another embodiment of this application.
FIG. 7 shows a flowchart of step S310 in the interface switching method provided by still another embodiment of this application.
FIG. 8 shows a flowchart of an interface switching method according to yet another embodiment of this application.
FIG. 9 shows a block diagram of an interface switching apparatus according to an embodiment of this application.
FIG. 10 is a block diagram of a wearable electronic device for performing the interface switching method according to the embodiments of this application.
FIG. 11 is a storage unit, according to an embodiment of this application, for storing or carrying program code that implements the interface switching method according to the embodiments of this application.
Detailed Description
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application.
With the development of technology and living standards, wearable electronic devices (such as smart watches) appear more and more frequently in daily life. A wearable electronic device is not limited to displaying the time; it can also display information such as weather information, health information, and reminders. The styles of the user interfaces (such as watch faces) of wearable electronic devices therefore differ, and different users have different needs for user-interface styles, so a single style of user interface cannot meet every user's needs; as a result, a wearable electronic device usually contains user interfaces of multiple styles.
Through long-term research, the inventor found that in traditional technology, when a wearable electronic device contains user interfaces of multiple styles, switching the user interface usually requires cumbersome user operations. For example, to switch the watch face, the user needs to enter the watch-face setting interface, browse the currently available watch faces, and then select the desired watch-face style; the whole process takes considerable time.
In view of the above problems, the inventor proposes the interface switching method and apparatus, wearable electronic device, and storage medium provided by the embodiments of this application, which automatically recognize the current behavior scene in which the wearable electronic device is located according to the behavior features corresponding to the behavior data collected by the sensor, and switch the displayed current interface to the user interface corresponding to the current behavior scene, reducing user operations when switching user interfaces and improving the user experience. The specific interface switching method is described in detail in the following embodiments.
Referring to FIG. 1, FIG. 1 shows a schematic flowchart of an interface switching method provided by an embodiment of this application. The interface switching method is used to automatically recognize the current behavior scene in which the wearable electronic device is located according to the behavior features corresponding to the behavior data collected by the sensor, and to switch the displayed current interface to the user interface corresponding to the current behavior scene, reducing user operations when switching user interfaces and improving the user experience. In a specific embodiment, the interface switching method is applied to the interface switching apparatus 400 shown in FIG. 9 and to the wearable electronic device 100 (FIG. 10) equipped with the interface switching apparatus 400. The wearable electronic device may include a sensor for collecting behavior data; the sensor may include an acceleration sensor, a gyroscope sensor, a gravity sensor, a heart-rate sensor, a brain-wave sensor, a positioning sensor, an infrared sensor, and the like, which are not limited here. The specific process of this embodiment is described below by taking an electronic device as an example; it can be understood that the wearable electronic device to which this embodiment applies may be a smart watch, a smart bracelet, or the like, which is not limited here. The process shown in FIG. 1 is described in detail below, and the interface switching method may specifically include the following steps:
Step S110: Acquire the behavior data collected by the sensor.
In this embodiment of the present application, the mobile terminal may be provided with multiple kinds of sensors for collecting behavior data, such as an acceleration sensor, a gyroscope sensor, a gravity sensor, a heart-rate sensor, a brain-wave sensor, a positioning sensor, and an infrared sensor. The behavior data refers to data used to characterize user behavior. User behavior may include different types of behavior such as walking, staying still, running, squatting, hand movements, and head shaking; the specific user behavior is not limited here. Under different user behaviors, the behavior data collected by these sensors may differ. The behavior data can also reflect features of certain behavior scenes; for example, the scene of a user walking along the route toward the home location can be reflected by positioning data, exercise data, biometric data, environmental data, and time data. Therefore, the behavior data collected by the sensors can be used to recognize the behavior scene in which the wearable electronic device is located.
In some implementations, when recognizing the behavior scene, the wearable electronic device can acquire the behavior data collected by the sensors used to collect behavior data. The behavior data acquired by the mobile terminal may include data collected by multiple kinds of sensors; for example, it may acquire the behavior data collected by all the sensors that can collect behavior data, or only by some of them, which is not limited here. It can be understood that the more kinds of behavior data are acquired (that is, the more kinds of sensors collect behavior data), the more features are available for recognizing the behavior scene and the higher the feature dimensionality, which can improve the accuracy of behavior-scene recognition.
In some implementations, the wearable electronic device can acquire the behavior data collected by the sensors within a preset time period for subsequent behavior-scene recognition. It can be understood that using behavior data collected over a period of time makes the subsequently recognized behavior scene more accurate.
Step S120: Perform feature extraction on the behavior data to obtain behavior features.
In this embodiment of the present application, after acquiring the behavior data collected by the sensors, the wearable electronic device can perform feature extraction on the behavior data to obtain behavior features, for the subsequent process of recognizing the behavior scene from those features.
In some implementations, after acquiring the behavior data collected by the sensors, the wearable electronic device can extract features from the acquired behavior data. The extracted features may include time-series features, frequency-domain features, statistical features, and the like; the specific features extracted are not limited here.
As one implementation, the behavior data collected by the sensors may be time-series data (that is, time-domain data). The mobile terminal can extract statistical features by obtaining the median, mean, maximum, minimum, peak, and similar values from the behavior data detected by the sensors, thereby obtaining the statistical features of the time-series data; the mobile terminal can also obtain, from the time-series data, the current value at a certain point on the time axis and the values before and after preset time intervals, thereby obtaining time-series features, which are not limited here.
As another implementation, the mobile terminal can also perform a fast Fourier transform on the time-series data to obtain frequency-domain data, then separate the high-frequency and low-frequency signals, calculate the overall energy of the frequency-domain signal, and take at least some of the coefficients as frequency-domain features. For example, after performing a fast Fourier transform on the curve of acceleration over time, a curve of acceleration versus frequency can be obtained, from which frequency-domain features can be calculated. Of course, the specific manner of extracting the behavior features corresponding to the behavior data is not limited here.
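The frequency-domain feature extraction described above can be sketched as follows. For clarity a naive discrete Fourier transform stands in for the FFT, and "total energy plus the first few coefficients" is one possible reading of the text; the function names and the choice of three coefficients are illustrative.

```python
import math

# Illustrative sketch: transform a time series to the frequency domain
# and keep the overall energy plus a few coefficients as features.
def dft_magnitudes(samples):
    # Naive DFT magnitude spectrum; an FFT would compute the same values.
    n = len(samples)
    mags = []
    for k in range(n):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(samples))
        im = -sum(x * math.sin(2 * math.pi * k * i / n)
                  for i, x in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def frequency_features(samples, num_coeffs=3):
    mags = dft_magnitudes(samples)
    # Overall energy of the frequency-domain signal, then some coefficients.
    return [sum(m * m for m in mags)] + mags[:num_coeffs]
```

Applied to, say, a window of accelerometer samples, the returned vector would join the statistical and time-series features extracted above.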
Step S130: Input the behavior features into a trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features.
In some implementations, after the behavior features are obtained from the behavior data, the obtained behavior features can be input into the trained preset model to obtain the behavior-scene recognition result output by the preset model according to the input behavior features; the recognition result includes a behavior scene, and the recognized behavior scene is taken as the current behavior scene.
In some implementations, the preset model can be trained with a large number of training samples. The training samples used to train the preset model may include input samples and output samples, where the input samples may include behavior features and the output samples may include the behavior scenes corresponding to the behavior features. The trained preset model can thus output a recognition result according to the input behavior features, and the recognition result can include the behavior scene in which the wearable electronic device is currently located. In an optional implementation, the recognition result output by the trained preset model may be a scene identifier of the behavior scene, where different behavior scenes may be stored in the form of different scene identifiers; for example, different behavior scenes may be identified by a one-byte integer (that is, 0 to 255).
In some implementations, the preset model may include a support vector machine (SVM), a neural network, a decision-tree model, and the like, which are not limited here.
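The recognition step above can be sketched as a thin wrapper around whatever classifier plays the role of the preset model, translating the model's scene identifier (the text's one-byte integer example) into a scene. The toy rule standing in for a trained model, and all names, are hypothetical.

```python
# Sketch of scene recognition with a trained preset model. The scene-id
# encoding follows the text's example of one-byte integer identifiers.
SCENE_IDS = {0: "sport", 1: "work", 2: "home"}

class PresetModel:
    """Stand-in for a trained classifier (SVM, neural network, ...)."""
    def __init__(self, predict_fn):
        self._predict_fn = predict_fn

    def recognize(self, behavior_features):
        scene_id = self._predict_fn(behavior_features)
        return SCENE_IDS[scene_id]

# A toy rule stands in for the trained model here: high activity -> sport.
model = PresetModel(lambda features: 0 if features[0] > 0.5 else 2)
```

In practice `predict_fn` would be the inference call of the trained SVM, neural network, or decision tree.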
Step S140: Switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
In some implementations, after recognizing the behavior scene in which the wearable electronic device is located, the wearable electronic device can determine, according to that behavior scene, the user interface corresponding to it, so that the current interface of the wearable electronic device can be switched to the user interface corresponding to the current behavior scene, realizing automatic switching of the user interface, with the switched-to user interface corresponding to the current behavior scene.
In some implementations, the wearable electronic device may pre-store the user interfaces corresponding to different behavior scenes, so that the wearable electronic device can determine the user interface to switch to according to the recognized behavior scene. For example, when the recognition result includes the scene identifier of the behavior scene, the user interface corresponding to that scene identifier can be read, and the current interface of the wearable electronic device can be switched to the determined user interface.
In some implementations, the user interface may include a watch-face interface, a home-screen interface, a lock-screen interface, or an application interface. That is, the wearable electronic device can automatically switch any of the watch-face interface, home-screen interface, lock-screen interface, and application interface, reducing user operations, and the switched-to interface corresponds to the current behavior scene, meeting the user's needs.
In some application scenarios, the user interface may be a watch-face interface, and the behavior scenes may include a sport scene, a home scene, and a work scene. If the current behavior scene is the sport scene, the current watch face can be switched to the watch face corresponding to the sport scene; as shown in FIG. 2, after the wearable electronic device switches to the sport watch face, it can display sport-related content such as the time, step count, calories burned, and heart rate. If the current behavior scene is the work scene, the current watch face can be switched to the watch face corresponding to the work scene; as shown in FIG. 3, the work watch face can display work-related content such as the date, time, and work schedule. If the current behavior scene is the home scene, the current watch face can be switched to the watch face corresponding to the home scene; as shown in FIG. 4, the home watch face can display home-related content such as the date, time, weather, and TV-program reminders. Of course, the above behavior scenes and watch-face contents are only examples and do not limit the actual behavior scenes and watch faces.
Through the interface switching method provided by this embodiment of the present application, the behavior data collected by the sensor of the wearable electronic device is acquired, feature extraction is performed on the behavior data to obtain behavior features, the behavior features are input into the trained preset model to obtain the current behavior scene in which the wearable electronic device is located, and the current interface displayed by the wearable electronic device is then switched to the user interface corresponding to the current behavior scene. In this way, the current behavior scene is automatically recognized according to the behavior features corresponding to the behavior data collected by the sensor, and the displayed current interface is switched to the user interface corresponding to the current behavior scene, meeting the user's needs for the user interface, reducing user operations when switching user interfaces, and improving the user experience.
Referring to FIG. 5, FIG. 5 shows a schematic flowchart of an interface switching method provided by another embodiment of this application. The interface switching method can be applied to the above wearable electronic device, which includes a sensor for collecting behavior data. The process shown in FIG. 5 is described in detail below, and the interface switching method may specifically include the following steps:
Step S210: Acquire the behavior data collected by the sensor.
Step S220: Perform feature extraction on the behavior data to obtain behavior features.
In this embodiment of the present application, for step S210 and step S220, reference may be made to the content of the foregoing embodiments, and details are not described here again.
Step S230: Preprocess the behavior features to obtain preprocessed behavior features.
In this embodiment of the present application, after obtaining the behavior features and before inputting them into the trained preset model, the wearable electronic device can also preprocess the behavior features.
In some implementations, preprocessing the behavior features may include: performing feature cleaning and feature mining on the behavior features. Feature cleaning includes removing content from the behavior features according to preset cleaning rules; feature mining includes mining the behavior features to form features of more dimensions.
In some implementations, the mobile terminal performing feature cleaning on the behavior features may include: removing missing values and outliers from the behavior features, for example, removing incomplete data and data of the wrong type. As a specific implementation, the feature cleaning may be missing-value processing: for a dimension whose missing values are below a preset percentage, the missing values can be fitted from the other values of that dimension; if the number of missing values exceeds the preset percentage, the feature is considered invalid and the dimension is removed. The preset percentage may be 35%, 40%, or the like; the specific preset percentage is not limited here.
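The missing-value rule just described can be sketched per feature dimension. The 40% threshold is one of the example values from the text, and mean imputation is one possible way to "fit" the missing values from the other values of the dimension; both choices are illustrative.

```python
# Sketch of the missing-value rule described above: if more than a preset
# percentage of a feature dimension is missing, drop the dimension;
# otherwise fill the gaps from the other values (mean imputation here).
def clean_dimension(values, max_missing_ratio=0.4):
    present = [v for v in values if v is not None]
    missing_ratio = 1 - len(present) / len(values)
    if missing_ratio > max_missing_ratio:
        return None  # invalid feature dimension: remove it
    fill = sum(present) / len(present)
    return [v if v is not None else fill for v in values]
```

A dimension returned as `None` would simply be dropped from the feature vector before feature mining.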
In some implementations, performing feature mining on the behavior features may include: mining the cleaned behavior features with a boosting-tree model. Before inputting the behavior features into the boosting-tree model, the mobile terminal can also quantize the numerical features in the behavior features, so that the behavior features are quantized into a vector. The quantized vector is then input into the boosting-tree model, which outputs a multi-dimensional feature vector according to the input vector, thereby obtaining the preprocessed behavior features.
Step S240: Input the preprocessed behavior features into the trained preset model to obtain the current behavior scene in which the wearable electronic device is located, where the preset model is pre-trained to output a behavior-scene recognition result according to the input behavior features.
Step S250: Switch the current interface displayed by the wearable electronic device to the user interface corresponding to the current behavior scene.
In this embodiment of the present application, for step S240 and step S250, reference may be made to the content of the foregoing embodiments, and details are not described here again.
Step S260: When a switching operation is detected, switch the user interface displayed by the wearable electronic device to the target user interface corresponding to the switching operation.
In this embodiment of the present application, after the current interface displayed by the wearable electronic device is switched to the user interface corresponding to the current behavior scene, the user may be dissatisfied with the automatically switched user interface. The wearable electronic device can therefore also detect a switching operation on the displayed user interface, and when a switching operation is detected, switch the current interface displayed by the wearable electronic device to the target user interface corresponding to the switching operation, so as to meet the user's needs.
In some implementations, the switching operation on the user interface may be a shaking operation on the wearable electronic device. Specifically, detecting the switching operation may include: acquiring the shaking trajectory of the wearable electronic device; and if the shaking trajectory meets a preset trajectory condition, determining that a switching operation is detected. It can be understood that the wearable electronic device is usually worn on the user's body, and after the displayed user interface is automatically switched to the one corresponding to the behavior scene, if the user is dissatisfied with the automatically switched user interface, the user can switch the user interface by shaking the wearable electronic device according to preset rules, which likewise makes it convenient for the user to switch the user interface. As an optional implementation, the wearable electronic device can determine its shaking trajectory according to the angular-velocity values detected by the gyroscope sensor, among others. The preset trajectory condition may be a preset judgment condition for determining whether to switch the user interface; for example, it may be that the acquired shaking trajectory is any one of multiple predefined shaking trajectories.
In this implementation, step S260 may include: acquiring the target user interface corresponding to the shaking trajectory; and switching the user interface displayed by the wearable electronic device to the target user interface. The wearable electronic device may pre-store the corresponding relationship between shaking trajectories and user interfaces, with different shaking trajectories corresponding to different user interfaces; when determining that the shaking trajectory meets the preset trajectory condition, the wearable electronic device can determine the user interface corresponding to the shaking trajectory according to this corresponding relationship. The user can thus switch the displayed user interface to the desired one by shaking the smart wearable device along different trajectories, which makes interface switching convenient and improves the user experience.
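The trajectory-to-interface lookup described above can be sketched as follows. The trajectory names, the stored mapping, and the "unrecognized trajectory means no switching operation" behavior are hypothetical; in a real device the trajectory would be classified from gyroscope angular-velocity data first.

```python
# Hypothetical sketch of mapping a detected shaking trajectory to a
# target user interface, as described above. All names are illustrative.
TRAJECTORY_TO_INTERFACE = {
    "double_flick": "sport_dial",
    "circle": "home_dial",
}

def handle_shake(trajectory, current_interface):
    target = TRAJECTORY_TO_INTERFACE.get(trajectory)
    if target is None:
        # Preset trajectory condition not met: no switching operation.
        return current_interface
    return target
```

Each stored trajectory thus acts as a shortcut to one particular user interface.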
步骤S270：如果所述切换操作为预设时长内检测到的，并且目标用户界面与目标行为场景对应，将标注有所述目标行为场景的所述行为特征，输入所述预设模型，对所述预设模型进行校正训练。
在本申请实施例中，可穿戴电子设备在根据切换操作将显示的界面切换为对应的用户界面之后，还可以确定该切换操作是否为预设时长内检测到的操作，该预设时长可以为较短的时间长度，例如可以为1分钟~5分钟。如果为预设时长内检测到的操作，则表示此前根据识别的行为场景对用户界面进行的切换有误，也即表示此前识别的行为场景有误。因此，可以确定该目标用户界面对应的目标行为场景，然后将标注有目标行为场景的行为特征（识别到的行为特征），输入到预设模型，对预设模型进行校正训练。也就是将行为特征作为输入样本，将目标行为场景作为输出样本，对该预设模型进行训练，达到对预设模型的校正目的，使预设模型的输出结果的准确性更高。
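“预设时长内检测到切换操作则构造校正训练样本”的判断可以用如下 Python 片段示意。时间以秒为单位，预设时长取 2 分钟仅为示例假设，样本的具体形式也仅作说明：

```python
# 示意性示例（假设实现）：预设时长内检测到切换操作时，
# 用标注了目标行为场景的行为特征构造（输入样本, 输出样本）用于校正训练。
PRESET_SECONDS = 120  # 预设时长，示例取 2 分钟（落在 1~5 分钟范围内）

def maybe_collect_correction(auto_switch_time, manual_switch_time,
                             behavior_features, target_scene):
    """若切换操作发生在自动切换后的预设时长内，说明此前识别的行为场景有误，
    返回 (行为特征作为输入样本, 目标行为场景作为输出样本)；否则返回 None。"""
    if manual_switch_time - auto_switch_time <= PRESET_SECONDS:
        return (behavior_features, target_scene)
    return None
```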
本申请实施例提供的界面切换方法，通过获取采集的行为数据，提取得到行为特征，然后对行为特征进行预处理，获得预处理后的行为特征，再将预处理后的行为特征输入已训练的预设模型，获得可穿戴电子设备所处的当前行为场景，然后将显示的当前界面切换为当前行为场景对应的用户界面，并在检测到切换操作时，将显示的界面切换为与切换操作对应的目标用户界面，而且在切换操作为预设时长内检测到时，对预设模型进行校正。从而通过对行为特征进行预处理，提升识别的准确率；另外，对预设模型进行校正，也可以提升行为场景识别的准确率。
请参阅图6,图6示出了本申请又一个实施例提供的界面切换方法的流程示意图。该界面切换方法可以应用于上述可穿戴电子设备,可穿戴电子设备包括用于采集行为数据的传感器,下面将针对图6所示的流程进行详细的阐述,所述界面切换方法具体可以包括以下步骤:
步骤S310:获取行为场景与用户界面的对应关系。
在本申请实施例中，可穿戴电子设备可以预先获取行为场景与用户界面的对应关系，其中，不同行为场景对应的用户界面中，可以为不同行为场景对应不同用户界面，也可以为多种行为场景对应同一用户界面，具体的对应关系可以不做限定。
在一些实施方式中,请参阅图7,步骤S310可以包括:
步骤S311:显示设置界面,所述设置界面用于设置行为场景与用户界面的对应关系。
在一些实施方式中,显示设置界面,可以包括:获取所述可穿戴电子设备中当前存在的多种样式的用户界面,以及所述预设模型可识别的多种行为场景;显示包括多个第一选项以及多个第二选项的设置界面,所述第一选项与所述多种样式的用户界面一一对应,所述第二选项与所述多种行为场景一一对应。可以理解的,可穿戴电子设备中可以预先存储有多种样式的用户界面,并且存储有预设模型可识别的多种行为场景,根据多种样式的用户界面以及多种行为场景,即可显示包括多个第一选项和多个第二选项的设置界面,从而便于用户通过第一选项以及第二选项,将行为场景与用户界面关联。
步骤S312：根据在所述设置界面中检测到的设置操作，设置行为场景与用户界面的对应关系，并将所述对应关系进行存储。
在一些实施方式中,可穿戴电子设备可以检测设置界面中的设置操作,并根据设置操作,设置行为场景与用户界面的对应关系后,将该对应关系进行存储,以便可穿戴电子设备在进行用户界面的切换时,利用该对应关系进行用户界面的确定。
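“根据设置操作建立并存储行为场景与用户界面的对应关系”可以用如下 Python 片段示意，其中类名、场景名与界面名均为示例假设，并非本申请的实际实现：

```python
# 示意性示例（假设实现）：建立并存储行为场景与用户界面的对应关系，
# 允许多种行为场景对应同一用户界面。
class CorrespondenceStore:
    def __init__(self):
        self._scene_to_ui = {}

    def set_mapping(self, scene, ui):
        """一次设置操作：将某个行为场景（第二选项）与某个用户界面（第一选项）关联。"""
        self._scene_to_ui[scene] = ui

    def ui_for_scene(self, scene):
        """进行用户界面切换时，利用已存储的对应关系确定目标用户界面。"""
        return self._scene_to_ui.get(scene)

store = CorrespondenceStore()
store.set_mapping("sport", "sport_face")
store.set_mapping("home", "simple_face")
store.set_mapping("work", "simple_face")  # 多种行为场景对应同一用户界面
```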
在一些实施方式中,可穿戴电子设备也可以先显示第一设置界面,第一设置界面包括用于选择行为场景的场景选项,在检测到对行为场景对应的场景选项的触控操作后,再展示第二设置界面,第二设置界面包括用于选择用户界面的界面选项,在检测到对界面选项的触控操作后,将被操作的场景选项对应的行为场景与被操作的界面选项对应的用户界面进行关联,从而获得该行为场景与该用户界面之间的对应关系。
在另一些实施方式中,步骤S310也可以包括:接收服务器发送的行为场景与用户界面的对应关系,所述对应关系为移动终端根据检测到的对行为场景与用户界面的关联操作,生成所述对应关系后发送至所述服务器,其中,所述移动终端与所述可穿戴电子设备关联。可以理解的,通过该方式,可以实现用户通过与可穿戴电子设备关联的手机、平板等,对行为场景与用户界面进行关联,并生成行为场景与用户界面的对应关系,由于可穿戴电子设备的屏幕较小,通过该实施方式可以方便用户对行为场景与用户界面进行关联。
需要说明的是,可穿戴电子设备在每次执行界面切换方法时,不必每次都获取行为场景与用户界面的对应关系。
步骤S320:获取所述传感器采集的行为数据。
步骤S330:对所述行为数据进行特征提取,获得行为特征。
步骤S340:将所述行为特征输入已训练的预设模型,获得所述可穿戴电子设备所处的当前行为场景,所述预设模型被预先训练,以根据输入的行为特征而输出行为场景的识别结果。
在本申请实施例中,步骤S320至步骤S340可以参阅前述实施例的内容,在此不再赘述。
在一些实施方式中,可穿戴电子设备还可以根据用户的操作,对行为场景进行增添。因此,该界面切换方法还可以包括:获取场景增添数据,所述场景增添数据包括新的行为场景及其对应的行为特征;根据新的行为场景及其对应的行为特征,对所述预设模型进行更新。可以理解的,可穿戴电子设备可以获取新的行为场景以及该行为场景对应的行为特征,然后将标注有该行为场景的该行为特征,输入到预设模型进行训练,以实现对预设模型的更新。
进一步的,在新增行为场景后,还可以对行为场景与用户界面的对应关系进行更新,因此,该界面切换方法还可以包括:根据检测到的更新操作,将所述新的行为场景与所述多种样式的用户界面中任一用户界面关联后,对所述对应关系进行更新。可以理解的,通过对行为场景与用户界面的对应关系进行更新,使得可穿戴电子设备在识别到行为场景为该新的行为场景时,可以将用户界面切换为该新的行为场景对应的用户界面。
步骤S350:将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
在本申请实施例中,可穿戴电子设备可以根据步骤S310中获得的对应关系,确定当前行为场景对应的用户界面,并将显示的当前界面切换为当前行为场景对应的用户界面。
本申请实施例提供的界面切换方法,通过预先获取行为场景与用户界面的对应关系,然后获取行为数据,并提取行为特征,再将行为特征输入到已训练的预设模型,获得所处的当前行为场景,根据行为场景与用户界面的对应关系,确定当前行为场景对应的用户界面,并将显示的当前界面切换为当前行为场景对应的用户界面,实现对用户界面的自动切换。并且行为场景与用户界面的对应关系可以由用户自由设置,满足不同用户的需求。
请参阅图8，图8示出了本申请再一个实施例提供的界面切换方法的流程示意图。该界面切换方法可以应用于上述可穿戴电子设备，可穿戴电子设备包括用于采集行为数据的传感器，下面将针对图8所示的流程进行详细的阐述，所述界面切换方法具体可以包括以下步骤：
步骤S410:获取训练数据集合,所述训练数据集合包括被标注有行为场景的样本行为特征。
在本申请实施例中,针对前述实施例中的预设模型,本申请实施例中还包括对该预设模型的训练方法,值得说明的是,对预设模型的训练可以是根据获取的训练数据集合预先进行的,后续在每次进行行为场景的识别时,则可以利用预设模型进行,而无需每次进行行为场景的识别时,对模型进行训练。
在本申请实施例中,可以通过采集不同行为场景下的行为数据,并提取行为数据对应的行为特征,作为样本行为特征,并将样本行为特征标注上行为场景。按照该方式,可以获得多个行为场景对应的样本行为特征。
训练数据集合中,样本行为特征即为用于进行训练的输入样本,被标注的行为场景即为用于进行训练的输出样本,每组训练数据可以包括一个输入样本和一个输出样本。
步骤S420:将所述训练数据集合输入神经网络,对所述神经网络进行训练,获得已训练的所述预设模型,所述预设模型能够根据行为特征确定该行为特征对应的行为场景。
在本申请实施例中,可以根据训练数据集合,将训练数据集合输入至神经网络进行训练,从而得到预设模型。其中,神经网络可以为深度神经网络,在此不做限定。
下面对根据训练数据集合训练初始模型进行说明。
训练数据集合中一组数据中的样本行为特征作为神经网络的输入样本，一组数据中标注的行为场景可以作为神经网络的输出样本。在神经网络中，输入层中的神经元与隐藏层的神经元全连接，隐藏层的神经元与输出层的神经元全连接，从而能够有效提取不同粒度的潜在特征。并且隐藏层数目可以为多个，从而能更好地拟合非线性关系，使得训练得到的预设模型更加准确。可以理解的，对预设模型的训练过程可以由可穿戴电子设备完成，也可以不由可穿戴电子设备完成。当训练过程不由可穿戴电子设备完成时，可穿戴电子设备可以作为直接使用者，也可以作为间接使用者，即可穿戴电子设备可以将获取的行为特征发送至存储有预设模型的服务器，从服务器获取行为场景的识别结果。
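上述“输入层-隐藏层-输出层全连接、隐藏层可以有多个”的结构，可以用如下极简的 Python 前向计算片段示意（仅说明结构，不含训练过程；权重与层数均为示例假设）：

```python
# 示意性示例（假设实现）：极简全连接前馈网络的前向计算
def dense(inputs, weights, biases):
    """一层全连接：每个输出神经元与上一层所有神经元相连。"""
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

def relu(vector):
    return [max(0.0, x) for x in vector]

def forward(features, layers):
    """layers: [(weights, biases), ...]，可以包含多个隐藏层以拟合非线性关系。"""
    out = features
    for i, (w, b) in enumerate(layers):
        out = dense(out, w, b)
        if i < len(layers) - 1:
            out = relu(out)  # 隐藏层使用非线性激活
    return out
```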
在一些实施方式中，训练得到的预设模型可以存储于可穿戴电子设备本地，也可以存储于与可穿戴电子设备通信连接的服务器。将预设模型存储在服务器的方式，可以减少占用可穿戴电子设备的存储空间，提升可穿戴电子设备的运行效率。
在一些实施方式中，预设模型可以周期性地或者不定期地获取新的训练数据，对该预设模型进行训练和更新。
步骤S430:获取所述传感器采集的行为数据。
步骤S440:对所述行为数据进行特征提取,获得行为特征。
步骤S450:将所述行为特征输入已训练的预设模型,获得所述可穿戴电子设备所处的当前行为场景,所述预设模型被预先训练,以根据输入的行为特征而输出行为场景的识别结果。
步骤S460:将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
在一些实施方式中，在将可穿戴电子设备显示的当前界面切换为当前行为场景对应的用户界面之前，该界面切换方法还可以包括：确定所述可穿戴电子设备中是否存在所述当前行为场景对应的用户界面；如果存在所述当前行为场景对应的用户界面，将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面；如果不存在所述当前行为场景对应的用户界面，则获取所述可穿戴电子设备中存在的所有用户界面中每个用户界面的使用频率；根据所述每个用户界面的使用频率，将所述可穿戴电子设备显示的当前界面切换为使用频率最高的用户界面。
可以理解的，如果可穿戴电子设备中没有存储识别到的当前行为场景对应的用户界面，则可穿戴电子设备此时不能将显示的当前界面切换为当前行为场景对应的用户界面。该情况下，可以将可穿戴电子设备显示的当前界面切换为使用频率最高的用户界面，以尽可能使切换后的用户界面满足用户的需求。
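“优先切换到场景对应界面、否则回退到使用频率最高的界面”这一选择逻辑可以用如下 Python 片段示意，其中数据结构与名称均为示例假设：

```python
# 示意性示例（假设实现）：确定应切换到的用户界面
def choose_ui(current_scene, scene_to_ui, usage_frequency):
    """scene_to_ui: 行为场景 -> 用户界面；
    usage_frequency: 设备中存在的每个用户界面 -> 使用频率。"""
    ui = scene_to_ui.get(current_scene)
    if ui is not None and ui in usage_frequency:
        return ui  # 存在当前行为场景对应的用户界面，直接切换
    # 不存在对应的用户界面：切换为使用频率最高的用户界面
    return max(usage_frequency, key=usage_frequency.get)
```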
本申请实施例提供的界面切换方法,提供了预设模型的训练方法,通过被标注有行为场景的样本行为特征,对初始模型进行训练,从而得到已训练的预设模型,预设模型可以用于根据采集的行为数据对应的行为特征,而输出行为场景的识别结果。可穿戴电子设备通过获取行为数据,并提取行为特征,再将行为特征输入到已训练的预设模型,获得所处的当前行为场景,将显示的当前界面切换为当前行为场景对应的用户界面,实现对用户界面的自动切换,减少用户的操作,提升用户体验。
请参阅图9,其示出了本申请实施例提供的一种界面切换装置400的结构框图。该界面切换装置400应用于上述可穿戴电子设备,所述可穿戴电子设备包括用于采集行为数据的传感器。该界面切换装置400包括:数据获取模块410、特征获取模块420、场景识别模块430以及界面切换模块440。其中,所述数据获取模块410用于获取所述传感器采集的行为数据;所述特征获取模块420用于对所述行为数据进行特征提取,获得行为特征;所述场景识别模块430用于将所述行为特征输入已训练的预设模型,获得所述可穿戴电子设备所处的当前行为场景,所述预设模型被预先训练,以根据输入的行为特征而输出行为场景的识别结果;所述界面切换模块440用于将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
在一些实施方式中,场景识别模块430包括特征处理单元以及特征输入单元。特征处理单元用于对所述行为特征进行预处理,获得预处理后的行为特征;特征输入单元用于将所述预处理后的行为特征输入已训练的预设模型。
在该实施方式下,特征处理单元可以具体用于:对所述行为特征进行特征清洗以及特征挖掘。
在一些实施方式中,界面切换模块440还可以用于在将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面之后,当检测到切换操作时,将所述可穿戴电子设备显示的所述用户界面切换为切换操作对应的目标用户界面。
在一些实施方式中，该界面切换装置400还可以包括：模型校正模块。模型校正模块用于如果所述切换操作为预设时长内检测到的，并且目标用户界面与目标行为场景对应，将标注有所述目标行为场景的所述行为特征，输入所述预设模型，对所述预设模型进行校正训练。
在一些实施方式中,该界面切换装置400还可以包括:轨迹获取模块以及操作获取模块。轨迹获取模块用于获取所述可穿戴电子设备的晃动轨迹;操作获取模块用于如果所述晃动轨迹满足预设轨迹条件,确定检测到切换操作。
进一步的,界面切换模块440将所述可穿戴电子设备显示的所述用户界面切换为切换操作对应的目标用户界面,包括:获取所述晃动轨迹对应的目标用户界面;将所述可穿戴电子设备显示的所述用户界面切换为所述目标用户界面。
在一些实施方式中,该界面切换装置400还可以包括:对应关系获取模块。对应关系获取模块用于在所述将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面之前,获取行为场景与用户界面的对应关系。
在该实施方式下,界面切换模块440可以包括:界面确定单元,用于根据所述对应关系,确定所述当前行为场景对应的用户界面;切换执行单元,用于将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
在该实施方式下，对应关系获取模块可以包括：界面显示单元，用于显示设置界面，所述设置界面用于设置行为场景与用户界面的对应关系；对应关系设置单元，用于根据在所述设置界面中检测到的设置操作，设置行为场景与用户界面的对应关系，并将所述对应关系进行存储。
进一步的,界面显示单元可以具体用于:获取所述可穿戴电子设备中当前存在的多种样式的用户界面,以及所述预设模型可识别的多种行为场景;显示包括多个第一选项以及多个第二选项的设置界面,所述第一选项与所述多种样式的用户界面一一对应,所述第二选项与所述多种行为场景一一对应。
在一些实施方式中，该界面切换装置400还可以包括：数据获取模块以及模型更新模块。数据获取模块用于获取场景增添数据，所述场景增添数据包括新的行为场景及其对应的行为特征；模型更新模块用于根据新的行为场景及其对应的行为特征，对所述预设模型进行更新。
进一步的,该界面切换装置400还可以包括:对应关系更新模块,用于在所述根据新的行为场景及其对应的行为特征,对所述预设模型进行更新之后,根据检测到的更新操作,将所述新的行为场景与所述多种样式的用户界面中任一用户界面关联后,对所述对应关系进行更新。
在该实施方式下,对应关系获取模块可以具体用于:接收服务器发送的行为场景与用户界面的对应关系,所述对应关系为移动终端根据检测到的对行为场景与用户界面的关联操作,生成所述对应关系后发送至所述服务器,其中,所述移动终端与所述可穿戴电子设备关联。
在一些实施方式中,该界面切换装置400还可以包括:训练数据获取模块,用于获取训练数据集合,所述训练数据集合包括被标注有行为场景的样本行为特征;模型训练模块,用于将所述训练数据集合输入神经网络,对所述神经网络进行训练,获得已训练的所述预设模型,所述预设模型能够根据行为特征确定该行为特征对应的行为场景。
在一些实施方式中,界面切换模块440可以具体用于:确定所述可穿戴电子设备中是否存在所述当前行为场景对应的用户界面;如果存在所述当前行为场景对应的用户界面,将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
进一步的,界面切换模块440还可以用于:如果不存在所述当前行为场景对应的用户界面,则获取所述可穿戴电子设备中存在的所有用户界面中每个用户界面的使用频率;根据所述每个用户界面的使用频率,将所述可穿戴电子设备显示的当前界面切换为使用频率最高的用户界面。
在一些实施方式中,所述行为数据包括:定位数据、运动数据、生物特征数据、环境数据以及时间数据中的至少一种;所述用户界面包括表盘界面、主屏幕界面、锁屏界面、或者应用界面。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,模块相互之间的耦合可以是电性,机械或其它形式的耦合。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
综上所述，通过获取可穿戴电子设备的传感器采集的行为数据，对行为数据进行特征提取，获得行为特征，将行为特征输入已训练的预设模型，获得可穿戴电子设备所处的当前行为场景，该预设模型被预先训练，以根据输入的行为特征而输出行为场景的识别结果，然后将可穿戴电子设备显示的当前界面切换为当前行为场景对应的用户界面，从而实现根据传感器采集的行为数据对应的行为特征，自动识别可穿戴电子设备所处的当前行为场景，并将显示的当前界面切换为当前行为场景对应的用户界面，减少切换用户界面时用户的操作，提升用户体验。
请参考图10,其示出了本申请实施例提供的一种可穿戴电子设备的结构框图。该可穿戴电子设备100可以是智能手表、智能手环、智能眼镜等能够运行应用程序的电子设备。本申请中的可穿戴电子设备100可以包括一个或多个如下部件:处理器110、存储器120以及一个或多个应用程序,其中一个或多个应用程序可以被存储在存储器120中并被配置为由一个或多个处理器110执行,一个或多个程序配置用于执行如前述方法实施例所描述的方法。
处理器110可以包括一个或者多个处理核。处理器110利用各种接口和线路连接整个可穿戴电子设备100内的各个部分,通过运行或执行存储在存储器120内的指令、程序、代码集或指令集,以及调用存储在存储器120内的数据,执行可穿戴电子设备100的各种功能和处理数据。可选地,处理器110可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器110可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器110中,单独通过一块通信芯片进行实现。
存储器120可以包括随机存储器（Random Access Memory，RAM），也可以包括只读存储器（Read-Only Memory，ROM）。存储器120可用于存储指令、程序、代码、代码集或指令集。存储器120可包括存储程序区和存储数据区，其中，存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令（比如触控功能、声音播放功能、图像播放功能等）、用于实现上述各个方法实施例的指令等。存储数据区还可以存储可穿戴电子设备100在使用中所创建的数据（比如电话本、音视频数据、聊天记录数据）等。
请参考图11,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读介质800中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。
计算机可读存储介质800可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质800包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质800具有执行上述方法中的任何方法步骤的程序代码810的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码810可以例如以适当形式进行压缩。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (20)

  1. 一种界面切换方法,其特征在于,应用于可穿戴电子设备,所述可穿戴电子设备包括用于采集行为数据的传感器,所述方法包括:
    获取所述传感器采集的行为数据;
    对所述行为数据进行特征提取,获得行为特征;
    将所述行为特征输入已训练的预设模型,获得所述可穿戴电子设备所处的当前行为场景,所述预设模型被预先训练,以根据输入的行为特征而输出行为场景的识别结果;
    将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
  2. 根据权利要求1所述的方法,其特征在于,所述将所述行为特征输入已训练的预设模型,包括:
    对所述行为特征进行预处理,获得预处理后的行为特征;
    将所述预处理后的行为特征输入已训练的预设模型。
  3. 根据权利要求2所述的方法,其特征在于,所述对所述行为特征进行预处理,包括:
    对所述行为特征进行特征清洗以及特征挖掘。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,在将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面之后,所述方法还包括:
    当检测到切换操作时,将所述可穿戴电子设备显示的所述用户界面切换为切换操作对应的目标用户界面。
  5. 根据权利要求4所述的方法,其特征在于,在将所述可穿戴电子设备显示的所述用户界面切换为目标行为场景对应的目标用户界面之后,所述方法还包括:
    如果所述切换操作为预设时长内检测到的，并且目标用户界面与目标行为场景对应，将标注有所述目标行为场景的所述行为特征，输入所述预设模型，对所述预设模型进行校正训练。
  6. 根据权利要求4或5所述的方法,其特征在于,在当检测到切换操作时,根据所述切换操作,将所述可穿戴电子设备显示的所述用户界面切换为切换操作对应的目标用户界面之前,所述方法还包括:
    获取所述可穿戴电子设备的晃动轨迹;
    如果所述晃动轨迹满足预设轨迹条件,确定检测到切换操作。
  7. 根据权利要求6所述的方法,其特征在于,所述将所述可穿戴电子设备显示的所述用户界面切换为切换操作对应的目标用户界面包括:
    获取所述晃动轨迹对应的目标用户界面;
    将所述可穿戴电子设备显示的所述用户界面切换为所述目标用户界面。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,在所述将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面之前,所述方法还包括:
    获取行为场景与用户界面的对应关系;
    所述将所述可穿戴电子设备显示的当前界面切换为所述行为场景对应的用户界面,包括:
    根据所述对应关系,确定所述当前行为场景对应的用户界面;
    将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
  9. 根据权利要求8所述的方法,其特征在于,所述获取行为场景与用户界面的对应关系,包括:
    显示设置界面,所述设置界面用于设置行为场景与用户界面的对应关系;
    根据于所述设置界面中检测到的设置操作,设置行为场景与用户界面的对应关系,并将所述对应关系进行存储。
  10. 根据权利要求9所述的方法,其特征在于,所述显示设置界面,包括:
    获取所述可穿戴电子设备中当前存在的多种样式的用户界面,以及所述预设模型可识别的多种行为场景;
    显示包括多个第一选项以及多个第二选项的设置界面,所述第一选项与所述多种样式的用户界面一一对应,所述第二选项与所述多种行为场景一一对应。
  11. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    获取场景增添数据,所述场景增添数据包括新的行为场景及其对应的行为特征;
    根据新的行为场景及其对应的行为特征,对所述预设模型进行更新。
  12. 根据权利要求11所述的方法,其特征在于,在所述根据新的行为场景及其对应的行为特征,对所述预设模型进行更新之后,所述方法还包括:
    根据检测到的更新操作,将所述新的行为场景与所述多种样式的用户界面中任一用户界面关联后,对所述对应关系进行更新。
  13. 根据权利要求8所述的方法,其特征在于,所述获取行为场景与用户界面的对应关系,包括:
    接收服务器发送的行为场景与用户界面的对应关系,所述对应关系为移动终端根据检测到的对行为场景与用户界面的关联操作,生成所述对应关系后发送至所述服务器,其中,所述移动终端与所述可穿戴电子设备关联。
  14. 根据权利要求1-13任一项所述的方法,其特征在于,所述预设模型通过如下步骤训练得到:
    获取训练数据集合,所述训练数据集合包括被标注有行为场景的样本行为特征;
    将所述训练数据集合输入神经网络,对所述神经网络进行训练,获得已训练的所述预设模型,所述预设模型能够根据行为特征确定该行为特征对应的行为场景。
  15. 根据权利要求1-14任一项所述的方法,其特征在于,在所述将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面之前,所述方法还包括:
    确定所述可穿戴电子设备中是否存在所述当前行为场景对应的用户界面;
    如果存在所述当前行为场景对应的用户界面,将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
  16. 根据权利要求15所述的方法,其特征在于,在所述确定所述可穿戴电子设备中是否存在所述当前行为场景对应的用户界面之后,所述方法还包括:
    如果不存在所述当前行为场景对应的用户界面,则获取所述可穿戴电子设备中存在的所有用户界面中每个用户界面的使用频率;
    根据所述每个用户界面的使用频率,将所述可穿戴电子设备显示的当前界面切换为使用频率最高的用户界面。
  17. 根据权利要求1-16任一项所述的方法,其特征在于,所述行为数据包括:定位数据、运动数据、生物特征数据、环境数据以及时间数据中的至少一种;
    所述用户界面包括表盘界面、主屏幕界面、锁屏界面、或者应用界面。
  18. 一种界面切换装置,其特征在于,应用于可穿戴电子设备,所述可穿戴电子设备包括用于采集行为数据的传感器,所述装置包括:数据获取模块、特征获取模块、场景识别模块以及界面切换模块,其中,
    所述数据获取模块用于获取所述传感器采集的行为数据;
    所述特征获取模块用于对所述行为数据进行特征提取,获得行为特征;
    所述场景识别模块用于将所述行为特征输入已训练的预设模型,获得所述可穿戴电子设备所处的当前行为场景,所述预设模型被预先训练,以根据输入的行为特征而输出行为场景的识别结果;
    所述界面切换模块用于将所述可穿戴电子设备显示的当前界面切换为所述当前行为场景对应的用户界面。
  19. 一种可穿戴电子设备,其特征在于,包括:
    一个或多个处理器;
    存储器;
    一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于执行如权利要求1-17任一项所述的方法。
  20. 一种计算机可读取存储介质,其特征在于,所述计算机可读取存储介质中存储有程序代码,所述程序代码可被处理器调用执行如权利要求1-17任一项所述的方法。
PCT/CN2019/114076 2019-10-29 2019-10-29 界面切换方法、装置、可穿戴电子设备及存储介质 WO2021081768A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980099239.XA CN114223139B (zh) 2019-10-29 2019-10-29 界面切换方法、装置、可穿戴电子设备及存储介质
PCT/CN2019/114076 WO2021081768A1 (zh) 2019-10-29 2019-10-29 界面切换方法、装置、可穿戴电子设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/114076 WO2021081768A1 (zh) 2019-10-29 2019-10-29 界面切换方法、装置、可穿戴电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021081768A1 true WO2021081768A1 (zh) 2021-05-06

Family

ID=75714716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/114076 WO2021081768A1 (zh) 2019-10-29 2019-10-29 界面切换方法、装置、可穿戴电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN114223139B (zh)
WO (1) WO2021081768A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113960911A (zh) * 2021-11-02 2022-01-21 珠海读书郎软件科技有限公司 一种运动手表的表盘自动生成和切换的系统及方法
CN116173484A (zh) * 2023-03-03 2023-05-30 乐渊网络科技(上海)有限公司 运动数据的处理方法、装置及电子设备

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN114675740A (zh) * 2022-03-29 2022-06-28 西安歌尔泰克电子科技有限公司 一种腕带设备的表盘切换方法、装置、系统及存储介质

Citations (7)

Publication number Priority date Publication date Assignee Title
WO2015037804A1 (en) * 2013-09-11 2015-03-19 Lg Electronics Inc. Wearable computing device and user interface method
CN106598222A (zh) * 2016-11-14 2017-04-26 上海斐讯数据通信技术有限公司 一种场景模式的切换方法和系统
CN107422944A (zh) * 2017-06-09 2017-12-01 广东乐心医疗电子股份有限公司 一种自动调整菜单显示模式的方法与装置以及可穿戴设备
CN108703760A (zh) * 2018-06-15 2018-10-26 安徽中科智链信息科技有限公司 基于九轴传感器的人体运动姿态识别系统及方法
CN108764059A (zh) * 2018-05-04 2018-11-06 南京邮电大学 一种基于神经网络的人体行为识别方法及系统
CN108831526A (zh) * 2018-05-21 2018-11-16 四川斐讯信息技术有限公司 一种具有识别功能的智能可穿戴运动设备及其识别方法
CN110010224A (zh) * 2019-03-01 2019-07-12 出门问问信息科技有限公司 用户运动数据处理方法、装置、可穿戴设备及存储介质

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN107817891B (zh) * 2017-11-13 2020-01-14 Oppo广东移动通信有限公司 屏幕控制方法、装置、设备及存储介质
CN110134316B (zh) * 2019-04-17 2021-12-24 华为技术有限公司 模型训练方法、情绪识别方法及相关装置和设备
CN110334497B (zh) * 2019-06-28 2021-10-26 Oppo广东移动通信有限公司 显示界面的切换方法和穿戴式电子设备、存储介质


Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113960911A (zh) * 2021-11-02 2022-01-21 珠海读书郎软件科技有限公司 一种运动手表的表盘自动生成和切换的系统及方法
CN113960911B (zh) * 2021-11-02 2022-09-20 珠海读书郎软件科技有限公司 一种运动手表的表盘自动生成和切换的系统及方法
CN116173484A (zh) * 2023-03-03 2023-05-30 乐渊网络科技(上海)有限公司 运动数据的处理方法、装置及电子设备

Also Published As

Publication number Publication date
CN114223139A (zh) 2022-03-22
CN114223139B (zh) 2023-11-24

Similar Documents

Publication Publication Date Title
WO2021081768A1 (zh) 界面切换方法、装置、可穿戴电子设备及存储介质
CN110765939B (zh) 身份识别方法、装置、移动终端及存储介质
CN110135497B (zh) 模型训练的方法、面部动作单元强度估计的方法及装置
CN104065928A (zh) 一种行为模式统计装置与方法
CN104616002A (zh) 用于年龄段判断的面部识别设备
CN111144344B (zh) 人物年龄的确定方法、装置、设备及存储介质
CN103942243A (zh) 显示设备以及使用该显示设备提供消费者构建信息的方法
CN111027507A (zh) 基于视频数据识别的训练数据集生成方法及装置
CN111524513A (zh) 一种可穿戴设备及其语音传输的控制方法、装置及介质
CN113128368A (zh) 一种人物交互关系的检测方法、装置及系统
CN111967770A (zh) 基于大数据的问卷调查数据处理方法、装置及存储介质
JP2013157984A (ja) Ui提供方法およびそれを適用した映像受信装置
CN113111782A (zh) 基于显著对象检测的视频监控方法及装置
CN114513694B (zh) 评分确定方法、装置、电子设备和存储介质
CN112580472A (zh) 一种快速轻量的人脸识别方法、装置、机器可读介质及设备
WO2021147473A1 (zh) 一种模型训练方法、内容生成方法以及相关装置
CN106155707B (zh) 信息处理方法及电子设备
CN112906599A (zh) 一种基于步态的人员身份识别方法、装置及电子设备
CN110716632A (zh) 一种电池电量管理方法及智能终端
CN107832690B (zh) 人脸识别的方法及相关产品
CN114005174A (zh) 工作状态的确定方法、装置、电子设备及存储介质
US20220101871A1 (en) Live streaming control method and apparatus, live streaming device, and storage medium
CN111797127B (zh) 时序数据分割方法、装置、存储介质及电子设备
CN110556099B (zh) 一种命令词控制方法及设备
CN112580543A (zh) 行为识别方法、系统及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19950596

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19950596

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/10/2022)
