CN111371955A - Response method, mobile terminal and computer storage medium - Google Patents

Response method, mobile terminal and computer storage medium

Info

Publication number
CN111371955A
Authority
CN
China
Prior art keywords
user
state
information
scene type
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010364445.3A
Other languages
Chinese (zh)
Inventor
张李平
Current Assignee
Shenzhen Microphone Holdings Co Ltd
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Microphone Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Microphone Holdings Co Ltd
Publication of CN111371955A
Legal status: Pending

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04M — TELEPHONIC COMMUNICATION
    • H04M 1/00 — Substation equipment, e.g. for use by subscribers
    • H04M 1/72 — Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 — User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 — User interfaces with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 — adapting the functionality according to context-related or environment-related conditions
    • H04M 1/72457 — adapting the functionality according to geographic location
    • H04M 1/72466 — User interfaces with selection means, e.g. keys, having functions defined by the mode or the status of the device
    • H04M 1/72484 — User interfaces wherein functions are triggered by incoming communication events

Abstract

The invention discloses a response method, a mobile terminal, and a computer storage medium. The response method is applied to a mobile terminal and comprises the following steps: acquiring a scene type and/or a user state and/or a user habit; and setting the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit. Because the mobile terminal sets its working mode or state automatically in this way, an adaptive response of the working mode or state to the scene type and/or the user state and/or the user habit is achieved, and the user experience is improved.

Description

Response method, mobile terminal and computer storage medium
Technical Field
The present invention relates to the field of terminals, and in particular, to a response method, a mobile terminal, and a computer storage medium.
Background
With the rapid development of mobile communication technology and the popularization of mobile terminals, mobile terminals such as mobile phones have become an indispensable part of people's lives. A user can adjust the contextual model (profile) of the mobile terminal for different usage environments to meet the requirements of different usage scenes. In the related art, the user usually has to switch the profile of the mobile terminal manually, and in advance, for each usage scene. Moreover, within one profile the mobile terminal responds to every communication event in the same way, regardless of the scene it is in or the type of communication event it receives. For example, before a meeting, a user manually switches the mobile phone to a conference mode so that received communication events such as incoming calls and short messages are signalled by vibration. If the profile is not switched in time, or if information such as the scene and the communication event type is not taken into account, the mobile terminal may respond inappropriately when a communication event arrives, which degrades the user experience.
Disclosure of Invention
The invention aims to provide a response method, a mobile terminal, and a computer storage medium that adaptively set the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit, thereby improving the user experience.
To achieve this aim, the technical solution of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a response method, applied to a mobile terminal, including:
acquiring a scene type and/or a user state and/or a user habit;
and setting the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit.
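The two steps of the first aspect can be sketched as follows. This is a minimal illustration only; all scene names, state names, and mode values are assumptions chosen for the example, not values prescribed by the patent.

```python
# Minimal sketch of the two-step response method: (1) acquire the scene type,
# user state, and/or user habit; (2) set the terminal's working mode from them.
# All concrete scene/state/mode names below are illustrative assumptions.

def acquire_context(sensors: dict) -> dict:
    """Step 1: gather scene type, user state, and user habit (any may be None)."""
    return {
        "scene_type": sensors.get("scene_type"),
        "user_state": sensors.get("user_state"),
        "user_habit": sensors.get("user_habit"),
    }

def set_working_mode(context: dict) -> str:
    """Step 2: choose a working mode from whichever context items are available."""
    if context.get("user_habit"):
        return context["user_habit"]   # a known habit takes priority
    if context.get("scene_type") == "conference_room":
        return "vibrate"
    if context.get("user_state") == "sleeping":
        return "silent"
    return "ring"                      # default profile when nothing matches
```

Usage: `set_working_mode(acquire_context({"scene_type": "conference_room"}))` returns `"vibrate"`; the "and/or" combinations of the claim correspond to the context dictionary carrying any subset of the three keys.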
As an embodiment, the acquiring the scene type and/or the user state includes:
acquiring environment information and/or displayed multimedia information and/or biological characteristic information of a user, wherein the environment information comprises at least one of position information, spatial image information and spatial sound information;
determining a scene type according to position information and/or spatial image information and/or multimedia information and/or biological feature information of the user in the environment information;
and determining the user state according to the spatial sound information and/or the spatial image information and/or the multimedia information and/or the biological characteristic information of the user in the environment information.
As an embodiment, the determining the scene type according to the position information and/or the spatial image information and/or the multimedia information and/or the biometric information of the user in the environment information includes:
and inquiring the corresponding relation between the set position information and the scene type according to the position information in the environment information, and acquiring the scene type corresponding to the position information in the environment information.
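The query of a preset correspondence between position information and scene type described above can be sketched as a simple lookup table; the entries are illustrative assumptions (the patent's own examples, park A and cell B1, are reused).

```python
# Preset correspondence between position information and scene type.
# The table entries are illustrative assumptions based on the examples
# given later in the description (park A -> outdoor, cell B1 -> indoor).
SCENE_BY_LOCATION = {
    "park A": "outdoor",
    "cell B1": "indoor",
}

def scene_type_from_location(location: str, default: str = "unknown") -> str:
    """Return the scene type mapped to the given position information."""
    return SCENE_BY_LOCATION.get(location, default)
```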
As an embodiment, the determining the user state according to the spatial sound information and/or the spatial image information and/or the multimedia information and/or the biometric information of the user in the environment information includes:
identifying spatial sound information in the environment information and/or biological characteristic information of the user, and determining whether the user state is a man-machine separation state according to an obtained identification result; alternatively, the first and second electrodes may be,
and acquiring the number and the volume of different timbres according to the spatial sound information in the environment information, and determining whether the user state is a party state or not according to the number and the volume of the different timbres.
As one embodiment, the determining the user state according to the spatial sound information and/or the spatial image information and/or the multimedia information and/or the biometric information of the user in the environment information includes:
and performing expression recognition on the image of the user face, and determining the user state according to the obtained expression recognition result.
As an embodiment, the obtaining the scene type includes:
acquiring a scene and/or a user state set by a user;
and determining the scene type according to the scene set by the user and/or the user state.
As an embodiment, the setting of the working mode of the mobile terminal according to the scene type and/or the user state and/or the user habit includes:
and setting at least one of display parameters, sound parameters, electric quantity modes and function control of the mobile terminal according to the scene type and/or the user state and/or the user habit.
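The four kinds of settings named above (display parameters, sound parameters, power mode, function control) can be grouped per scene type as sketched below; the field names and concrete values are assumptions for illustration.

```python
# Per-scene settings covering the four categories named in the embodiment:
# display parameters, sound parameters, power ("electric quantity") mode,
# and function control. Field names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TerminalSettings:
    brightness: float    # display parameter, 0.0 - 1.0
    ring_volume: float   # sound parameter, 0.0 - 1.0
    power_mode: str      # e.g. "normal" or "super_saving"
    nfc_enabled: bool    # example of a controllable function

SETTINGS_BY_SCENE = {
    "sleep": TerminalSettings(brightness=0.1, ring_volume=0.0,
                              power_mode="super_saving", nfc_enabled=False),
    "shopping": TerminalSettings(brightness=0.8, ring_volume=0.7,
                                 power_mode="normal", nfc_enabled=True),
}
```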
As one embodiment, the setting of the operating mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit includes:
and responding to a preset event according to the scene type and/or the user state and/or the user habit.
As an embodiment, the responding to the preset event according to the scene type and/or the user state and/or the user habit includes:
when the preset event is a first preset event, inquiring a first response mode according to the scene type, the user state and the first preset event, and responding to the preset event according to the first response mode;
when the preset event is a second preset event, inquiring a second response mode according to the scene type, the user state and the second preset event, and responding to the preset event according to the second response mode;
wherein the first preset event is different from the second preset event.
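The per-event query described above amounts to a lookup keyed by (scene type, user state, preset event): different preset events under the same scene and state can map to different response modes. The table entries are illustrative assumptions.

```python
# Querying a response mode keyed by (scene type, user state, preset event),
# per the embodiment above: a first preset event yields a first response
# mode, a second preset event a second one. Entries are assumptions.
RESPONSE_TABLE = {
    ("conference_room", "meeting", "incoming_call"): "vibrate",
    ("conference_room", "meeting", "short_message"): "silent",
    ("bedroom", "sleeping", "incoming_call"): "silent",
}

def query_response_mode(scene: str, state: str, event: str) -> str:
    """Return the response mode for this scene/state/event combination."""
    return RESPONSE_TABLE.get((scene, state, event), "ring")
```

Note how the same scene and state ("conference_room", "meeting") produce different response modes for the two event types, which is exactly the first-event/second-event distinction of the claim.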
As an embodiment, the responding to the preset event according to the scene type and/or the user state and/or the user habit includes:
acquiring the grade of a preset event according to the content of the preset event;
when the grade of the preset event is a first preset grade, inquiring a third response mode according to the scene type, the user state, the preset event and the first preset grade, and responding to the preset event according to the third response mode;
when the grade of the preset event is a second preset grade, inquiring a fourth response mode according to the scene type, the user state, the preset event and the second preset grade, and responding to the preset event according to the fourth response mode;
wherein the third response mode is different from the fourth response mode.
As an embodiment, before the obtaining the scene type and/or the user state and/or the user habit, the method further includes:
detecting whether the intelligent contextual model is in an open state;
and when the intelligent contextual model is detected to be in the opening state, executing the step of acquiring the scene type and/or the user state and/or the user habit.
As an embodiment, the method further comprises:
acquiring the state of equipment associated with the mobile terminal;
the setting of the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit includes:
and setting the working mode or state of the mobile terminal or the equipment according to the scene type and/or the user state and/or the user habit and/or the state of the equipment associated with the mobile terminal.
As an embodiment, the method further comprises:
receiving an adjustment instruction input by a user;
and adjusting the working mode or state of the mobile terminal according to the adjustment instruction.
In a second aspect, an embodiment of the present invention provides a mobile terminal, where the mobile terminal includes a processor and a storage device for storing a program; when the program is executed by the processor, the processor is caused to implement the response method of the first aspect.
In a third aspect, an embodiment of the present invention provides a computer storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the response method according to the first aspect.
The response method provided by the embodiments of the invention, which the mobile terminal and the computer storage medium likewise embody, is applied to a mobile terminal and comprises the following steps: acquiring a scene type and/or a user state and/or a user habit; and setting the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit. The mobile terminal thus sets its working mode or state automatically, achieving an adaptive response of the working mode or state to the scene type and/or the user state and/or the user habit and improving the user experience.
Drawings
Fig. 1 is a schematic flow chart of a response method according to an embodiment of the present invention;
FIG. 2 is a first schematic diagram of a notification bar of a mobile phone;
fig. 3 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 4 is a second schematic view of a notification bar of a mobile phone;
fig. 5 is a schematic flowchart of a response method according to an embodiment of the present invention.
Detailed Description
The technical solution of the invention is further elaborated below with reference to the drawings and the specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, a response method provided for an embodiment of the present invention includes the following steps:
step S101: acquiring a scene type and/or a user state and/or a user habit;
it should be noted that the response method may be applied to a mobile terminal, where the mobile terminal may specifically be an electronic device such as a smart phone, a personal digital assistant, a tablet computer, and the like, and an intelligent contextual model is set in the mobile terminal, and when the intelligent contextual model is in an on state, the mobile terminal may set a working mode or state of the mobile terminal according to a scene type and/or a user state and/or a user habit, so as to perform contextual feedback according to the working mode or state set by the mobile terminal, thereby implementing adaptive response according to the scene type and/or the user state and/or the user habit and the working mode or state of the mobile terminal. In an embodiment, in step S101, before the acquiring the scene type and/or the user state and/or the user habit, the method may further include: detecting whether the intelligent contextual model is in an open state; and when the intelligent contextual model is detected to be in the opening state, executing the step of acquiring the scene type and/or the user state and/or the user habit. The mobile terminal is provided with a switch button of the intelligent contextual model, and the intelligent contextual model can be turned on or off by switching the switch button, so that the mobile terminal can detect whether the intelligent contextual model is in an on state or not. Here, when detecting that the intelligent contextual model is turned on, the mobile terminal automatically acquires the scene type and/or the user state and/or the user habit to implement the response method provided by the embodiment, that is, the mobile terminal can respond to the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit. In practical applications, taking a mobile terminal as a mobile phone as an example, as shown in fig. 
2, the switch button of the intelligent contextual model may be disposed in a notification bar of the mobile phone, and the mobile terminal may be switched between the intelligent contextual model and a normal contextual model by performing a switch operation on the switch button, where the normal contextual model includes ringing, vibrating, muting, not disturbing, and the like. Therefore, whether the intelligent contextual model is in the opening state or not is detected to execute corresponding operation, the intelligent contextual model is convenient for a user to use flexibly, and the user use experience is further improved.
It can be understood that the scene type refers to the type of the current scene in which the mobile terminal is located. In practical applications, the various scenes in daily life may be classified according to different preset rules. For example, scenes may be divided into indoor and outdoor; they may be divided by function into conference room, office, classroom, road, bedroom, bus, market, bar, cinema, and so on; or they may be divided according to the user's configuration of phone modes into conference mode, sleep mode, sport mode, entertainment mode, and shopping mode. The user state refers to the current state of the user and may include the user's expression state, body posture, man-machine separation state, activity participation state, entertainment state, and the like. The expression state represents the state shown by the facial expression, including but not limited to a thinking state and a sleeping state; the body posture represents the action performed by the body, including but not limited to walking, running, and sitting; the man-machine separation state means that the mobile terminal and the user are apart, i.e. the user is not near the mobile terminal; the activity participation state corresponds to activities the user takes part in, such as attending a party or a meeting; and the entertainment state corresponds to the user's entertainment use of the mobile terminal, including using WeChat, playing games, taking pictures, and the like.
User habits refer to the things a user often does under a given scene type. The terminal statistically analyses the user's usage data, derives from it the user's habitual operations, habitual settings, or habitual terminal working modes or states under different scene types and/or user states, and, when the same scene type occurs again, automatically executes or applies them according to the habit data. Optionally, besides setting the terminal working mode or state based on the scene type and the user habit, it may also be set based on the user state to improve the accuracy of the setting; alternatively, it may be set based on the user state and the user habit. For example, if a user habitually turns on the vibration-plus-sound reminder in sport mode, then when the user is in sport mode or in a scene type such as a sports field, the terminal automatically sets the reminder mode to vibration plus sound according to the habit data. Optionally, even though the user is in sport mode or at a sports field, if the sensors detect that the user state is not yet exercising, the reminder mode is set to vibration plus sound only once the user starts to exercise. Optionally, if the user habitually turns on the vibration-plus-sound reminder while running, then whenever the user state is running the terminal automatically sets the reminder mode to vibration plus sound according to the habit data. As another example, if a user who takes a car usually first connects a Bluetooth player and plays music with a dynamic sound effect, then when the user is in or driving a car, Bluetooth is automatically turned on to play music, or the music is played through the phone, and the sound effect, i.e. the sound parameter, is set to dynamic; optionally, if the user state is driving, the music is played through the in-car player, and if the user state is passenger, the music is played only through the phone. As a further example, if the user habitually turns on the super power-saving mode or turns down the screen brightness in sleep mode, then the next time sleep mode is turned on the terminal is automatically set to super power-saving mode or the screen brightness is reduced.
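The habit-learning step described above can be sketched as counting which setting the user most often chose for each scene type and user state, then replaying the winner. This is a toy sketch; the log format and the setting names are assumptions.

```python
# Learning a user habit as the most frequent setting observed for each
# (scene type, user state) pair in past usage data, per the description
# above. The log format and setting names are illustrative assumptions.
from collections import Counter

def learn_habits(log: list) -> dict:
    """log: (scene_type, user_state, setting) tuples from past usage."""
    counts = {}
    for scene, state, setting in log:
        counts.setdefault((scene, state), Counter())[setting] += 1  # tally each choice
    # keep the most frequently chosen setting per (scene, state) pair
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

habits = learn_habits([
    ("sports_field", "running", "vibrate+sound"),
    ("sports_field", "running", "vibrate+sound"),
    ("sports_field", "running", "silent"),
])
# habits[("sports_field", "running")] is now the user's habitual reminder mode
```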
Here, owing to the diversity of scene types and user states, the scene type and the user state may each be acquired from different information. In an embodiment, the acquiring of the scene type and/or the user state includes: acquiring environment information, where the environment information includes at least one of position information, spatial image information, and spatial sound information; determining the scene type according to the position information and/or the spatial image information in the environment information; and determining the user state according to the spatial sound information and/or the spatial image information in the environment information. It can be understood that the position information refers to information such as the current location of the mobile terminal and may directly include information that characterizes different scene types, such as a road name, a market name, or a residential-cell name. When the user has turned on the positioning or location function of the mobile terminal, the terminal can acquire the position information through a global positioning system or a navigation client. Accordingly, the scene type can be determined from the acquired position information. For example, if the position information acquired by the mobile terminal is park A, the corresponding scene type can be determined to be outdoor; if the position information is residential cell B1, the corresponding scene type can be determined to be indoor. It should be noted that, to save cost and reduce the power consumption of the mobile terminal, the environment information of the current location may be acquired at a set interval, for example every five or ten minutes.
Here, the mobile terminal may have an image acquisition device, and spatial image information of the environment in which the mobile terminal is located can be collected by activating this device. In this embodiment, when it is detected that the intelligent contextual model is turned on, the image acquisition device of the mobile terminal is started to collect spatial image information of the surrounding environment, and the processor of the mobile terminal obtains the collected information. Taking a mobile phone with front and rear cameras as an example, an image of the environment on the front side of the phone can be collected through the front camera, and an image of the environment on the back can be collected through the rear camera. Since the spatial image information captures the scene of the environment in which the mobile terminal is located, and the scene contains information that characterizes it, such as people, buildings, furnishings, or decorations, the scene type can be determined from the spatial image information in the current environment information. For example, if the spatial image information shows a closed room with multiple people seated around a table, all facing one end of the table, the current scene type may be recognized as a conference room. Likewise, if the spatial image information shows some people sitting on seats, other people standing and holding handrails, and moving vehicles outside the windows, the current scene type may be the inside of a bus.
In addition, since the scene captured in the spatial image information may also characterize the user state, the user state can be determined from the spatial image information in the current environment information. For example, if the spatial image information shows some people sitting on seats, others standing and holding handrails, and moving vehicles outside the windows, the current user state may be sitting. Likewise, if the spatial image information shows multiple people in a closed room performing the same body movement at the same time, the current user state may be dancing.
It should be noted that, since the scene type may not be determined accurately from the position information or the spatial image information alone in some cases, the two need to be combined for an accurate judgement. For example, if the position information shows residential cell C, it cannot be determined whether the scene type is indoor or outdoor; if the corresponding spatial image information shows a closed room, the scene type is determined to be indoor, and if it shows objects such as trees and pavilions, the scene type is determined to be outdoor. Similarly, if the spatial image information shows people walking around and desktop computers on several desks, the scene type can be determined to be an office; if it shows only one person speaking towards a projection screen, the scene type is determined to be a conference room. In this way the scene type can be judged accurately from the position information and/or the spatial image information in the environment information, further improving the accuracy of the response.
Here, the mobile terminal may have a sound collection device, and spatial sound information of the environment in which the mobile terminal is located can be collected by activating this device. The spatial sound information covers the sounds within the human hearing range that are present in the environment, including but not limited to timbre, frequency, and volume. In this embodiment, when it is detected that the intelligent contextual model is turned on, the sound collection device of the mobile terminal is started to collect spatial sound information of the surrounding environment, and the processor of the mobile terminal obtains the collected information. The spatial sound information may include sound emitted by the mobile terminal itself through a running multimedia application, as well as sound emitted by people or devices around the terminal. Taking a mobile phone as an example, the spatial sound information of the current environment can be collected through the phone's microphone. Since the spatial sound information records the sounds of the environment, which may include sounds made by the terminal's user as well as by other people, and since these sounds can characterize the user's state, the current user state can be determined from the spatial sound information in the current environment information. For example, if the spatial sound information shows that the current environment is very quiet with only breathing sounds, the user may be in a sleeping state; if it shows that the mobile terminal is playing songs, the user may be in a music-listening state.
It should be noted that, since the user state may not be accurately determined based on the spatial image information or the spatial sound information alone in some cases, the spatial image information and the spatial sound information need to be combined for accurate determination. Taking the mobile terminal as a mobile phone with front and rear cameras as an example, if it is recognized that the current environment of the mobile phone is a closed room according to the spatial image information acquired by the front and rear cameras of the mobile phone, and multiple people sit around a desk and all use pens to record on a notebook, it indicates that the user status may be a meeting, however, the status of the user in the meeting may be further divided into a listening status in the meeting, a discussion status in the meeting, an open conversation status in the meeting, and the like, where the listening status in the meeting refers to that only one speaker is speaking and the rest of the people are listening to the speech of the speaker, the discussion status in the meeting refers to one-to-one conversation communication, and the open conversation status in the meeting refers to one-to-many or many-to-many conversation communication. Therefore, the user-specific state may not be known from the aerial image information alone. However, when the cellular phone is located in a conference room, the spatial sound information collected by the microphone of the cellular phone includes not only the sound of the speaker but also the sound of conference participants such as a commentator and a questioner. 
Therefore, if the spatial sound information shows that only one person is speaking, the current user state is the listening state in the meeting; if it shows that two people are speaking, the current user state is the discussion state in the meeting; and if it shows that multiple people are speaking with one another, the current user state is the open conversation state in the meeting. In this way, the user state can be accurately judged through the spatial sound information and/or the spatial image information in the environment information, so that the response accuracy is further improved.
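The speaker-count rule described above can be sketched as follows. This is an illustrative sketch, not the patent's actual implementation; the function name and state labels are assumptions, and real use would require a separate speaker-diarization step to count distinct speakers in the spatial sound information.

```python
# Hypothetical sketch: map the number of distinct speakers detected in the
# spatial sound information to an in-meeting sub-state.
def classify_meeting_state(num_speakers: int) -> str:
    if num_speakers <= 1:
        return "listening"         # one speaker, the rest listen
    if num_speakers == 2:
        return "discussion"        # one-to-one conversational exchange
    return "open_conversation"     # one-to-many or many-to-many exchange

print(classify_meeting_state(3))   # open_conversation
```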
In an embodiment, determining the scene type according to the location information in the environment information may include: querying a preset correspondence between location information and scene types according to the location information in the environment information, and obtaining the scene type corresponding to that location information. It can be understood that, based on the daily use of the mobile terminal, the correspondence between location information and scene types can be established in advance, so that the corresponding scene type can be obtained by querying this correspondence with the acquired location information. For example, if the scene types are divided into two types, home and company, a correspondence can be established between "home" and the location of the user's residential area, and between "company" and the location of the workplace, so that the scene type can be obtained directly from the location information through the correspondence. It should be noted that the mobile terminal may also obtain its location information through a connected wireless network. If different locations in a company, such as offices and conference rooms, are each covered by their own wireless network, the mobile terminal automatically switches its access as it moves between coverage areas and obtains identification information, such as location information, of the accessed wireless network. By pre-establishing a correspondence between the location information of each wireless network and a scene type, the mobile terminal can obtain the corresponding scene type according to the location information of the wireless network it has accessed.
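The pre-established correspondence between location information and scene type amounts to a lookup table. A minimal sketch follows; all table entries (place names, network identifiers) are hypothetical examples, not values from the patent.

```python
# Hypothetical correspondence between location information (a place identifier
# or the identifier of the accessed wireless network) and a scene type.
SCENE_BY_LOCATION = {
    "residential_cell_A": "home",
    "office_building_B": "company",
    "wifi:office-2F": "office",
    "wifi:meeting-room-3": "conference_room",
}

def scene_type_for(location: str, default: str = "unknown") -> str:
    """Query the correspondence table; fall back when no entry matches."""
    return SCENE_BY_LOCATION.get(location, default)
```

As the text notes, switching Wi-Fi access points automatically updates the key used for the query, so the scene type follows the user through the building.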
It should be noted that the scene type and the user state may also be determined according to the position information, the spatial image information, and the spatial sound information in the current environment information. Therefore, by establishing the corresponding relation between the position information and the scene type, the corresponding scene type can be conveniently obtained according to the position information, and the response processing speed is improved.
In an embodiment, obtaining the scene type and the user state includes: acquiring the displayed multimedia information and/or the biometric information of the user; and determining the scene type and the user state according to the multimedia information and/or the biometric information of the user. Here, the multimedia information refers to the information displayed on the screen interface of the mobile terminal by a foreground application running on it, and the mobile terminal may obtain the currently displayed multimedia information by means such as a screen capture. The biometric information of the user may include physiological characteristic information, such as fingerprints, irises, and faces, as well as vital sign information, such as heartbeat and blood pressure. The mobile terminal may acquire the physiological characteristic information through components it is equipped with, such as a fingerprint sensor or a camera, and may acquire the vital sign information by receiving the user's vital sign information sent by an associated wearable device. The currently displayed multimedia information can represent the user's current environment under certain conditions; for example, when the multimedia information is a photographing interface that shows seawater around the photographed object, the current scene type can be determined to be outdoor, and further to be the seaside. Furthermore, the multimedia information may also characterize the application currently being used by the user, and thus the user state may also be determined from the multimedia information.
For example, when the currently displayed multimedia information is a game operation interface, determining that the current user state is game playing; when the currently displayed multimedia information is the WeChat friend circle interface, determining that the current user state is the WeChat use state; and when the multimedia information displayed currently is a photographing interface, determining that the current user state is photographing.
In addition, because the user's physiological characteristic information, such as fingerprints and irises, is unique, the current user state can be obtained by comparing the physiological characteristic information of the current user of the mobile terminal with the pre-stored physiological characteristic information of its owner. For example, when this comparison fails, the current user state is a man-machine separation state. The user's body posture can be represented by vital sign information such as heartbeat and blood pressure, so the current user state can be obtained by comparing the current heartbeat and blood pressure readings with the vital sign ranges corresponding to different body postures. For example, if the heartbeat range corresponding to the sitting state is 50 to 95 beats per minute, and the user's current heartbeat falls within this range, it can be determined that the current user state is sitting. It should be noted that, when the current user state is determined to be sleeping according to the user's vital sign information, the current scene type may be determined to be indoor, and further to be a bedroom.
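The range comparison above can be sketched as follows. The 50 to 95 beats-per-minute sitting range follows the example in the text; the other ranges are hypothetical, and the function deliberately returns every matching posture, since overlapping ranges are one reason the text later combines vital signs with other information.

```python
# Hypothetical pre-stored heartbeat ranges (beats/minute) per body posture.
POSTURE_HEARTBEAT_RANGES = {
    "sleeping": (40, 60),   # assumed range
    "sitting": (50, 95),    # range given in the example
    "running": (110, 180),  # assumed range
}

def postures_from_heartbeat(bpm: int) -> list:
    """Return every posture whose heartbeat range contains the reading."""
    return [posture for posture, (lo, hi) in POSTURE_HEARTBEAT_RANGES.items()
            if lo <= bpm <= hi]
```

A reading of 55 bpm matches both sleeping and sitting, so a single vital sign may be ambiguous on its own.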
It should be noted that, since the scene type and/or the user state may not be accurately determined based on the multimedia information or the biometric information of the user alone in some cases, the multimedia information and the biometric information of the user need to be combined to accurately determine the scene type and/or the user state. For example, taking a mobile terminal as a mobile phone as an example, when a game interface is currently displayed on a screen interface of the mobile phone, since a user may be watching a game video or playing a game at the time, that is, the current user state cannot be accurately determined only according to the currently displayed game interface, if vital sign information of the user, such as heartbeat and/or blood pressure, is in a stable state, it is indicated that the current user state is watching the game video; if the vital sign information of the user, such as heartbeat and/or blood pressure, is in a fluctuating state, the current user state is indicated as game playing. Here, the scene type and/or the user state may also be determined based on spatial sound information, spatial image information, the multimedia information, and biometric information of the user in the environment information. Therefore, the scene type and/or the user state can be accurately judged according to the multimedia information currently displayed by the mobile terminal and/or the biological characteristic information of the user, and the response accuracy is further improved.
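The game-interface example above, where vital-sign stability disambiguates watching from playing, can be sketched as follows. The fluctuation threshold and the state labels are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: the screen shows a game interface; decide between
# "watching a game video" and "playing the game" from heartbeat stability.
FLUCTUATION_THRESHOLD = 15  # assumed spread, in beats/minute

def user_state_from_game_screen(heartbeats: list) -> str:
    """A fluctuating heartbeat suggests playing; a stable one, watching."""
    spread = max(heartbeats) - min(heartbeats)
    if spread > FLUCTUATION_THRESHOLD:
        return "playing_game"
    return "watching_game_video"
```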
In one embodiment, the determining the user state according to the spatial sound information in the environmental information and/or the biometric information of the user includes:
and identifying the spatial sound information in the environment information and/or the biological characteristic information of the user, and determining whether the user state is a man-machine separation state according to the obtained identification result.
It can be understood that, since the voiceprint of each person is unique, the mobile terminal can identify the spatial sound information in the current environment information to determine whether the spatial sound information contains the stored voiceprint of the mobile terminal user, and if not, the current user state is the man-machine separation state. In addition, since the fingerprint, iris and other biometric information of the user are unique, the current user state can be known by comparing the fingerprint, iris and other information of the current user of the mobile terminal with the prestored fingerprint, iris and other information corresponding to the user of the mobile terminal. For example, when the comparison of the fingerprint or iris information of the current user of the mobile terminal fails, the current user state is the man-machine separation state. Here, the current user state may also be determined based on spatial sound information, spatial image information, the multimedia information, and biometric information of the user in the environment information. Therefore, the user state can be accurately judged through the spatial sound information and/or the biological characteristic information in the environmental information, so that the response accuracy is further improved.
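The man-machine separation check amounts to comparing freshly captured biometric samples against the enrolled owner templates. In the sketch below, matching is a hypothetical exact comparison standing in for real voiceprint or fingerprint matching, and the template values are made up.

```python
# Hypothetical enrolled templates for the mobile terminal's owner.
ENROLLED_TEMPLATES = {
    "voiceprint": "owner-vp-001",
    "fingerprint": "owner-fp-001",
}

def is_man_machine_separated(samples: dict) -> bool:
    """True when no captured sample matches an enrolled owner template."""
    return not any(ENROLLED_TEMPLATES.get(kind) == value
                   for kind, value in samples.items())
```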
In one embodiment, the determining the user state according to the spatial sound information in the environment information includes:
and acquiring the number and the volume of different timbres according to the spatial sound information in the environment information, and determining whether the user state is a party state or not according to the number and the volume of the different timbres.
It can be understood that the spatial sound information records the sounds in the environment where the mobile terminal is located, which may include only the sounds made by the user of the mobile terminal or also the sounds made by other people. By performing identification processing on the spatial sound information, information such as the number of different timbres it contains and the volume can be obtained. Because each person's timbre is different, the number of people in the environment where the mobile terminal is located can be known from the number of different timbres; when the number of different timbres exceeds a set number threshold and the volume also exceeds a set volume threshold, the current user state is a party state. For example, when more than 5 different timbres are identified in the spatial sound information and the volume is greater than 60 decibels, the current user state is a party state. In this way, the user state can be accurately judged through the spatial sound information in the environment information, so that the response accuracy is further improved.
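The party-state test reduces to two threshold comparisons, sketched below using the thresholds from the example in the text (more than 5 distinct timbres and volume above 60 decibels); extracting the timbre count from raw audio is assumed to happen elsewhere.

```python
# Thresholds taken from the example in the text.
TIMBRE_COUNT_THRESHOLD = 5
VOLUME_THRESHOLD_DB = 60

def is_party_state(num_timbres: int, volume_db: float) -> bool:
    """Party state requires both many distinct timbres and high volume."""
    return (num_timbres > TIMBRE_COUNT_THRESHOLD
            and volume_db > VOLUME_THRESHOLD_DB)
```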
In one embodiment, the spatial image information includes an image of a face of a user, and the determining the user state according to spatial sound information and/or spatial image information and/or multimedia information in the environment information and/or biometric information of the user includes:
and performing expression recognition on the image of the user face, and determining the user state according to the obtained expression recognition result.
It can be understood that, since the expression of the user's face can represent the emotional state of the user, such as a thinking state, a sleeping state, etc., the current user state can be determined according to the recognition of the image of the user's face. Here, the mobile terminal may pre-establish an expression recognition model, input images of the user's face with different expressions as models, and output corresponding expressions as models, so as to train the expression recognition model, so that the image of the user's face to be recognized may be directly input into the expression recognition model for recognition during subsequent recognition, thereby obtaining corresponding expressions, and further determining the user state. Therefore, expression recognition is carried out on the image of the face of the user, the state of the user can be accurately acquired, and the response accuracy is further improved.
In one embodiment, the obtaining the scene type includes: acquiring a scene and/or a user state set by a user; and determining the scene type according to the scene set by the user and/or the user state. It is understood that the scene may be a specific location, a functional mode, and the like. In some cases, the user may prefer to custom set the scene where the mobile terminal is located, specifically, one scene may be selected from the set scene list, and the type of the scene where the user is actually located may not be consistent with the scene required by the user state. For example, if the user sets the scene of the mobile phone to be a stadium and the user state is running, it may be determined that the corresponding scene type is outdoor. If the user sets the scene of the mobile phone to be a conference mode and the user state is a meeting, the corresponding scene type can be determined to be a conference room. If the user sets the scene of the mobile phone to be in a mute mode and the user state is sleeping, the corresponding scene type can be determined to be indoor. Therefore, the scene type is determined according to the scene set by the user and the user state, the scene type can be accurately determined, and the user experience is further improved.
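The combination of a user-set scene and a user state described above can be sketched as another lookup, following the stadium, conference-mode, and silent-mode examples; the rule entries and labels are illustrative assumptions.

```python
# Hypothetical rules mapping (user-set scene, user state) to a scene type,
# following the examples in the text.
SCENE_TYPE_RULES = {
    ("stadium", "running"): "outdoor",
    ("conference_mode", "in_meeting"): "conference_room",
    ("silent_mode", "sleeping"): "indoor",
}

def scene_type_from_settings(user_scene: str, user_state: str) -> str:
    return SCENE_TYPE_RULES.get((user_scene, user_state), "unknown")
```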
Step S102: and setting the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit.
It should be noted that setting the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit may include setting at least one of a display parameter, a sound parameter, a power mode, and a function control of the mobile terminal. The display parameters may include display content, display effect, display color, display mode, and the like; the sound parameters may include ringing, vibration, silence, flight mode, do-not-disturb mode, volume, vibration frequency, music tempo, and the like; the power mode may include a normal mode, a power-saving mode, a super power-saving mode, and the like; the function control may include switching of dual SIM cards, switching of Bluetooth mode, application preloading, turning the dark mode on or off, turning the sleep mode on or off, turning the night mode on or off, and the like. For example, if the user's scene type is home and the user state is watching a movie, the user's phone may be set to an entertainment mode; if the user's scene is a stadium and the user state is running, sports-related applications may be started on the phone and instant messaging applications closed; if the user habitually switches the phone to silent mode after eleven o'clock at night, the phone may switch to silent mode automatically once that time is reached; if the user's scene type is an office and the user is accustomed to setting the phone to vibration mode there, the phone may be set to vibration mode; if the phone is set to an alarm mode and the user is accustomed to napping between 13:00 and 14:00, the phone may be set to silent mode at 13:00; if the user is at home watching a video on the phone and is accustomed to watching with a loud sound, the output volume of the phone may be automatically increased; if the phone is set to an outdoor mode, the user is driving, and the user likes listening to songs while driving, the phone's Bluetooth and the vehicle's music player may be automatically turned on so that music stored on the phone is played through the vehicle; if the user likes listening to songs while running, the music application on the phone is automatically opened when running is detected.
In an embodiment, the setting of the operating mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit includes: and responding to a preset event according to the scene type and/or the user state and/or the user habit.
Specifically, after monitoring a preset event, the mobile terminal queries a corresponding response mode according to the scene type and/or the user state and/or the user habit acquired in step S101 and the preset event, and responds to the preset event according to the queried response mode, so as to provide corresponding contextual feedback to the user, where the scene type and/or the user state and a corresponding relationship between the preset event and the response mode are pre-stored in the mobile terminal. For example, taking the mobile terminal as a mobile phone as an example, assuming that the preset event is an incoming call, and the response mode is that the breathing lamp flickers, the mobile phone controls the breathing lamp to flicker, so as to send out a prompt feedback.
Here, the preset event may be set according to actual needs; it may specifically be an incoming call, a short message, an alarm clock, a schedule item, a notification push, and the like, and it may also be an event such as finding the phone or a media switch. Correspondingly, monitoring the preset event may mean receiving an incoming call, a short message, or a notification push message, or detecting that an alarm clock or a scheduled activity is triggered. Depending on the preset event and its response requirements, the response mode may specifically be breathing-light flashing, vibration, ringing plus vibration, ringing plus voice broadcast, vibration plus voice broadcast, and the like. It can be understood that the mobile terminal may provide a setting interface for the correspondence among the scene type and/or the user state, the preset event, and the response mode, on which the user may set or adjust this correspondence as needed. For example, when the scene type is outdoor, the user state is running, and the preset event is an incoming call, the corresponding response mode may be set to ringing; when the scene type is outdoor, the user state is sitting on a bus, and the preset event is an incoming call, the corresponding response mode may be set to vibration. It should be noted that the user may also input instructions into the setting interface by voice to set or adjust the correspondence among the scene type, the user state, the preset event, and the response mode.
It should be noted that, a preset event may be responded separately according to the scene type or the user state, for example, when the scene type is outdoor and the preset event is an incoming call, the corresponding response mode may be ringing; when the user state is sitting in the car and the preset event is an incoming call, the corresponding response mode can be vibration and the like.
In an embodiment, the responding to a preset event according to the scene type and/or the user state and/or the user habit includes:
when the preset event is a first preset event, inquiring a first response mode according to the scene type, the user state and the first preset event, and responding to the preset event according to the first response mode;
when the preset event is a second preset event, inquiring a second response mode according to the scene type, the user state and the second preset event, and responding to the preset event according to the second response mode;
wherein the first preset event is different from the second preset event.
It can be understood that the mobile terminal stores in advance a scene type, a user state, and a corresponding relationship between a preset event and a response mode, so that the corresponding response mode can be obtained by querying according to the scene type, the user state, and the preset event. The first preset event is different from the second preset event, and may be that the first preset event is an incoming call, the second preset event is a short message, the first preset event is a short message, the second preset event is an incoming call, or the first preset event is an alarm clock, the second preset event is notification push, and the like. For the same preset event, the corresponding response modes may be different due to different scene types and user states. For example, assuming that the first preset event is an incoming call, when the scene type is a conference room and the user state is a listening state, the corresponding response mode may be that the breathing lamp flickers; when the scene type is a conference room and the user state is a conference-opening discussion state, the corresponding response mode may be vibration; when the scene type is a conference room and the user state is a conference opening conversation state, the corresponding response mode may be ringing. In addition, when the first preset event is different from the second preset event, the first response mode and the second response mode may be the same or different. For example, assuming that the first preset event is an incoming call and the second preset event is a short message, if the scene types are conference rooms and the user states are listening states, the corresponding first response mode and the corresponding second response mode may both be flashing of a breathing light, or the first response mode may be flashing of a breathing light and the second response mode may be vibration. 
Therefore, for different preset events, the corresponding response mode is set according to the scene type and the user state, the flexibility of the response mode is improved, and the user experience is improved.
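The pre-stored correspondence among scene type, user state, preset event, and response mode described in this embodiment can be sketched as a keyed table; the entries below follow the conference-room examples in the text, and the label strings are illustrative.

```python
# Hypothetical pre-stored correspondence: (scene type, user state, preset
# event) -> response mode, following the conference-room examples.
RESPONSE_TABLE = {
    ("conference_room", "listening", "incoming_call"): "breathing_light",
    ("conference_room", "discussion", "incoming_call"): "vibration",
    ("conference_room", "open_conversation", "incoming_call"): "ring",
    ("conference_room", "listening", "short_message"): "breathing_light",
}

def response_mode(scene: str, state: str, event: str) -> str:
    """Query the table; fall back to a default mode when no entry matches."""
    return RESPONSE_TABLE.get((scene, state, event), "default")
```

Note that the same event (an incoming call) yields three different responses across the three meeting sub-states, which is the flexibility the paragraph above describes.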
In an embodiment, the responding to a preset event according to the scene type and/or the user state and/or the user habit includes:
acquiring the grade of a preset event according to the content of the preset event;
when the grade of the preset event is a first preset grade, inquiring a third response mode according to the scene type, the user state, the preset event and the first preset grade, and responding to the preset event according to the third response mode;
when the grade of the preset event is a second preset grade, inquiring a fourth response mode according to the scene type, the user state, the preset event and the second preset grade, and responding to the preset event according to the fourth response mode;
wherein the third response mode is different from the fourth response mode.
It can be understood that, for events of the same kind but of different importance, such as a call from an ordinary contact and a call from an important contact, responding in the same mode may fail to provide a special reminder and reduce the user experience. Therefore, different preset levels can be set in advance for events of different importance, that is, events with different content, and a correspondence among the scene type, the user state, the preset event, the preset level, and the response mode can be established. Take a mobile phone as the mobile terminal, an incoming call as the preset event, indoor as the scene type, and sleeping as the user state. Suppose a call from a family member is treated as a call from an important contact and a call from a friend as a call from an ordinary contact, that is, the preset level of a family call is set to the first preset level and that of a friend call to the second preset level; the third response mode is ringing with gradually increasing volume, and the fourth response mode is flashing the breathing light. When the user is sleeping indoors and the phone receives a call, if the phone determines from the calling number that it is a family call, the phone rings and the ringing gradually grows louder; if the phone determines from the calling number that it is a friend call, the phone flashes the breathing light. It should be noted that, for different kinds of events, such as calls and short messages, different levels can also be set and responded to in the corresponding response modes.
Therefore, the corresponding response mode is obtained according to the scene type, the user state, the preset event and the preset grade corresponding to the preset event, so that the events with different preset grades are responded through different response modes, and the user experience is improved.
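The level-differentiated response in the family/friend example can be sketched as follows; the contact groups, level numbers, and response labels are illustrative assumptions matching the example in the text.

```python
# Hypothetical level assignment: family calls are important contacts
# (first preset level), friend calls are ordinary contacts (second level).
CONTACT_LEVELS = {"family": 1, "friend": 2}

# Hypothetical responses per level for an incoming call while the user is
# sleeping indoors, following the example.
LEVEL_RESPONSES = {
    1: "ring_with_rising_volume",  # third response mode in the text
    2: "breathing_light",          # fourth response mode in the text
}

def incoming_call_response(caller_group: str) -> str:
    """Look up the caller's preset level, then the response for that level."""
    level = CONTACT_LEVELS.get(caller_group, 2)  # unknown callers: ordinary
    return LEVEL_RESPONSES[level]
```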
In summary, in the response method provided in the above embodiments, the mobile terminal automatically sets its working mode or state according to the scene type and/or the user state and/or the user habit, thereby achieving an adaptive response of the working mode or state and improving the user experience. Meanwhile, the user does not need to manually set or switch the contextual model in advance for each scene; simply turning on the intelligent contextual model enables the adaptive response, so that the user's needs can be met more precisely and the inconvenience of making overlapping settings for different applications is reduced.
In an embodiment, the method may further comprise:
acquiring the state of equipment associated with the mobile terminal;
the setting of the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit includes:
and setting the working mode or state of the mobile terminal or the equipment according to the scene type and/or the user state and/or the user habit and/or the state of the equipment associated with the mobile terminal.
It can be understood that after the mobile terminal is connected with, that is, associated with, a device, the user can conveniently control that device through the mobile terminal, improving operational convenience. Here, the state of the device may refer to whether the device is turned off or on, and the like. For example, if it is determined from the scene type and the user state that the user is about to enter a meeting room, the projection device associated with the user's phone may be turned on first so that it is ready when the user enters, and whether the projection device is available may be checked before turning it on; if the user is accustomed to watching television at five in the afternoon every day, the television associated with the user's phone is automatically turned on when the user is determined to be at home and the time arrives; if it is determined from the scene type, the user state, and the use of associated devices that the user is watching television through a TV box, the output volume of the mobile terminal may be increased; if it is determined from the scene type and the user state that the user has entered a vehicle, the phone may be controlled to enter a navigation mode and connect to the in-vehicle navigator; if it is determined from the scene type and the user state that the user has entered a library and the sound of the associated Bluetooth watch is on, the silent-vibration function can be turned on according to the user's habit, and both the terminal and the associated watch are adjusted to silent mode, and the like.
Therefore, the working mode or state of the mobile terminal or the equipment is set according to the scene type and/or the user state and/or the user habit and/or the state of the equipment associated with the mobile terminal, so that the self-adaptive response to the working mode or state of the mobile terminal or the equipment according to the scene type and/or the user state and/or the user habit and/or the state of the equipment associated with the mobile terminal is realized, and the user experience is further improved.
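The projector example, including the availability check before switching the associated device on, can be sketched as follows; the device names, state strings, and action format are hypothetical.

```python
# Hypothetical sketch: decide which associated devices to switch on from the
# inferred scene type and user state, checking availability first.
def prepare_associated_devices(scene: str, state: str, devices: dict) -> list:
    """Return actions for associated devices given the inferred context.

    `devices` maps a device name to its reported state, e.g. "available".
    """
    actions = []
    if scene == "conference_room" and state == "entering_meeting":
        # Check the projector's state before turning it on, as in the text.
        if devices.get("projector") == "available":
            actions.append("turn_on:projector")
    return actions
```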
In one embodiment, the method further comprises: receiving an adjusting instruction input by a user; and adjusting the working mode or state of the mobile terminal after setting according to the adjusting instruction. It can be understood that after the working mode or state of the mobile terminal is set according to the scene type and/or the user state and/or the user habit, the working mode or state after the setting of the mobile terminal may not be really needed by the user, and at this time, the user can adjust the working mode or state after the setting of the mobile terminal, so that the working mode or state after the setting of the mobile terminal meets the user requirement, and the user experience is further improved.
Based on the same inventive concept as the foregoing embodiments, an embodiment of the present invention provides a mobile terminal, as shown in fig. 3, including: a processor 110 and a memory 111 for storing computer programs capable of running on the processor 110. The processor 110 illustrated in fig. 3 does not indicate that there is only one processor; it only indicates the position of the processor 110 relative to other devices, and in practical applications the number of processors 110 may be one or more. Likewise, the memory 111 illustrated in fig. 3 only indicates its position relative to other devices, and in practical applications the number of memories 111 may be one or more. The processor 110 is configured to implement the above response method applied to the mobile terminal when running the computer program.
The mobile terminal may further include: at least one network interface 112. The various components in the mobile terminal are coupled together by a bus system 113. It will be appreciated that the bus system 113 is used to enable communications among the components. The bus system 113 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 113 in FIG. 3.
The memory 111 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 111 described in the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 111 in the embodiments of the present invention stores various types of data to support the operation of the mobile terminal. Examples of such data include: any computer program operating on the mobile terminal, such as an operating system and application programs; contact data; phone book data; messages; pictures; videos; and so on. The operating system includes various system programs, such as a framework layer, a core library layer, and a driver layer, used to implement various basic services and to process hardware-based tasks. The application programs may include various applications, such as a media player and a browser, used to implement various application services. A program implementing the method of the embodiments of the present invention may be included among the application programs.
Based on the same inventive concept as the foregoing embodiments, this embodiment further provides a computer storage medium in which a computer program is stored. The computer storage medium may be a memory such as a ferroelectric random access memory (FRAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or it may be any device including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant. The computer program stored in the computer storage medium, when executed by a processor, implements the response method applied to the mobile terminal described above. For the specific steps carried out when the computer program is executed by the processor, please refer to the description of the embodiment shown in fig. 1, which is not repeated here.
Based on the same inventive concept as the foregoing embodiments, this embodiment explains the technical solutions of the foregoing embodiments in detail through a specific example, taking a mobile phone as the mobile terminal. In the prior art, profile switching mainly takes the following two forms. The first is manual switching: as shown in fig. 4, most existing smartphone profiles are classified into vibration, ringing, mute, do-not-disturb, and the like, and all rely on the user manually tapping to switch between them. Manual switching has the following drawbacks. First, the user must change the profile frequently as the environment changes; the operation frequency is high, and if the user does not switch in time, a mismatched mode produces the wrong response, which affects not only the user's work but also the people around them. Second, for different events that need feedback, such as schedules, alarm clocks, incoming calls, and media, each application has its own mode setting; the modes are numerous and disordered, so in some cases it is unclear which application's feedback should be issued. The profile settings for the alarm clock, for incoming calls, and for notification pushes are all independent of one another. The second form is profile conversion: when incoming-call information is detected on a terminal such as a mobile phone, the terminal judges whether the contact corresponding to the incoming call is an important contact; if so, it obtains the current profile and the terminal's current location, and when the mode set by the user does not match the current environment, it converts the mode.
However, profile conversion has the following problem: judging whether the user's current mode needs to be converted according to the importance and location of the contact solves some fixed-scene cases, but it does not work when the user's need is unrelated to the incoming-call application. That is, when the user needs a different profile, a mode that was previously converted passively must be updated again. This can produce an unfavorable infinite loop: the user manually sets profile A for long-term use, the machine judges that it should be profile B and changes it, and the user then manually sets profile A again.
Therefore, an embodiment of the invention provides a response method that is human-centered: it combines the environment of the mobile phone and the positional relationship between the user and the phone, synthesizes all the whole-machine feedback of the phone across every application involved, and produces an intelligent output, triggered by turning on a single button. Fig. 5 is a specific flowchart of a response method according to an embodiment of the present invention, including the following steps:
step S201: opening an intelligent contextual model;
referring to fig. 2 again, when the user turns on the switch of the smart profile, the mobile phone turns on the smart profile. If the intelligent contextual model is not started, the conventional contextual model is observed.
Here, when the user uses the smart-profile function, the mobile phone may gather some information about the relationship between the user and the phone, for example through voice recording or human infrared recording, to confirm that the user is the owner of the phone.
Step S202: acquiring user state and position information;
here, the location information needs to be manually interworked by the user, such as inputting the location of the user or a topographic map and a user's relative location of the company, in addition to the range that the map can judge, and may include indoor, outdoor, meeting room, office, classroom, road, bedroom, bus, mall, and the like. The user state can be determined by collecting the tone, the number of recognized people, the sound decibel, the heartbeat or the emotional state of the user, or other factors around the user according to the recording of the mobile phone, and the like. The user status may include sleeping, thinking, in a meeting, in a party, running, cell phone detached from the user, etc.
Step S203: monitoring a user event;
here, since different applications may have different events, the corresponding user events, such as incoming calls, short messages, etc., may be obtained through monitoring.
Step S204: matching according to the user state, the position information and the user event to obtain a corresponding contextual model;
here, the mobile phone may perform matching with the profile in the preset profile table according to the user state, the location information, and the user event by reading the preset profile table, so as to obtain a matched profile.
TABLE 1
(Table 1 is reproduced only as an image in the original publication; per the description below, it is the profile table corresponding to the incoming-call event.)
It should be noted that the mobile phone may compare and adjust the profile table against user events in real time to achieve intelligent profile identification. Table 1 shows the profile table corresponding to the incoming-call event, where the scenario feedback represents the prompt feedback the phone generates for the corresponding profile. The phone may analyze, from the comprehensively collected factors, the scenarios required by the different applications across the whole device; the required scenarios may involve incoming calls, message reminders, alarm clocks, calendar events, notification pushes, finding the phone, media switching, and so on. Reasonable, user-friendly, intelligent feedback modes are then set according to the given scene environment and the required prompt, such as a breathing light, a flashlight, vibration, ringing, muting, an alarm, or closing media. Optionally, the phone derives the user's habitual feedback for different events in different situations from the user's habit data; when the phone determines that the user is in a corresponding situation, it issues scenario feedback based on the habit data instead of the preset profile table.
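The optional habit override just described can be sketched as a two-tier lookup: recorded habit data takes precedence, and the preset table is consulted only when no habit covers the situation. The function and data names here are assumptions for illustration.

```python
# Sketch of the habit-data override: when the user's recorded habits
# cover the current situation, they win over the preset profile table.
# All names and sample entries are illustrative assumptions.

def select_feedback(situation, habit_data, profile_table, default="ring"):
    """Pick feedback for a (state, location, event) situation tuple."""
    if situation in habit_data:           # habit data takes precedence
        return habit_data[situation]
    return profile_table.get(situation, default)

habits = {("sleeping", "bedroom", "incoming_call"): "mute_and_log"}
table = {("sleeping", "bedroom", "incoming_call"): "mute"}
print(select_feedback(("sleeping", "bedroom", "incoming_call"), habits, table))
# mute_and_log
```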
In addition, the mobile phone can further read the importance of the user's short messages, the importance of contacts, alarm-clock schedules, and the like according to an intelligent algorithm, make an intelligent judgment, and issue a special reminder. Manual intervention by the user is also supported: for an unreasonable profile, the user can inform the phone through settings or by voice, so that the phone memorizes and learns, executes the corresponding processing operation at the corresponding time, and realizes an adaptive profile to the greatest extent.
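One way the importance check above could work is to escalate the matched feedback by one level when the event comes from an important contact, even if the matched profile is silent. This escalation ladder is an invented assumption, not the patent's stated algorithm.

```python
# Hedged sketch of importance-based special reminding: an event from an
# important contact escalates the matched feedback one level. The
# escalation ladder and level names are illustrative assumptions.

ESCALATION = {"mute": "vibrate", "vibrate": "ring", "ring": "ring"}

def apply_importance(feedback, contact_is_important):
    """Escalate feedback for important contacts; pass through otherwise."""
    if contact_is_important:
        return ESCALATION.get(feedback, feedback)
    return feedback

print(apply_importance("mute", True))   # vibrate
print(apply_importance("mute", False))  # mute
```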
Step S205: and outputting event information, and sending intelligent reminding feedback according to the contextual model.
In this method, the human state and the device location are taken as the human-centered inputs, all the reminder scenarios the phone can be involved in are integrated, and feedback matched to the user's behavioral state is finally issued. Through a database or a preset profile table, the user's manual control is reduced, reminders attuned to the user's mood and preferences are added, and an intelligent reminder mode serves everyone.
In conclusion, the response method frees the user from manual settings, replacing them with a smart-profile switch, which controls the requirement more precisely and improves the user experience. It also reduces the separate settings of different applications: one integrated switch and key enables the smart profile, reducing the trouble caused by cross-cutting settings. For example, on some phones the alarm clock is set to ring, but if the overall mode is mute, the alarm fails to ring and matters are delayed.
Based on the same inventive concept as the foregoing embodiments, this embodiment further explains the technical solutions of the foregoing embodiments in detail through a specific example. The embodiment of the invention provides an intelligent response method, mainly for setting the working mode or state of a mobile terminal based on the scene type, the user state and/or the user's habits. Specifically, centered on the user and based on position, space, the device environment, the positional relationship between the user and the device, and multimedia factors, it comprehensively feeds back across the whole mobile terminal, for every application that may be involved, and gives an intelligent output. Triggered by turning on the smart-profile function button, the method can be applied to mode switching or state activation in Internet fields such as mode response of a mobile terminal and its associated devices.
First, the smart-profile function is enabled on a device; the device may be a mobile phone, another terminal device associated with the phone, or any other device capable of intelligently controlling an independent response, such as a vehicle, conference equipment, or a television box.
Second, the device obtains relevant information. When the smart-profile function is used, the device needs to collect some position and space information about the environment and the multimedia being displayed; or collect the user's biometric information, i.e., the user state, including sleeping, thinking, in a meeting, at a party, running, the phone being separated from the user, how long the phone has been separated, and so on; or collect the user's daily usage habits, i.e., the particularities and patterns of what the user habitually does in which scene, which can be recorded through sound, human infrared recording, and the like; or, from the phone's recordings, collect the timbre of the surrounding voices, the number of people recognized, the sound level in decibels, the user's heartbeat or emotional state, and other surrounding factors, and roughly judge what the user is doing at that location.
Then, the scene type is determined from the acquired information. Scene types mainly comprise automatically acquired scenes and scenes set manually by the user. An intelligently acquired scene environment is precise location information (e.g., indoor, outdoor, conference room, office, classroom, road, bedroom, bus, mall, etc.). A user-set scene is a working mode or state of the phone or an associated device set by the user, such as a conference mode, a sport mode, a sleep mode, a vehicle start state, a conference-equipment start state, or a television-box-on state.
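The scene-type step can be sketched as a queried correspondence between location information and scene type, with a manually set scene taking precedence over the automatic mapping. The mapping entries and the precedence rule are illustrative assumptions.

```python
# Sketch of scene-type determination: query a set correspondence between
# location information and scene type; a scene the user set manually
# overrides the automatic mapping. Entries are illustrative assumptions.

LOCATION_TO_SCENE = {
    "conference_room": "meeting",
    "bedroom": "sleep",
    "road": "commute",
    "mall": "public",
}

def determine_scene(location, user_set_scene=None):
    """Resolve the scene type from location info and any user override."""
    if user_set_scene is not None:
        return user_set_scene                  # manual setting wins
    return LOCATION_TO_SCENE.get(location, "default")

print(determine_scene("bedroom"))                 # sleep
print(determine_scene("bedroom", "movie_night"))  # movie_night
```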
Finally, mode switching and the response to the event to be responded to are rationalized by judging the conditions through any one, or any combination of two or three, of the scene type, the user state, the user habit, and the state of the associated device. For example: switch the phone's profile to ringing according to the user state, and increase the volume of the television box according to the user's habit, thereby realizing adaptive matching. The mode switching and response may specifically include: changing display parameters (such as the display content or display mode); changing sound parameters (such as ringing, vibration, mute, flight mode, do-not-disturb mode, volume, vibration frequency, or music tempo); changing the power mode (such as switching from the normal mode to a power-saving or super-power-saving mode as needed, switching between dual SIM cards, or turning off Bluetooth); and turning functions on or off (application pre-loading, turning on dark mode, turning sleep mode on or off).
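The four setting groups just listed (display, sound, power, function toggles) can be modeled as fields of one settings object that a matched scene updates together. All field names, scene names, and values below are assumptions for illustration.

```python
# Illustrative sketch of applying a matched scene across the four
# setting groups named above: display, sound, power mode, and function
# toggles. All field names and values are assumptions.

from dataclasses import dataclass, field

@dataclass
class DeviceSettings:
    brightness: int = 100                           # display parameter
    sound_mode: str = "ring"                        # sound parameter
    power_mode: str = "normal"                      # power mode
    functions_on: set = field(default_factory=set)  # toggled functions

def apply_scene(settings, scene):
    """Mutate settings for the given scene and return them."""
    if scene == "sleep":
        settings.brightness = 10
        settings.sound_mode = "mute"
        settings.power_mode = "power_saving"
        settings.functions_on.add("do_not_disturb")
    elif scene == "meeting":
        settings.sound_mode = "vibrate"
    return settings

s = apply_scene(DeviceSettings(), "sleep")
print(s.sound_mode, s.power_mode)  # mute power_saving
```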
In addition, the method can further read the importance of the user's short messages, the importance of contacts, alarm-clock schedules, and the like, make an intelligent judgment, and issue a special reminder. Manual intervention by the user is also supported: for whichever mode is unreasonable, the user can tell the device through settings or by voice, training the device to memorize and learn, execute the corresponding processing operation at the corresponding time, and realize an adaptive profile to the greatest extent.
The above example has the following advantages: it frees the user from manual settings, replacing them with a smart-profile switch, which controls the requirement more precisely and improves the user experience; it also reduces the separate settings of different applications, integrating them into one switch and a one-key smart profile, reducing the trouble caused by cross-cutting settings (for example, on some phones the alarm clock is set to ring, but if the overall mode is mute, the alarm fails to ring and matters are delayed).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
It should be understood that although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times and in different orders, and which may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (15)

1. A response method is applied to a mobile terminal, and is characterized by comprising the following steps:
acquiring a scene type and/or a user state and/or a user habit;
and setting the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit.
2. The response method according to claim 1, wherein the obtaining of the scene type and/or the user status comprises:
acquiring environment information and/or displayed multimedia information and/or biological characteristic information of a user, wherein the environment information comprises at least one of position information, spatial image information and spatial sound information;
determining a scene type according to position information and/or spatial image information and/or multimedia information and/or biological feature information of the user in the environment information;
and determining the user state according to the spatial sound information and/or the spatial image information and/or the multimedia information and/or the biological characteristic information of the user in the environment information.
3. The response method according to claim 2, wherein the determining a scene type according to the position information and/or the spatial image information and/or the multimedia information and/or the biometric information of the user in the environment information comprises:
and inquiring the corresponding relation between the set position information and the scene type according to the position information in the environment information, and acquiring the scene type corresponding to the position information in the environment information.
4. The response method according to claim 2, wherein the determining the user state according to the spatial sound information and/or the spatial image information and/or the multimedia information and/or the biometric information of the user in the environment information comprises:
identifying spatial sound information in the environment information and/or biological characteristic information of the user, and determining whether the user state is a man-machine separation state according to an obtained identification result; or,
and acquiring the number and the volume of different timbres according to the spatial sound information in the environment information, and determining whether the user state is a party state or not according to the number and the volume of the different timbres.
5. The response method according to claim 2, wherein the spatial image information comprises an image of a face of the user, and the determining the user state based on the spatial sound information and/or the spatial image information and/or the multimedia information and/or the biometric information of the user in the environment information comprises:
and performing expression recognition on the image of the user face, and determining the user state according to the obtained expression recognition result.
6. The response method of claim 1, wherein the obtaining the scene type comprises:
acquiring a scene and/or a user state set by a user;
and determining the scene type according to the scene set by the user and/or the user state.
7. The response method according to any one of claims 1 to 6, wherein the setting of the operation mode of the mobile terminal according to the scene type and/or the user status and/or the user habit comprises:
and setting at least one of display parameters, sound parameters, electric quantity modes and function control of the mobile terminal according to the scene type and/or the user state and/or the user habit.
8. The response method according to claim 1, wherein the setting of the operation mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit comprises:
and responding to a preset event according to the scene type and/or the user state and/or the user habit.
9. The response method according to claim 8, wherein the responding to a preset event according to the scene type and/or the user state and/or the user habit comprises:
when the preset event is a first preset event, inquiring a first response mode according to the scene type, the user state and the first preset event, and responding to the preset event according to the first response mode;
when the preset event is a second preset event, inquiring a second response mode according to the scene type, the user state and the second preset event, and responding to the preset event according to the second response mode;
wherein the first preset event is different from the second preset event.
10. The response method according to claim 8, wherein the responding to a preset event according to the scene type and/or the user state and/or the user habit comprises:
acquiring the grade of a preset event according to the content of the preset event;
when the grade of the preset event is a first preset grade, inquiring a third response mode according to the scene type, the user state, the preset event and the first preset grade, and responding to the preset event according to the third response mode;
when the grade of the preset event is a second preset grade, inquiring a fourth response mode according to the scene type, the user state, the preset event and the second preset grade, and responding to the preset event according to the fourth response mode;
wherein the third response mode is different from the fourth response mode.
11. The response method according to claim 1, wherein before acquiring the scene type and/or the user status and/or the user habit, the method further comprises:
detecting whether the intelligent contextual model is in an open state;
and when the intelligent contextual model is detected to be in the opening state, executing the step of acquiring the scene type and/or the user state and/or the user habit.
12. The response method of claim 1, further comprising:
acquiring the state of equipment associated with the mobile terminal;
the setting of the working mode or state of the mobile terminal according to the scene type and/or the user state and/or the user habit includes:
and setting the working mode or state of the mobile terminal or the equipment according to the scene type and/or the user state and/or the user habit and/or the state of the equipment associated with the mobile terminal.
13. The response method of claim 1, further comprising:
receiving an adjusting instruction input by a user;
and adjusting the working mode or state of the mobile terminal after setting according to the adjusting instruction.
14. A mobile terminal, comprising: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor, when running the computer program, implements the response method of any one of claims 1 to 13.
15. A computer storage medium, characterized in that a computer program is stored which, when executed by a processor, implements the response method according to any one of claims 1 to 13.
CN202010364445.3A 2019-06-04 2020-04-30 Response method, mobile terminal and computer storage medium Pending CN111371955A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019104804404 2019-06-04
CN201910480440.4A CN110365835A (en) 2019-06-04 2019-06-04 A kind of response method, mobile terminal and computer storage medium

Publications (1)

Publication Number Publication Date
CN111371955A true CN111371955A (en) 2020-07-03

Family

ID=68215013

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910480440.4A Pending CN110365835A (en) 2019-06-04 2019-06-04 A kind of response method, mobile terminal and computer storage medium
CN202010364445.3A Pending CN111371955A (en) 2019-06-04 2020-04-30 Response method, mobile terminal and computer storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910480440.4A Pending CN110365835A (en) 2019-06-04 2019-06-04 A kind of response method, mobile terminal and computer storage medium

Country Status (1)

Country Link
CN (2) CN110365835A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111885573A (en) * 2020-07-29 2020-11-03 广州小鹏车联网科技有限公司 Intelligent cabin interaction method and intelligent cabin
CN115495040A (en) * 2022-10-09 2022-12-20 湖南捷力泰科技有限公司 Scene type-based sound control method and related device
CN115720248A (en) * 2021-08-27 2023-02-28 荣耀终端有限公司 Sound mode switching method and electronic equipment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995409B (en) * 2019-12-02 2022-03-11 荣耀终端有限公司 Display method of intelligent communication strategy effective scene, mobile terminal and computer readable storage medium
CN112820278A (en) * 2021-01-23 2021-05-18 广东美她实业投资有限公司 Household doorbell automatic monitoring method, equipment and medium based on intelligent earphone

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301534A (en) * 2014-10-09 2015-01-21 广东小天才科技有限公司 Method and device for intelligently adjusting scene mode of mobile terminal


Also Published As

Publication number Publication date
CN110365835A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN111371955A (en) Response method, mobile terminal and computer storage medium
US9344815B2 (en) Method for augmenting hearing
US20090170552A1 (en) Method of switching profiles and related mobile device
US20160196108A1 (en) Method for augmenting a listening experience
CN107580113B (en) Reminding method, device, storage medium and terminal
CN105933539B (en) audio playing control method and device and terminal
CN105975241A (en) Volume regulation method and device
EP1895745A1 (en) Method and communication system for continuous recording of data from the environment
CN105306752B (en) The method and device that generation event is reminded
CN110349578A (en) Equipment wakes up processing method and processing device
CN106067996A (en) Voice reproduction method, voice dialogue device
CN105898573A (en) Method and device for multimedia file playing
CN110418011B (en) Method and device for generating prompt tone, intelligent equipment and storage medium
US20240061881A1 (en) Place Search by Audio Signals
CN104486489B (en) Export the method and device of call background voice
CN109246184A (en) A kind of temporal information acquisition methods, device and readable storage medium storing program for executing
CN106599101B (en) System and method for automatically playing audio and video files
CN106101441A (en) Terminal control method and device
CN112866480B (en) Information processing method, information processing device, electronic equipment and storage medium
CN111742538A (en) Reminding method and device and electronic equipment
CN111108550A (en) Information processing device, information processing terminal, information processing method, and program
CN111736798A (en) Volume adjusting method, volume adjusting device and computer readable storage medium
CN106713613B (en) Note display methods and device
CN111739528A (en) Interaction method and device and earphone
CN106533910B (en) Note display methods and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination