CN111897477A - Mobile terminal control method, mobile terminal and storage medium


Info

Publication number
CN111897477A
Authority
CN
China
Prior art keywords
information
interaction mode
current
scene information
mobile terminal
Prior art date
Legal status
Granted
Application number
CN202010773263.1A
Other languages
Chinese (zh)
Other versions
CN111897477B (en)
Inventor
邵刚
朱荣昌
梁文斌
Current Assignee
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Application filed by Shanghai Chuanying Information Technology Co Ltd
Priority to CN202010773263.1A
Publication of CN111897477A
Application granted
Publication of CN111897477B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The application provides a mobile terminal control method, a mobile terminal and a storage medium. The method comprises the following steps: if the current scene information of the mobile terminal meets a first preset condition, determining an interaction mode corresponding to the current scene information; and if the current operation information of the mobile terminal meets a second preset condition, running the interaction mode. By judging whether the current scene information and the current operation information meet the preset conditions, the method automatically determines and runs the interaction mode without requiring active operation by the user, which makes it convenient for the user to control the mobile terminal and improves the user experience. At the same time, the corresponding interaction mode is started only when the user enters the current scene, which avoids the increased energy consumption caused by keeping multiple interaction modes on for long periods in different scenes and reduces the energy consumption of the mobile terminal.

Description

Mobile terminal control method, mobile terminal and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a mobile terminal control method, a mobile terminal, and a storage medium.
Background
With the development of mobile terminals, a plurality of different interaction modes have become available to meet the requirements of different users in different scenes, so that users can control their mobile terminals. For example, the interaction mode may be a touch mode, a stylus mode, a voice mode, a mode of answering calls or adjusting the volume through a headset, an air (contactless) gesture mode, and the like. Each interaction mode controls the mobile terminal independently of the others.
If all the interaction modes on the mobile terminal were kept on at the same time to receive user operations, the power consumption of the mobile terminal would increase, its response speed would slow down, and it might even freeze. Existing mobile terminals therefore do not keep all interaction modes on simultaneously; some interaction modes can respond to their instructions only after the user has actively switched them on.
However, manually starting an interaction mode of the mobile terminal is cumbersome for the user, and the user experience is poor.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The application provides a mobile terminal control method, a mobile terminal and a storage medium, which are used to solve the problems that manually starting an interaction mode of a mobile terminal is cumbersome and the user experience is poor.
In a first aspect, the present application provides a method for controlling a mobile terminal, including:
if the current scene information of the mobile terminal meets a first preset condition, determining an interaction mode corresponding to the current scene information;
and if the current operation information of the mobile terminal meets a second preset condition, running the interaction mode.
Optionally, the current scene information includes at least one of the following: current time information, current environment information, current user information and current terminal information;
Optionally,
the current time information comprises at least one of the following: daily time, operating time;
the current environment information includes at least one of: ambient sound, ambient brightness, movement information;
the current user information comprises at least one of the following: user type, distance information, fingerprint information, face information;
the current terminal information includes at least one of the following: terminal parameters, operation information, and attitude information.
Optionally, the current scene information satisfies a first preset condition, and includes at least one of the following:
the daily time is within a preset time period;
the operation time is within the operation time interval of the historical record;
the environmental sound is in a preset noise value interval;
the environment brightness is within a preset brightness value interval;
the movement information is in a preset movement speed interval;
the user type is a preset user type;
the distance information is in a preset human-computer distance interval;
the operation information is preset operation information;
the attitude information is a preset terminal attitude.
Optionally, the interaction mode includes at least one of: a driving mode, a motion mode, a touch mode, a voice mode, an elderly mode, a child mode, and a safety mode.
Optionally, the current operation information includes at least one of the following: a touch operation, an air gesture operation, and a voice operation, where each of the touch operation, the air gesture operation, and the voice operation matches an operation type of the interaction mode.
Optionally, the determining the interaction mode corresponding to the current scene information includes:
determining an interaction mode corresponding to the current scene information according to the correspondence between the scene information and the interaction mode; or
determining an interaction mode corresponding to the current scene information according to a preset rule.
Optionally, the correspondence between the scene information and the interaction mode includes at least one of the following:
a correspondence between the scene information and the interaction mode set when the mobile terminal leaves the factory;
a correspondence between the scene information and the interaction mode determined according to a setting operation;
a correspondence between the scene information and the interaction mode obtained by performing deep learning on historical scene information, where the historical scene information includes at least one of the following: operation mode, operation time, operation environment, terminal parameters, and interaction mode.
Optionally, the determining, according to the correspondence between the scene information and the interaction mode, the interaction mode corresponding to the current scene information includes:
determining an interaction mode to be selected corresponding to the current scene information according to the correspondence between the scene information and the interaction mode;
if the number of the interaction modes to be selected is one, the interaction mode corresponding to the current scene information is the interaction mode to be selected; and/or,
and if the number of the interaction modes to be selected is at least two, determining the interaction mode corresponding to the current scene information according to the priority of the at least two interaction modes to be selected.
Optionally, before determining the interaction mode corresponding to the current scene information according to the priorities of the at least two interaction modes to be selected, the method further includes:
determining the priority of the at least two interaction modes to be selected according to the priority of the scene information corresponding to the at least two interaction modes to be selected.
Optionally, the determining, according to the priorities of the at least two interaction modes to be selected, an interaction mode corresponding to the current scene information includes:
determining the interaction mode to be selected corresponding to the current operation information, and controlling the mobile terminal to run the corresponding interaction mode to be selected; or
if no interaction mode to be selected corresponds to the current operation information, controlling the mobile terminal to run a default interaction mode or run an interaction mode corresponding to the operation information.
Optionally, if the mobile terminal receives a preset operation, after the interaction mode is run the method further includes:
if the preset operation is not received within a preset duration, the mobile terminal runs the current interaction mode, or the mobile terminal runs the interaction mode corresponding to the current scene information.
In a second aspect, the present application provides a method for operating a mobile terminal, including:
if the acquired current operation information for the mobile terminal meets a first preset condition, acquiring current scene information;
determining an interaction mode according to the current operation information and/or the current scene information;
and running the interaction mode.
Optionally, the current operation information includes at least one of the following: a touch operation, an air gesture operation, and a voice operation;
the current operation information is a control operation on the mobile terminal, wherein the control operation includes at least one of the following: waking up, lighting up and controlling.
Optionally, the determining an interaction mode according to the current operation information and/or the current scene information includes:
if the first interaction mode corresponding to the current operation information is the same as the second interaction mode corresponding to the current scene information, determining that the interaction mode is the first interaction mode; or
and if the first interaction mode corresponding to the current operation information is different from the second interaction mode corresponding to the current scene information, determining the interaction mode according to a preset rule.
Optionally, the determining the interaction mode according to the preset rule includes at least one of the following:
determining the interaction mode according to the priority of a first interaction mode corresponding to the current operation information and the priority of a second interaction mode corresponding to the current scene information;
and determining the interaction mode according to the priority of the current operation information and the current scene information.
In a third aspect, the present application provides an operating device for a mobile terminal, including:
a determining module, configured to determine an interaction mode corresponding to current scene information if the current scene information of the mobile terminal meets a first preset condition;
and a running module, configured to run the interaction mode if the current operation information of the mobile terminal meets a second preset condition.
Optionally, the current scene information includes at least one of the following: current time information, current environment information, current user information and current terminal information;
Optionally,
the current time information comprises at least one of the following: daily time, operating time;
the current environment information includes at least one of: ambient sound, ambient brightness, movement information;
the current user information comprises at least one of the following: user type, distance information, fingerprint information, face information;
the current terminal information includes at least one of the following: terminal parameters, operation information, and attitude information.
Optionally, the current scene information satisfies a first preset condition, and includes at least one of the following:
the daily time is within a preset time period;
the operation time is within the operation time interval of the historical record;
the environmental sound is in a preset noise value interval;
the environment brightness is within a preset brightness value interval;
the movement information is in a preset movement speed interval;
the user type is a preset user type;
the distance information is in a preset human-computer distance interval;
the operation information is preset operation information;
the attitude information is a preset terminal attitude.
Optionally, the interaction mode includes at least one of: a driving mode, a motion mode, a touch mode, a voice mode, an elderly mode, a child mode, and a safety mode.
Optionally, the current operation information includes at least one of the following: a touch operation, an air gesture operation, and a voice operation, where each of the touch operation, the air gesture operation, and the voice operation matches an operation type of the interaction mode.
Optionally, the determining module is specifically configured to:
determine an interaction mode corresponding to the current scene information according to the correspondence between the scene information and the interaction mode; or
determine an interaction mode corresponding to the current scene information according to a preset rule.
Optionally, the correspondence between the scene information and the interaction mode includes at least one of the following:
a correspondence between the scene information and the interaction mode set when the mobile terminal leaves the factory;
a correspondence between the scene information and the interaction mode determined according to a setting operation;
a correspondence between the scene information and the interaction mode obtained by performing deep learning on historical scene information, where the historical scene information includes at least one of the following: operation mode, operation time, operation environment, terminal parameters, and interaction mode.
Optionally, the determining module is specifically configured to:
determine an interaction mode to be selected corresponding to the current scene information according to the correspondence between the scene information and the interaction mode;
if the number of the interaction modes to be selected is one, the interaction mode corresponding to the current scene information is the interaction mode to be selected; and/or,
and if the number of the interaction modes to be selected is at least two, determining the interaction mode corresponding to the current scene information according to the priority of the at least two interaction modes to be selected.
Optionally, the determining module is further configured to: determine the priority of the at least two interaction modes to be selected according to the priority of the scene information corresponding to the at least two interaction modes to be selected.
Optionally, the determining, according to the priorities of the at least two interaction modes to be selected, an interaction mode corresponding to the current scene information includes:
determining the interaction mode to be selected corresponding to the current operation information, and controlling the mobile terminal to run the corresponding interaction mode to be selected; or
if no interaction mode to be selected corresponds to the current operation information, controlling the mobile terminal to run a default interaction mode or run an interaction mode corresponding to the operation information.
Optionally, the apparatus further includes:
a running module, configured to run the current interaction mode if the preset operation is not received within a preset duration, or to run the interaction mode corresponding to the current scene information.
In a fourth aspect, the present application provides an operating device for a mobile terminal, including:
the acquisition module is used for acquiring current scene information if the acquired current operation information for the mobile terminal meets a first preset condition;
the determining module is used for determining an interaction mode according to the current operation information and/or the current scene information;
and a running module, configured to run the interaction mode.
Optionally, the current operation information includes at least one of the following: touch control operation, air separation operation and voice operation;
the first preset condition comprises at least one of the following:
the operation information is a control operation for the mobile terminal, wherein the control operation comprises at least one of the following operations: waking up, lighting up and controlling.
Optionally, the determining module is specifically configured to:
if the first interaction mode corresponding to the current operation information is the same as the second interaction mode corresponding to the current scene information, determining that the interaction mode is the first interaction mode; or
and if the first interaction mode corresponding to the current operation information is different from the second interaction mode corresponding to the current scene information, determining the interaction mode according to a preset rule.
Optionally, the determining the interaction mode according to the preset rule includes at least one of the following:
determining the interaction mode according to the priority of a first interaction mode corresponding to the current operation information and the priority of a second interaction mode corresponding to the current scene information;
and determining the interaction mode according to the priority of the current operation information and the current scene information.
In a fifth aspect, the present application provides a mobile terminal, comprising: a memory and a processor;
a memory for storing processor-executable instructions;
a processor for implementing the mobile terminal control method according to the first aspect when executing the instructions.
In a sixth aspect, the present application provides a mobile terminal, comprising: a memory and a processor;
a memory for storing processor-executable instructions;
a processor for implementing the method of operating a mobile terminal according to the second aspect when executing the instructions.
In a seventh aspect, the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and the computer-executable instructions are executed by a processor to implement the method for operating a mobile terminal according to the first aspect.
In an eighth aspect, the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and the computer-executable instructions are executed by a processor to implement the method for operating a mobile terminal according to the second aspect.
According to the mobile terminal control method, the mobile terminal and the storage medium provided by the application, the current scene information of the scene where the mobile terminal is currently located is detected; if the current scene information meets a first preset condition, an interaction mode corresponding to the current scene information is determined, and if the current operation information of the mobile terminal meets a second preset condition, the interaction mode is run. By judging whether the current scene information and the current operation information meet the preset conditions, the interaction mode is automatically determined and run without requiring active operation by the user, which makes it convenient for the user to control the mobile terminal and improves the user experience. At the same time, the corresponding interaction mode is started only when the user enters the current scene, which avoids the increased energy consumption caused by keeping multiple interaction modes on for long periods in different scenes and reduces the energy consumption of the mobile terminal.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a mobile terminal to which the present application is applied;
fig. 2 is a schematic flowchart of a method for operating a mobile terminal according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another method for operating a mobile terminal according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an interface for prompting input of operational information;
FIG. 5 is a schematic view of a user setup interface;
fig. 6 is a schematic flowchart of a method for operating a mobile terminal according to the present application;
fig. 7 is a schematic structural diagram of an operating device of a mobile terminal according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another operating device of a mobile terminal according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, reciting an element with the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, similarly named elements, features, or components in different embodiments of the disclosure may have the same meaning or different meanings; the particular meaning is determined by the explanation in the embodiment or by the context of the embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different moments; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It should be noted that step numbers such as S201 and S202 are used herein for the purpose of more clearly and briefly describing the corresponding contents, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S202 first and then S201 in the specific implementation, but these should be within the scope of the present application.
It should be understood that the technical solutions of the present application can be applied to terminal devices, which may specifically be smart phones, tablet computers, notebook computers, desktop computers, vehicle-mounted intelligent terminal devices, and the like; the embodiments of the present application are not limited in this respect.
Fig. 1 is a schematic structural diagram of a mobile terminal applicable to the present application, where the mobile terminal may be a mobile phone, a tablet device, a computer, a digital broadcast terminal, a fitness device, a personal digital assistant, and the like. As shown in fig. 1, the mobile terminal may include one or more of the following components: a processor 1, a memory 2, a transceiver 3, a multimedia component 4, an audio component 5, a sensor component 6, a communication bus 7, etc. The processor 1, memory 2, transceiver 3, multimedia component 4, audio component 5 and sensor component 6 are interconnected and communicate via a communication bus 7.
The processor 1 is the control center of the mobile terminal, and may be a single processor or a collective term for a plurality of processing elements. For example, the processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs).
The memory 2 is configured to store various types of data to support operations at the mobile terminal. Examples of such data include instructions for any application (APP) or method operating on the mobile terminal, contact data, phonebook data, messages, pictures, videos, etc. The memory 2 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
And the transceiver 3 is used for communicating with other communication equipment. Of course, the transceiver 3 may also be used for communicating with a communication network, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc. The transceiver 3 may comprise a receiving unit implementing a receiving function and a transmitting unit implementing a transmitting function.
The multimedia component 4 comprises a screen providing an output interface between the mobile terminal and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user to enable user interaction with the mobile terminal. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 4 comprises a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the mobile terminal is in an operation mode, such as a photographing mode or a video mode. The mobile terminal can also acquire face information and the like of the current user through the camera.
The audio component 5 is configured to output and/or input audio signals. For example, the audio component 5 includes a Microphone (MIC) configured to receive external audio signals when the mobile terminal is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. In some embodiments, the audio assembly 5 further comprises a speaker for outputting audio signals. Illustratively, when the user performs voice interaction with the mobile terminal, the audio component 5 may receive a voice instruction of the user, and the audio component 5 may further send a voice signal to implement the interaction with the user.
The sensor assembly 6 includes one or more sensors for providing various aspects of state assessment for the mobile terminal. For example, the sensor assembly 6 may detect the open/closed state of the mobile terminal, the relative positioning of components such as the display and keypad of the mobile terminal, the presence or absence of user contact with the device, the orientation or acceleration/deceleration of the device, and temperature changes of the device. The sensor assembly 6 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 6 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor. The current state of the mobile terminal, that is, the current scene, can be learned through the different sensor components; for example, when the movement speed of the mobile terminal reaches a certain value, it can be inferred that the mobile terminal is currently in a driving scene.
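For illustration only, the sketch below shows one way such a speed-based scene inference might look in code; the thresholds, scene labels, and function names are invented assumptions, not part of this application.

```kotlin
// Hypothetical sketch: inferring the current scene from sensed movement speed.
// All thresholds and scene labels are invented examples.
fun inferScene(speedKmh: Double): String = when {
    speedKmh >= 20.0 -> "driving"     // sustained high speed suggests a driving scene
    speedKmh >= 4.0  -> "motion"      // moderate speed suggests walking or running
    else             -> "stationary"
}

fun main() {
    println(inferScene(60.0))  // prints "driving"
}
```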
The communication bus 7 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 1, but it is not intended that there be only one bus or one type of bus.
It will be appreciated that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
An application scenario of the embodiments of the present application is described below with reference to the mobile terminal shown in fig. 1. A user can control the mobile terminal through multiple interaction modes. The interaction mode between the user and the mobile terminal can be a touch mode, a stylus mode, a voice mode, a mode of answering calls or adjusting the volume through a headset, air gesture control, and the like. Each interaction mode controls the mobile terminal independently of the others.
In some scenarios, a user wants one or more interaction modes to be enabled upon entering a certain scene, but the mobile terminal can respond to an instruction of some interaction modes only after the user has actively set them to the on state. This way of starting an interaction mode of the mobile terminal is cumbersome to operate, and the user experience is poor.
To solve the above technical problems in the prior art, embodiments of the present application provide a mobile terminal control method. Whether the current scene information and the current operation information meet the respective preset conditions is judged, and the interaction mode is automatically determined and run without requiring active operation by the user, which makes it convenient for the user to control the mobile terminal and improves the user experience. At the same time, the corresponding interaction mode is started only when the user enters the current scene, which avoids the increased energy consumption caused by keeping multiple interaction modes on for long periods in different scenes and reduces the energy consumption of the mobile terminal.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of a method for operating a mobile terminal according to an embodiment of the present disclosure. As shown in fig. 2, the method is executed by the mobile terminal, which may be a mobile phone, a tablet device, a personal computer, and the like; the present disclosure is not limited in this respect. The method includes:
S201, if the current scene information of the mobile terminal meets a first preset condition, determining an interaction mode corresponding to the current scene information.
Before this step is executed, the current scene information of the scene where the mobile terminal is currently located needs to be detected. The mobile terminal may detect the current scene information in real time, or detect it at intervals of a preset duration, where the preset duration may be any set value such as 2 seconds or 5 seconds.
Whether the current scene information of the mobile terminal meets the first preset condition is judged; if it does, the interaction mode corresponding to the current scene information is determined.
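As a rough end-to-end sketch of the S201/S202 flow described here, the following fragment uses an invented scene model and invented condition checks; none of the names, thresholds, or rules come from the application itself.

```kotlin
// Illustrative sketch of S201/S202. Every name, threshold, and rule below is
// an assumption made for exposition, not an API defined by the patent.
data class SceneInfo(val hourOfDay: Int, val speedKmh: Double)

enum class InteractionMode { TOUCH, VOICE }

// First preset condition: e.g. the movement speed lies in a preset interval.
fun meetsFirstCondition(s: SceneInfo) = s.speedKmh in 20.0..200.0

// Simplified correspondence between scene information and interaction mode.
fun modeForScene(s: SceneInfo) =
    if (s.speedKmh >= 20.0) InteractionMode.VOICE else InteractionMode.TOUCH

// Second preset condition: the received operation matches the mode's type.
fun meetsSecondCondition(op: String, m: InteractionMode) =
    (m == InteractionMode.VOICE && op == "voice") ||
    (m == InteractionMode.TOUCH && op == "touch")

fun control(scene: SceneInfo, operation: String) {
    if (meetsFirstCondition(scene)) {                   // S201
        val mode = modeForScene(scene)
        if (meetsSecondCondition(operation, mode)) {    // S202: run the mode
            println("running $mode")
        }
    }
}
```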
In a possible implementation manner, an interaction mode corresponding to the current scene information is determined according to a preset rule.
In another possible implementation manner, the interaction mode corresponding to the current scene information is determined according to the correspondence between the scene information and the interaction mode.
A correspondence between the scene information and the interaction mode is acquired in advance; the correspondence represents the types of interaction modes that need to be started in different scenes. The current scene information is compared with the scene information included in the correspondence, target scene information is determined from the scene information included in the correspondence according to the comparison result, and the interaction mode corresponding to the target scene information is determined to be the interaction mode corresponding to the current scene information. Each piece of scene information corresponds to one or more interaction modes, which are used for interaction between the user and the mobile terminal.
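One plausible in-memory representation of such a correspondence is a simple lookup table from scene descriptors to interaction modes, as in the sketch below; the entries are invented examples, and the `InteractionMode` type is repeated from the earlier sketch only to keep this one self-contained.

```kotlin
// A minimal representation of the pre-acquired correspondence: scene
// descriptors mapped to one or more interaction modes. Entries are invented
// examples for illustration only.
enum class InteractionMode { TOUCH, VOICE, AIR_GESTURE }

val correspondence: Map<String, List<InteractionMode>> = mapOf(
    "driving" to listOf(InteractionMode.VOICE, InteractionMode.AIR_GESTURE),
    "office"  to listOf(InteractionMode.TOUCH),
)

// Compare the current scene with the stored scene information; the matching
// entry is the target scene information, and its modes are returned.
fun modesFor(currentScene: String): List<InteractionMode> =
    correspondence[currentScene].orEmpty()
```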
Optionally, the interaction mode includes, but is not limited to, at least one of: a driving mode, a motion mode, a touch mode, a voice mode, an elderly mode, a child mode, and a safety mode.
The driving mode is an interaction mode used while the user of the mobile terminal is driving, the motion mode is an interaction mode used while the user is exercising, the touch mode is an interaction mode in which the mobile terminal enables touch screen control, the voice mode is an interaction mode in which the mobile terminal enables voice interaction, the elderly mode is an interaction mode used when the user of the mobile terminal is an elderly person, the child mode is an interaction mode used when the user of the mobile terminal is a child, and the safety mode is an interaction mode used in scenes with security requirements. For example, the driving mode may run voice and air gesture interaction, and the elderly mode may run voice interaction, etc.
Illustratively, the mobile terminal learns that a user is accustomed to running the voice and air gesture interaction modes while driving, so a correspondence between the driving scene and the driving mode is established, and when the current scene information is detected to be the driving scene, the driving mode corresponding to it is determined.
S202, if the current operation information of the mobile terminal meets a second preset condition, running the interaction mode.
If the current operation information of the mobile terminal meets the second preset condition, the mobile terminal runs the interaction mode, in which it can receive and respond to instructions issued by the user.
Optionally, the current operation information includes, but is not limited to, at least one of: a touch operation, an air gesture operation, and a voice operation, each of which matches an operation type of the interaction mode. The air gesture operation, which may also be called a contactless gesture operation, controls the mobile terminal through gesture motions made without touching it.
In this embodiment, the current scene information of the scene where the mobile terminal is currently located is detected; if it meets a first preset condition, the interaction mode corresponding to the current scene information is determined, and if the current operation information of the mobile terminal meets a second preset condition, the interaction mode is run. By judging whether the current scene information and the current operation information meet the preset conditions, the interaction mode is automatically determined and run without requiring active operation by the user, which makes it convenient for the user to control the mobile terminal and improves the user experience. At the same time, the corresponding interaction mode is started only when the user enters the current scene, which avoids the increased energy consumption caused by keeping multiple interaction modes on for long periods in different scenes and reduces the energy consumption of the mobile terminal.
Fig. 3 is a flowchart illustrating another method for operating a mobile terminal according to an embodiment of the present application, based on the embodiment shown in fig. 2. As shown in fig. 3, determining, in step S201, the interaction mode corresponding to the current scene information according to the correspondence between the scene information and the interaction mode includes S2011 to S2014:
S2011, determining the interaction mode to be selected corresponding to the current scene information according to the correspondence between the scene information and the interaction mode.
Scene information has multiple categories, and the current scene information may include one or more of them. In addition, one piece of scene information in the correspondence may correspond to multiple interaction modes, so one or more interaction modes to be selected may be determined according to the current scene information and the correspondence between the scene information and the interaction mode.
S2012, judging whether the number of the interaction modes to be selected is more than one.
If the determined number of the interaction modes to be selected is one, step S2013 is executed; and/or, if the number of the interaction modes to be selected is at least two, step S2014 is executed.
S2013, the interaction mode corresponding to the current scene information is the interaction mode to be selected.
If the number of the interaction modes to be selected is one, the interaction mode corresponding to the current scene information is that interaction mode to be selected. The current scene information is compared with the pre-acquired correspondence between the scene information and the interaction mode; a match may require the two to be completely consistent, or only that the stored scene information includes the current scene information. Target scene information is determined from the scene information included in the correspondence according to the comparison result, the interaction mode corresponding to the target scene information is determined to be the interaction mode corresponding to the current scene information, and the interaction mode is run.
S2014, determining the interaction mode corresponding to the current scene information according to the priorities of the at least two interaction modes to be selected.
If the number of the interaction modes to be selected is at least two, the interaction mode corresponding to the current scene information may be determined according to the priorities of the at least two interaction modes to be selected. Illustratively, the interaction mode corresponding to the current scene information may be determined to be the interaction mode with the highest priority among the interaction modes to be selected, or to be the interaction modes with the highest and second-highest priorities.
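A minimal sketch of the S2013/S2014 branching, assuming each interaction mode to be selected carries a numeric priority; the data model and the priority values are invented.

```kotlin
// Candidate selection per S2013/S2014: a single candidate is used directly;
// among several, the highest-priority candidate wins. Priorities are invented.
data class Candidate(val mode: String, val priority: Int)

fun selectMode(candidates: List<Candidate>): String? = when {
    candidates.isEmpty() -> null                         // no match in the correspondence
    candidates.size == 1 -> candidates.single().mode     // S2013
    else -> candidates.maxByOrNull { it.priority }!!.mode // S2014: highest priority
}
```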
In this embodiment, the interaction mode to be selected corresponding to the current scene information is determined according to the correspondence between the scene information and the interaction mode. If there is one interaction mode to be selected, the interaction mode corresponding to the current scene information is that interaction mode; and/or, if there are at least two, the interaction mode corresponding to the current scene information is determined according to the priorities of the at least two interaction modes to be selected. The interaction mode is thus determined according to the number of interaction modes to be selected, so that the determined interaction mode better meets the user's requirements and the user experience is improved.
Optionally, on the basis of the foregoing embodiment, before S2014, the method further includes the following steps:
determining the priority of the at least two interaction modes to be selected according to the priority of the scene information corresponding to the at least two interaction modes to be selected.
The priority of each piece of scene information may be preset; if the number of the interaction modes to be selected is at least two, the priorities of the at least two interaction modes to be selected are determined according to the priorities of the scene information corresponding to them.
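Under this rule, each candidate mode would inherit the priority of the scene information that produced it; the sketch below illustrates one such reading, with invented scene-information categories and priority values.

```kotlin
// Hypothetical: priorities are attached to scene-information categories, and
// each candidate mode inherits the priority of the scene information that
// selected it. All values are invented.
val sceneInfoPriority = mapOf("user" to 3, "environment" to 2, "time" to 1)

data class CandidateMode(val mode: String, val sourceSceneInfo: String)

fun priorityOf(c: CandidateMode): Int = sceneInfoPriority[c.sourceSceneInfo] ?: 0

// Pick the candidate whose originating scene information has the highest priority.
fun pick(candidates: List<CandidateMode>): CandidateMode? =
    candidates.maxByOrNull(::priorityOf)
```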
The method of the present application is described below for the case where the current scene information includes multiple types of information: current time information, current environment information, current user information, and current terminal information.
In a possible implementation manner, each type of scene information may be compared with the pre-established correspondence between the scene information and the interaction mode, and target scene information is determined from the scene information included in the correspondence according to the comparison result, so that the interaction mode corresponding to the target scene information is determined to be the interaction mode corresponding to that type of scene information. A corresponding interaction mode can be determined from each type of scene information, and all interaction modes determined from the different types of current scene information can be run.
In another possible implementation manner, the correspondence may be compared according to the priorities of the different types of scene information. The type of scene information with the highest priority is first compared with the pre-established correspondence between the scene information and the interaction mode to obtain the matching entries, which may also include other types of scene information; the type of scene information with the next-highest priority is then compared with the entries that remain, and so on, until the interaction mode is finally determined and run. For example, suppose the priority order is current user information first, then current operating environment information, then current time information: when the current scene information contains information about the user using the mobile terminal, the interaction modes corresponding to that user information are screened out of the correspondence, then the corresponding interaction modes matching the current operating environment information are screened out of that result, and so on, until the interaction mode is finally selected and run.
In another possible implementation manner, priorities of the different types of scene information may be set, only the type of scene information with the highest priority in the current scene information is compared with the pre-established correspondence between the scene information and the interaction mode to obtain the interaction mode, and that interaction mode is run directly.
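The successive-filtering strategy above might be sketched as follows, with an invented entry structure in which a null field means the entry does not constrain that scene type; the field names and priority order are assumptions.

```kotlin
// Successive filtering of the correspondence by scene-information priority:
// user info first, then operating environment, then time. The data model is
// an invented illustration of the strategy described above.
data class Entry(val user: String?, val environment: String?, val time: String?, val mode: String)

fun screen(entries: List<Entry>, user: String?, env: String?, time: String?): List<Entry> {
    var pool = entries
    // Highest priority first: current user information.
    if (user != null) pool = pool.filter { it.user == null || it.user == user }
    // Then the current operating environment information.
    if (env != null) pool = pool.filter { it.environment == null || it.environment == env }
    // Lowest priority last: current time information.
    if (time != null) pool = pool.filter { it.time == null || it.time == time }
    return pool
}
```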
In another possible implementation manner, the correspondence between the pre-established scene information and the interaction mode may set the interaction mode corresponding to every scene to the voice mode, so that the voice mode is turned on whenever the mobile terminal is powered on. For example, because elderly users or users with motor impairments find interaction modes such as touch difficult, the voice mode makes the mobile terminal comparatively easy to control, so the voice mode can be set to be enabled from power-on for such users.
The following describes, by way of example, scenes in which the current scene information is of different types; it should be understood that the following embodiments do not limit the present application.
On the basis of the above embodiment, further, when the current scene information includes the current time information, S201 includes the following steps:
The current time information is compared with the scene information included in the correspondence.
Target scene information is determined from the scene information included in the correspondence according to the comparison result, where the current time belongs to the time range included in the target scene information.
The interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information.
In a possible implementation manner, candidate scene information whose scene information includes time information is determined from the obtained correspondence, and the current time information is compared with the candidate scene information; when the current time information belongs to a time range included in the candidate scene information, that candidate scene information is determined to be the target scene information, so that the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information.
In another possible implementation manner, the current time information is compared one by one with the scene information in the obtained correspondence; when the current time information belongs to a time range included in any scene information, that scene information is determined to be the target scene information, so that the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information.
Illustratively, suppose that in the correspondence between the scene information and the interaction mode, the time range 7:00-8:00, in which the user gets up and washes, corresponds to the voice mode, and the time range 9:00-12:00, in which the user works in the office, corresponds to the touch mode. When the current scene information is the current time information and the current time is 7:00, the current time is compared with the scene information included in the correspondence, and the comparison result is that 7:00 falls within the time range 7:00-8:00, whose corresponding interaction mode is the voice mode. The mobile terminal can therefore actively run the voice mode and receive the user's voice instructions. The mobile terminal may also decide whether to run the voice mode according to the current operation information of the user; when the current operation information is, for example, a touch operation, an interface asking whether to start the voice mode may be displayed in the form of a pop-up window or a jump interface. Fig. 4 is a schematic diagram of an interface prompting for input of operation information; as shown in fig. 4, the prompt pops up in a pop-up window 401, and the user can run the voice mode by clicking "yes" or decline to run it by clicking "no". For another example, when the current scene information is the current time information and the current time is 9:00, 9:00 is compared with the scene information included in the correspondence, and the comparison result is that 9:00 falls within the time range 9:00-12:00, whose corresponding interaction mode is the touch mode; the mobile terminal therefore starts responding in the touch mode and can receive the user's touch instructions.
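The 7:00-8:00 voice / 9:00-12:00 touch example above could be encoded as time-range rules like the following; the rule structure and helper names are assumptions made for illustration.

```kotlin
// The time-range example above as code: 7:00-8:00 maps to voice mode,
// 9:00-12:00 to touch mode. Names and structure are invented.
import java.time.LocalTime

data class TimeRule(val from: LocalTime, val until: LocalTime, val mode: String)

val rules = listOf(
    TimeRule(LocalTime.of(7, 0), LocalTime.of(8, 0), "voice"),
    TimeRule(LocalTime.of(9, 0), LocalTime.of(12, 0), "touch"),
)

// Target scene information: the rule whose time range contains the current time.
fun modeAt(now: LocalTime): String? =
    rules.firstOrNull { now >= it.from && now <= it.until }?.mode

fun main() {
    println(modeAt(LocalTime.of(7, 0)))  // prints "voice"
}
```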
In this embodiment, when the current scene information includes the current time information, the current time information is compared with the scene information included in the correspondence; target scene information is determined from the scene information included in the correspondence according to the comparison result, where the current time belongs to the time range included in the target scene information, and the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information. When the current scene information is the current time information, the target interaction mode is thus automatically determined and started, according to the current time information and the correspondence between the scene information and the interaction mode, without the user having to actively start it. This makes it convenient for the user to control the mobile terminal and improves the user experience; meanwhile, the corresponding interaction mode is started only when the user enters the current scene, which avoids the increased energy consumption caused by keeping multiple interaction modes on for long periods in different scenes and reduces the energy consumption of the mobile terminal.
On the basis of the above embodiment, further, when the current scene information includes the current environment information, S201 includes the steps of:
Comparing the current operating environment information with the scene information included in the corresponding relationship.
Determining target scene information from the scene information included in the corresponding relationship according to the comparison result, wherein the current operating environment information conforms to the operating environment included in the target scene information.
Determining the interaction mode corresponding to the target scene information as the target interaction mode corresponding to the current scene information.
In a possible implementation manner, candidate scene information whose scene information includes operating environment information is determined from the obtained corresponding relationship, the current operating environment information is compared with the candidate scene information, and when the current operating environment information conforms to the operating environment included in the candidate scene information, the candidate scene information is determined to be the target scene information, so that the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information.
In another possible implementation manner, the current operating environment information is compared with the scene information in the obtained corresponding relationship one by one, and when the current operating environment information belongs to an operating environment included in any scene information, the scene information is determined to be target scene information, so that the interaction mode corresponding to the target scene information is determined to be a target interaction mode corresponding to the current scene information.
For example, suppose the corresponding relationship between the scene information and the interaction mode contains the following entry: the inclination angle formed by the mobile terminal and the horizontal plane is detected, and when the inclination angle is within a certain range, the posture of the mobile terminal is determined to be a standing state; if in this posture the mobile terminal is playing a video and receives an incoming call, the interaction mode in the corresponding relationship is the air-separating operation mode. When the current scene information is the current operating environment information, the detected inclination angle, the playing of a video file and the received incoming call are compared with the scene information included in the corresponding relationship, and the comparison result is: the inclination angle is within the set range, the mobile terminal is playing a video file, and an incoming call is received, so the current scene information matches this scene information, whose corresponding interaction mode is the air-separating operation mode. The mobile terminal therefore starts the response of the air-separating operation mode and can receive the air-separating operation instructions of the user. A minimal code sketch of this matching follows.
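Illustratively, the following Kotlin sketch expresses such an operating-environment entry as a predicate. It is not part of the patent disclosure, and the tilt range of 60 to 90 degrees is an assumption, since the text only speaks of "a certain range".

```kotlin
enum class Mode { VOICE, TOUCH, AIR_GESTURE }

// Snapshot of the operating-environment signals mentioned in the text:
// device tilt, whether a video is playing, whether a call is incoming.
data class Environment(
    val tiltDegrees: Double,
    val playingVideo: Boolean,
    val incomingCall: Boolean,
)

// A scene entry pairs a predicate over the environment with a mode.
data class EnvScene(val matches: (Environment) -> Boolean, val mode: Mode)

fun targetMode(scenes: List<EnvScene>, env: Environment): Mode? =
    scenes.firstOrNull { it.matches(env) }?.mode

fun main() {
    val scenes = listOf(
        // "Standing" posture approximated as a 60-90 degree tilt (assumption).
        EnvScene({ e -> e.tiltDegrees in 60.0..90.0 && e.playingVideo && e.incomingCall },
                 Mode.AIR_GESTURE)
    )
    val env = Environment(tiltDegrees = 75.0, playingVideo = true, incomingCall = true)
    println(targetMode(scenes, env)) // AIR_GESTURE
}
```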
Optionally, determining that the interaction mode corresponding to the target scene information is the target interaction mode corresponding to the current scene information may include the following steps:
After the target scene information is determined, timing is started.
When the timed duration reaches a threshold value, if the current operating environment information of the mobile terminal has not changed, the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information.
In some cases a scene may be entered through a misoperation, that is, the user did not intend to enter it, for example when the user opens an APP by mistake. The mobile terminal would then detect the current scene information and start the corresponding interaction mode even though the user does not actually need it and, on noticing the mistake, will soon close the APP. Therefore, timing may be started after the target scene information is determined, and a threshold may be set for the timed duration; the threshold may be 1 minute or any other configured length. When the timed duration reaches the threshold, if the current operating environment information of the mobile terminal has not changed, the scene was not entered through a misoperation, and the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information. By setting the threshold and checking, when it is reached, that the operating environment information of the mobile terminal is unchanged, the terminal confirms that the user really entered the operating environment rather than misoperating, and only then starts the target interaction mode. The mobile terminal thus starts the target interaction mode more accurately and in better accordance with the user's situation, which improves the user experience, avoids starting unnecessary interaction modes on misoperation, and reduces energy consumption. A minimal sketch of this timing check follows.
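Illustratively, the following Kotlin sketch debounces mode activation as described above. It is not part of the patent disclosure; the class and method names are illustrative, and timestamps are passed in explicitly to keep the sketch runnable.

```kotlin
// Commits the target mode only if the operating environment is unchanged
// once the timing threshold has elapsed (1 minute in the text's example).
class ModeDebouncer(private val thresholdMillis: Long = 60_000) {
    private var pendingScene: String? = null
    private var startedAt: Long = 0

    // Called when the target scene information is first determined.
    fun startTiming(scene: String, nowMillis: Long) {
        pendingScene = scene
        startedAt = nowMillis
    }

    // Returns true (start the target interaction mode) only if the scene
    // observed now is still the pending one and the threshold has passed.
    fun shouldActivate(currentScene: String, nowMillis: Long): Boolean =
        pendingScene == currentScene && nowMillis - startedAt >= thresholdMillis
}

fun main() {
    val debouncer = ModeDebouncer()
    debouncer.startTiming("standing-video-call", nowMillis = 0)
    // User closed the mistakenly opened APP: scene changed, no activation.
    println(debouncer.shouldActivate("home-screen", nowMillis = 60_000))         // false
    // Scene unchanged after one minute: activate the target interaction mode.
    println(debouncer.shouldActivate("standing-video-call", nowMillis = 60_000)) // true
}
```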
In this embodiment, the current operating environment information is compared with the scene information included in the corresponding relationship, and the target scene information, namely the scene information whose operating environment the current operating environment information conforms to, is determined from the scene information included in the corresponding relationship according to the comparison result; the interaction mode corresponding to the target scene information is then determined to be the target interaction mode corresponding to the current scene information. When the current scene information is the operating environment information, the target interaction mode is determined and started automatically, according to the current operating environment information and the corresponding relationship between the scene information and the interaction mode, without the user having to start it actively. This makes it more convenient for the user to control the mobile terminal and improves the user experience; at the same time, because the corresponding interaction mode is started only when the user enters the current scene, the increase in energy consumption caused by keeping multiple interaction modes open for a long time in different scenes is avoided, and the energy consumption of the mobile terminal is reduced.
On the basis of the above embodiment, further, when the current scene information includes the current user information, S201 includes the following steps:
Comparing the current user information with the scene information included in the corresponding relationship.
Determining target scene information from the scene information included in the corresponding relationship according to the comparison result, wherein the current user information is the same as the user information included in the target scene information.
Determining the interaction mode corresponding to the target scene information as the target interaction mode corresponding to the current scene information.
In a possible implementation manner, candidate scene information whose scene information includes user information is determined from the obtained corresponding relationship, the current user information is compared with the candidate scene information, and when the current user information is the same as the user information included in the candidate scene information, the candidate scene information is determined to be the target scene information, so that the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information.
In another possible implementation manner, the current user information is compared with the scene information in the obtained corresponding relationship one by one, and when the current user information is the same as the user information included in any scene information, the scene information is determined to be the target scene information, so that the interaction mode corresponding to the target scene information is determined to be the target interaction mode corresponding to the current scene information.
For example, when the mobile terminal detects that a person approaches, it identifies the type of user picking up or approaching it. The user currently using the mobile terminal may be determined from the fingerprint information captured when the user unlocks the terminal, or by face recognition, which may also determine the age group of the current user. In the corresponding relationship between the scene information and the interaction mode, an interaction mode can be set for a specific user of the mobile terminal, and when that user is actually detected, the scene information containing the user information is matched as the target scene information. For groups such as children or the elderly, the corresponding target interaction mode can be started when the detected age of the user belongs to a set age range. For example, when the age of the user currently using the mobile terminal is detected, it is compared with the scene information included in the corresponding relationship, and the comparison result is: the age of the user belongs to a preset age range for children, and the interaction mode corresponding to that scene information is the voice mode, so the mobile terminal starts the response of the voice mode and can receive the voice instructions of the user. A minimal sketch of this age-range matching follows.
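Illustratively, the following Kotlin sketch matches the detected user age against age-range scene entries. It is not part of the patent disclosure, and the child range of 3 to 12 years is an assumption, since the text only speaks of a preset age range.

```kotlin
enum class Mode { VOICE, TOUCH }

// Age-range entries of the scene-information / interaction-mode
// correspondence; the concrete ranges below are assumptions.
data class AgeScene(val ages: IntRange, val mode: Mode)

// Return the mode of the first entry whose age range contains the
// detected age of the current user, or null when none matches.
fun modeForUser(scenes: List<AgeScene>, age: Int): Mode? =
    scenes.firstOrNull { age in it.ages }?.mode

fun main() {
    val scenes = listOf(
        AgeScene(3..12, Mode.VOICE),   // child detected: voice mode
        AgeScene(13..120, Mode.TOUCH), // everyone else: touch mode
    )
    println(modeForUser(scenes, 8))  // VOICE
    println(modeForUser(scenes, 35)) // TOUCH
}
```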
In this embodiment, when the current scene information includes the current user information, the current user information is compared with the scene information included in the corresponding relationship, and the target scene information, namely the scene information whose user information is the same as the current user information, is determined from the scene information included in the corresponding relationship according to the comparison result; the interaction mode corresponding to the target scene information is then determined to be the target interaction mode corresponding to the current scene information. When the current scene information is the current user information, the target interaction mode is determined and started automatically, according to the current user information and the corresponding relationship between the scene information and the interaction mode, without the user having to start it actively. This makes it more convenient for the user to control the mobile terminal and improves the user experience; at the same time, because the corresponding interaction mode is started only when the user enters the current scene, the increase in energy consumption caused by keeping multiple interaction modes open for a long time in different scenes is avoided, and the energy consumption of the mobile terminal is reduced.
Optionally, on the basis of the foregoing embodiment, further in this embodiment, the correspondence between the scene information and the interaction mode includes, but is not limited to, at least one of the following:
the corresponding relation between the factory-set scene information of the mobile terminal and the interaction mode;
determining the corresponding relation between the scene information and the interaction mode according to the setting operation;
the method comprises the following steps of carrying out deep learning on historical scene information to obtain a corresponding relation between the scene information and an interaction mode, wherein the historical scene information comprises at least one of the following: operation mode, operation time, operation environment, terminal parameters and interaction mode.
The corresponding relationship between the scene information and the interaction mode may be factory-set in the mobile terminal, may be set in advance through a setting operation, may be determined by the mobile terminal from historical scene information, that is, from the usage habit data of the user, or may be determined by a combination of the three. In the last case, the mobile terminal performs deep learning on the historical scene information to form the corresponding relationship, the historical scene information being the user's scene-related settings and usage of the mobile terminal collected over a period of time.
Determining the corresponding relationship between the scene information and the interaction mode according to a setting operation means that the user can set the corresponding relationship on the mobile terminal in advance according to personal usage habits. This is described below using the example of a user setting corresponding operation modes for different times. Fig. 5 is a schematic diagram of a user setting interface; as shown in fig. 5, after entering the scene interaction mode settings, the configured scene interaction modes are displayed in area 501, where the user can see the scene information and the corresponding interaction modes. A setting item can be opened or closed by clicking the button on the right side of that item in area 501; the button in the figure is only schematic, and the function may also be realized in other ways, which the present application does not limit. Clicking the area of a setting item in 501 jumps to or pops up a modification page for that item. A setting item can be added by clicking button 502 to jump to or pop up an adding page; button 502 in the figure is likewise only schematic, and the function may also be realized in other ways, which the present application does not limit. A data model backing such a settings screen is sketched below.
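Illustratively, the following Kotlin sketch shows one possible backing model for the settings screen of fig. 5. It is not part of the patent disclosure, and all names (SettingItem, SceneModeSettings) are illustrative.

```kotlin
// One row of the settings list in area 501: a scene/mode pair with an
// on-off switch.
data class SettingItem(var scene: String, var mode: String, var enabled: Boolean = true)

class SceneModeSettings {
    private val items = mutableListOf<SettingItem>()

    // Button 502: add a new setting item.
    fun add(scene: String, mode: String) = items.add(SettingItem(scene, mode))

    // Right-side button of a row in area 501: open or close the item.
    fun toggle(index: Int) { items[index].enabled = !items[index].enabled }

    // Clicking a row in 501: modify the item on the modification page.
    fun modify(index: Int, scene: String, mode: String) {
        items[index].scene = scene
        items[index].mode = mode
    }

    // Only opened rows take part in scene matching.
    fun active(): List<SettingItem> = items.filter { it.enabled }
}

fun main() {
    val settings = SceneModeSettings()
    settings.add("07:00-08:00", "voice")
    settings.add("09:00-12:00", "touch")
    settings.toggle(1) // the user switches the office entry off
    println(settings.active()) // [SettingItem(scene=07:00-08:00, mode=voice, enabled=true)]
}
```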
Determining the scene information and the interaction mode according to the usage habit data of the user works as follows. The usage habit data records the interaction modes used by the user in different scenes; this data is collected in advance, the collected data is learned, and the corresponding relationship between the scene information and the interaction mode is established from it. Illustratively, the scene information and interaction mode may be determined as follows: when an instruction from the user to open one or more interaction modes is detected, the mobile terminal starts to collect its scene information; when a change in the scene information is detected, a usage-habit correspondence is established between the changed scene information and the interaction mode or modes opened by the user, and the established correspondence is stored as the usage habit data of the user. Each recorded correspondence may be stored directly as a corresponding relationship between scene information and interaction mode, or a storage threshold may be set, so that a recorded usage-habit correspondence is stored as a corresponding relationship between scene information and interaction mode only after it has been recorded more times than the threshold. For example, with the storage threshold set to 3: the user opens the voice interaction mode while washing up between 7:00 and 8:00, so a correspondence between the time range 7:00-8:00 and the voice interaction mode is established; when this correspondence has been recorded more than 3 times, it indicates that the user habitually uses the voice interaction mode between 7:00 and 8:00, so the time range 7:00-8:00 and the voice interaction mode are stored as a corresponding relationship between scene information and interaction mode. Thereafter, the mobile terminal can automatically switch to the voice interaction mode whenever it detects that the current time is within the 7:00-8:00 period. A minimal sketch of this threshold rule follows.
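Illustratively, the following Kotlin sketch implements the storage-threshold rule described above. It is not part of the patent disclosure, and the class name HabitLearner and the string-keyed tables are illustrative simplifications.

```kotlin
// Promotes a (scene, mode) pairing into the stored correspondence only
// after it has been recorded more times than the storage threshold (3 in
// the text's example).
class HabitLearner(private val storageThreshold: Int = 3) {
    private val counts = mutableMapOf<Pair<String, String>, Int>()
    val correspondence = mutableMapOf<String, String>()

    // Called each time the user opens a mode in an observed scene.
    fun observe(scene: String, mode: String) {
        val key = scene to mode
        val n = (counts[key] ?: 0) + 1
        counts[key] = n
        if (n > storageThreshold) correspondence[scene] = mode
    }
}

fun main() {
    val learner = HabitLearner()
    repeat(4) { learner.observe("07:00-08:00", "voice") } // 4th record exceeds 3
    println(learner.correspondence) // {07:00-08:00=voice}
}
```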
The recorded usage habit data of the user grows as the user uses the mobile terminal. By recording the usage habit data and establishing the corresponding relationship between the scene information and the interaction mode in advance, the mobile terminal continuously learns the usage habits of the user; when the user has not opened an interaction mode, the mobile terminal can automatically open the corresponding interaction mode according to the corresponding relationship established from the learned usage habit data, which improves the user experience.
In this embodiment, the corresponding relationship between the scene information and the interaction mode is determined according to the setting operation and/or the historical scene information, so that the determined corresponding relationship better matches the actual needs of the user; the target interaction mode started by the method of the present application is therefore more intelligent, which improves the user experience.
optionally, on the basis of any one of the foregoing embodiments, further, after S202, the method for operating a mobile terminal of this embodiment further includes the following steps:
If the scene where the mobile terminal is currently located changes, the interaction mode is closed.
If the scene where the mobile terminal is currently located changes, the mobile terminal has entered another scene, and the interaction mode opened for the scene before the change may not be suitable for the current scene, so that interaction mode may be closed. Optionally, the mobile terminal may then perform matching detection on the current scene through the method provided in the embodiments of the present application and determine the interaction mode that currently needs to run; a sketch of this close-and-rematch behavior follows.
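Illustratively, the following Kotlin sketch closes the old mode and re-runs the matching when a scene change is detected. It is not part of the patent disclosure, and the scene strings and the mode table are illustrative.

```kotlin
// Closes the mode opened for the previous scene when the scene changes,
// then re-runs matching against the correspondence table for the new scene.
class ModeController(private val table: Map<String, String>) {
    private var currentScene: String? = null
    var runningMode: String? = null
        private set

    fun onSceneDetected(scene: String) {
        if (scene == currentScene) return // no change: keep the running mode
        runningMode = null                // close the previous scene's mode
        runningMode = table[scene]        // match and open the new one, if any
        currentScene = scene
    }
}

fun main() {
    val controller = ModeController(mapOf("driving" to "voice", "office" to "touch"))
    controller.onSceneDetected("driving")
    println(controller.runningMode) // voice
    controller.onSceneDetected("office")
    println(controller.runningMode) // touch: voice was closed on the scene change
}
```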
In this embodiment, when the current scene of the mobile terminal changes, the interaction mode opened for the previous scene is closed, which saves energy and improves the user experience; the corresponding interaction mode is then opened more intelligently when the user enters the new scene, so the increase in energy consumption caused by keeping multiple interaction modes open for a long time in different scenes is avoided, and the energy consumption of the mobile terminal is reduced.
Fig. 6 is a schematic flowchart of another method for operating a mobile terminal according to the present application. As shown in fig. 6, the method of this embodiment is executed by the mobile terminal, which may be a mobile phone, a tablet device, a personal computer, and the like; the present application is not limited in this respect. Concepts that are the same as in the foregoing embodiments are not repeated here. The method of this embodiment includes:
S601, if the acquired current operation information for the mobile terminal meets a second preset condition, acquiring current scene information.
When the mobile terminal receives preset operation information that can trigger the acquisition of current scene information, it considers that the current operation information meets the second preset condition; that is, if the current operation information of the mobile terminal meets the second preset condition, the current scene information is acquired.
Optionally, the current operation information includes at least one of the following: touch operation, air-separating operation and voice operation, each of which satisfies the operation type of its interaction mode. The air-separating operation may also be called an air-separating gesture operation, that is, a gesture motion made without contacting the mobile terminal is used to control the mobile terminal.
Optionally, the current operation information is a control operation on the mobile terminal, where the control operation includes at least one of the following: waking up, lighting up and controlling.
Waking up means switching the mobile terminal from a standby state into a state in which it can receive operation instructions; lighting up means switching the mobile terminal from a dark screen to a bright screen; controlling means operating the mobile terminal through various different operations.
S602, determining an interaction mode according to the current operation information and/or the current scene information.
The interaction mode may be determined according to the current operation information alone, according to the current scene information alone, or according to the current operation information and the current scene information together.
Optionally, S602 includes:
if the first interaction mode corresponding to the current operation information is the same as the second interaction mode corresponding to the current scene information, determining that the interaction mode is the first interaction mode; or,
if the first interaction mode corresponding to the current operation information is different from the second interaction mode corresponding to the current scene information, determining the interaction mode according to a preset rule.
Optionally, the interaction mode is determined according to a preset rule, and includes at least one of the following:
determining an interaction mode according to the priority of a first interaction mode corresponding to the current operation information and the priority of a second interaction mode corresponding to the current scene information;
and determining an interaction mode according to the priority of the current operation information and the current scene information.
If the first interaction mode corresponding to the current operation information is different from the second interaction mode corresponding to the current scene information, the first interaction mode and the second interaction mode may first be determined and their priorities compared, the interaction mode being the one with the higher priority. Alternatively, the priority of the current operation information relative to the current scene information may be determined first, and the interaction mode corresponding to the information with the higher priority is determined to be the final interaction mode. A sketch of the first strategy follows.
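Illustratively, the following Kotlin sketch resolves the conflict using mode priorities. It is not part of the patent disclosure, and the convention that a smaller number means a higher priority is an assumption.

```kotlin
// A candidate interaction mode together with its priority.
data class Candidate(val mode: String, val priority: Int)

// Identical modes win outright; otherwise the higher-priority mode
// (smaller number, by assumption) is chosen.
fun resolve(fromOperation: Candidate, fromScene: Candidate): String = when {
    fromOperation.mode == fromScene.mode -> fromOperation.mode
    fromOperation.priority <= fromScene.priority -> fromOperation.mode
    else -> fromScene.mode
}

fun main() {
    println(resolve(Candidate("voice", 1), Candidate("voice", 2))) // voice (same mode)
    println(resolve(Candidate("touch", 2), Candidate("voice", 1))) // voice (higher priority)
}
```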
S603, operating the interaction mode.
In this embodiment, by detecting the current operation information of the mobile terminal, if the obtained current operation information for the mobile terminal meets the second preset condition, the current scene information is obtained. And determining an interaction mode according to the current operation information and/or the current scene information, and operating the interaction mode. According to the method, the interaction mode is automatically determined and operated under the condition that the user does not need to actively operate according to whether the current scene information and the current operation information meet the preset conditions, the user can conveniently control the mobile terminal, the user experience is improved, meanwhile, the corresponding interaction mode is started when the user enters the current scene and/or executes the current operation information, the problem of energy consumption increase caused by the fact that various interaction modes are started for a long time in different scenes is avoided, and the energy consumption of the mobile terminal is reduced.
Fig. 7 is a schematic structural diagram of an operating device of a mobile terminal according to an embodiment of the present application, and as shown in fig. 7, the operating device according to the embodiment includes:
a determining module 701, configured to determine, if current scene information of the mobile terminal meets a first preset condition, an interaction mode corresponding to the current scene information;
an operation module 702, configured to operate the interaction mode if the current operation information of the mobile terminal meets a second preset condition.
Optionally, the current scene information includes at least one of the following: current time information, current environment information, current user information and current terminal information;
Optionally,
current time information including at least one of: daily time, operating time;
current context information, including at least one of: ambient sound, ambient brightness, movement information;
current user information, including at least one of: user type, distance information, fingerprint information, face information;
current terminal information, including at least one of: terminal parameters, operational information, and attitude information.
Optionally, the current scene information satisfies a first preset condition, and includes at least one of the following:
the daily time is within a preset time period;
the operation time is within the operation time interval of the historical record;
the environmental sound is in a preset noise value interval;
the ambient brightness is within a preset brightness value interval;
the mobile information is in a preset mobile speed interval;
the user type is a preset user type;
the distance information is in a preset human-computer distance interval;
the operation information is preset operation information;
the attitude information is a preset terminal attitude.
Optionally, the interaction mode includes at least one of: a driving mode, a motion mode, a touch mode, a voice mode, an elderly mode, a child mode, and a safety mode.
Optionally, the current operation information includes at least one of the following: touch control operation, air separation operation and voice operation; the touch operation meets the operation type of the interaction mode, the air-separating operation meets the operation type of the interaction mode, and the voice operation meets the operation type of the interaction mode.
Optionally, the determining module 701 is specifically configured to:
determining an interaction mode corresponding to the current scene information according to the corresponding relation between the scene information and the interaction mode; or,
determining an interaction mode corresponding to the current scene information according to a preset rule.
Optionally, the correspondence between the scene information and the interaction mode includes at least one of the following:
the corresponding relation between the factory-set scene information of the mobile terminal and the interaction mode;
determining the corresponding relation between the scene information and the interaction mode according to the setting operation;
the method comprises the following steps of carrying out deep learning on historical scene information to obtain a corresponding relation between the scene information and an interaction mode, wherein the historical scene information comprises at least one of the following: operation mode, operation time, operation environment, terminal parameters and interaction mode.
Optionally, the determining module 701 is specifically configured to:
determining a to-be-selected interaction mode corresponding to the current scene information according to the corresponding relation between the scene information and the interaction mode;
if the number of the interaction modes to be selected is one, the interaction mode corresponding to the current scene information is the interaction mode to be selected; and/or,
if the number of the interaction modes to be selected is at least two, determining the interaction mode corresponding to the current scene information according to the priority of the at least two interaction modes to be selected.
Optionally, the determining module is further configured to: and determining the priority of the at least two interaction modes to be selected according to the priority of the scene information corresponding to the at least two interaction modes to be selected.
Optionally, determining the interaction mode corresponding to the current scene information according to the priorities of the at least two interaction modes to be selected includes:
determining a to-be-selected interaction mode corresponding to the current operation information, and controlling the mobile terminal to operate the corresponding to-be-selected interaction mode; or,
if the to-be-selected interaction mode corresponding to the current operation information does not exist, controlling the mobile terminal to operate a default interaction mode or operate an interaction mode corresponding to the operation information.
Optionally, the apparatus further comprises:
an operation module, configured to cause the mobile terminal to operate the current interaction mode, or the interaction mode corresponding to the current scene information, if the preset operation is not received within a preset duration.
Fig. 8 is a schematic structural diagram of another control device of a mobile terminal provided in the present application, and as shown in fig. 8, the device provided in this embodiment includes:
an obtaining module 801, configured to obtain current scene information if the obtained current operation information for the mobile terminal meets a second preset condition;
a determining module 802, configured to determine an interaction mode according to current operation information and/or current scene information;
an operation module 803 is used for operating the interaction mode.
Optionally, the current operation information includes at least one of the following: touch control operation, air separation operation and voice operation;
the second preset condition includes at least one of:
the operation information is a control operation on the mobile terminal, wherein the control operation includes at least one of the following: waking up, lighting up and controlling.
Optionally, the determining module 802 is specifically configured to:
if the first interaction mode corresponding to the current operation information is the same as the second interaction mode corresponding to the current scene information, determining that the interaction mode is the first interaction mode; or,
if the first interaction mode corresponding to the current operation information is different from the second interaction mode corresponding to the current scene information, determining the interaction mode according to a preset rule.
Optionally, the interaction mode is determined according to a preset rule, and includes at least one of the following:
determining an interaction mode according to the priority of the interaction mode corresponding to the current operation information and the interaction mode corresponding to the current scene information;
and determining an interaction mode according to the priority of the current operation information and the current scene information.
The apparatus of the foregoing embodiment may be configured to implement the technical solution of the foregoing method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application, and as shown in fig. 9, the mobile terminal according to the embodiment includes: a memory 902 and a processor 901.
A memory 902 for storing instructions executable by the processor 901.
A processor 901, configured to implement the method for operating the mobile terminal according to the embodiment shown in fig. 2 or fig. 3 when the executable instructions are executed.
Fig. 10 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application, and as shown in fig. 10, the mobile terminal according to the embodiment includes: a memory 102 and a processor 101.
A memory 102 for storing instructions executable by the processor 101.
The processor 101 is configured to implement the method for operating the mobile terminal according to the embodiment shown in fig. 6 when the executable instructions are executed.
The apparatus of the foregoing embodiment may be configured to implement the technical solution of the foregoing method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
The present application further provides a terminal, the terminal including: a memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method as described above.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as described above.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (17)

1. A method for controlling a mobile terminal, comprising:
if the current scene information of the mobile terminal meets a first preset condition, determining an interaction mode corresponding to the current scene information;
and if the current operation information of the mobile terminal meets a second preset condition, operating the interactive mode.
2. The method of claim 1, wherein the current scene information comprises at least one of: current time information, current environment information, current user information and current terminal information;
the current time information comprises at least one of the following: daily time, operating time;
the current environment information includes at least one of: ambient sound, ambient brightness, movement information;
the current user information comprises at least one of the following: user type, distance information, fingerprint information, face information;
the current terminal information includes at least one of the following: terminal parameters, operational information, and attitude information.
3. The method according to claim 2, wherein the current scene information satisfies a first preset condition, and includes at least one of:
the daily time is within a preset time period;
the operation time is within the operation time interval of the historical record;
the environmental sound is in a preset noise value interval;
the environment brightness is within a preset brightness value interval;
the movement information is in a preset movement speed interval;
the user type is a preset user type;
the distance information is in a preset human-computer distance interval;
the operation information is preset operation information;
the attitude information is a preset terminal attitude.
4. The method of claim 1, wherein the interaction mode comprises at least one of: a driving mode, a motion mode, a touch mode, a voice mode, an elderly mode, a child mode, and a safety mode.
5. The method of claim 1, wherein the current operation information comprises at least one of: touch control operation, air separation operation and voice operation; the touch operation meets the operation type of the interaction mode, the air-separating operation meets the operation type of the interaction mode, and the voice operation meets the operation type of the interaction mode.
6. The method according to any one of claims 1 to 5, wherein the determining the interaction mode corresponding to the current scene information includes:
determining an interaction mode corresponding to the current scene information according to the corresponding relation between the scene information and the interaction mode; or,
determining an interaction mode corresponding to the current scene information according to a preset rule.
7. The method of claim 6, wherein the correspondence between the scene information and the interaction mode comprises at least one of:
the corresponding relation between the scene information set by the mobile terminal from the factory and the interaction mode;
determining the corresponding relation between the scene information and the interaction mode according to the setting operation;
the method comprises the following steps of carrying out deep learning on historical scene information to obtain a corresponding relation between the scene information and an interaction mode, wherein the historical scene information comprises at least one of the following: operation mode, operation time, operation environment, terminal parameters and interaction mode.
8. The method according to claim 6, wherein the determining the interaction mode corresponding to the current scene information according to the correspondence between the scene information and the interaction mode comprises:
determining an interaction mode to be selected corresponding to the current scene information according to the corresponding relation between the scene information and the interaction mode;
if the number of the interaction modes to be selected is one, the interaction mode corresponding to the current scene information is the interaction mode to be selected; and/or,
if the number of the interaction modes to be selected is at least two, determining the interaction mode corresponding to the current scene information according to the priority of the at least two interaction modes to be selected.
9. The method according to claim 8, wherein before determining the interaction mode corresponding to the current scene information according to the priority of the at least two interaction modes to be selected, the method further comprises:
and determining the priority of the at least two interaction modes to be selected according to the priority of the scene information corresponding to the at least two interaction modes to be selected.
10. The method according to claim 8, wherein the determining the interaction mode corresponding to the current scene information according to the priorities of the at least two interaction modes to be selected comprises:
determining the interaction mode to be selected corresponding to the current operation information, and controlling the mobile terminal to operate the corresponding interaction mode to be selected; or,
if the interaction mode to be selected corresponding to the current operation information does not exist, controlling the mobile terminal to run a default interaction mode or run an interaction mode corresponding to the operation information.
11. The method according to any one of claims 1 to 5, wherein, after the running of the interaction mode if the mobile terminal receives a preset operation, the method further comprises:
if the preset operation is not received within a preset duration, operating, by the mobile terminal, the current interaction mode, or operating, by the mobile terminal, the interaction mode corresponding to the current scene information.
12. A method for controlling a mobile terminal, comprising:
if the acquired current operation information for the mobile terminal meets a second preset condition, acquiring current scene information;
determining an interaction mode according to the current operation information and/or the current scene information;
and operating the interaction mode.
13. The method of claim 12, wherein the current operation information comprises at least one of: touch control operation, air separation operation and voice operation;
the current operation information is a control operation on the mobile terminal, wherein the control operation includes at least one of the following: waking up, lighting up and controlling.
14. The method according to claim 13, wherein the determining an interaction mode according to the current operation information and/or the current scene information comprises:
if the first interaction mode corresponding to the current operation information is the same as the second interaction mode corresponding to the current scene information, determining that the interaction mode is the first interaction mode; or,
if the first interaction mode corresponding to the current operation information is different from the second interaction mode corresponding to the current scene information, determining the interaction mode according to a preset rule.
15. The method according to claim 14, wherein the determining the interaction mode according to the preset rule comprises at least one of:
determining the interaction mode according to the priority of a first interaction mode corresponding to the current operation information and the priority of a second interaction mode corresponding to the current scene information;
and determining the interaction mode according to the priority of the current operation information and the current scene information.
16. A mobile terminal, comprising: a memory and a processor;
a memory for storing a computer program executable by the processor;
a processor for implementing the method of handling a mobile terminal as claimed in any one of claims 1 to 11 or 12 to 15 when executing the computer program.
17. A computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the manipulation method of the mobile terminal according to any one of claims 1 to 11 or 12 to 15.
CN202010773263.1A 2020-08-04 2020-08-04 Mobile terminal control method, mobile terminal and storage medium Active CN111897477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010773263.1A CN111897477B (en) 2020-08-04 2020-08-04 Mobile terminal control method, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111897477A true CN111897477A (en) 2020-11-06
CN111897477B CN111897477B (en) 2022-06-17

Family

ID=73183338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010773263.1A Active CN111897477B (en) 2020-08-04 2020-08-04 Mobile terminal control method, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111897477B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115079822A (en) * 2022-05-31 2022-09-20 荣耀终端有限公司 Air-spaced gesture interaction method and device, electronic chip and electronic equipment
WO2023005362A1 (en) * 2021-07-30 2023-02-02 深圳传音控股股份有限公司 Processing method, processing device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793057A (en) * 2014-01-26 2014-05-14 华为终端有限公司 Information processing method, device and equipment
US20170116784A1 (en) * 2015-10-21 2017-04-27 International Business Machines Corporation Interacting with data fields on a page using augmented reality
CN108491067A (en) * 2018-02-07 2018-09-04 深圳还是威健康科技有限公司 Intelligent fan control method, intelligent fan and computer readable storage medium
CN110109596A (en) * 2019-05-08 2019-08-09 芋头科技(杭州)有限公司 Recommended method, device and the controller and medium of interactive mode
CN110308800A (en) * 2019-06-24 2019-10-08 北京百度网讯科技有限公司 Switching method, device, system and the storage medium of input mode


Also Published As

Publication number Publication date
CN111897477B (en) 2022-06-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant