CN114020190A - Terminal control method, intelligent terminal and storage medium

Info

Publication number
CN114020190A
Authority
CN
China
Prior art keywords
content information
target
determining
user intention
information
Prior art date
Legal status
Pending
Application number
CN202111359469.0A
Other languages
Chinese (zh)
Inventor
Kang Hongxia (康红霞)
Current Assignee
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Chuanying Information Technology Co Ltd filed Critical Shanghai Chuanying Information Technology Co Ltd
Priority to CN202111359469.0A
Publication of CN114020190A
Status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1407General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a terminal control method, an intelligent terminal and a storage medium. The terminal control method comprises the following steps: when a target control instruction is received, determining a target area corresponding to the target control instruction; acquiring content information corresponding to the target area, and determining a target shortcut entry associated with the content information; and displaying the target shortcut entry. Because the content information is obtained from the target area corresponding to the target control instruction, the corresponding target shortcut entry is automatically updated and displayed whenever the content information changes, which improves the user experience.

Description

Terminal control method, intelligent terminal and storage medium
Technical Field
The application relates to the technical field of intelligent terminals, in particular to a terminal control method, an intelligent terminal and a storage medium.
Background
During use of an intelligent terminal, the shortcut entry associated with each display page of the intelligent terminal is fixed.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: if the application function corresponding to the shortcut entry cannot meet the current user requirement, it can only be changed and set manually, and the manual change procedure is cumbersome, which degrades the user experience.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides a terminal control method, an intelligent terminal and a storage medium, so that the corresponding target shortcut entry can be automatically updated and displayed according to changes in the content information of a display page of the intelligent terminal, with simple and convenient operation.
In order to solve the above technical problem, the present application provides a terminal control method, applicable to an intelligent terminal, comprising:
when a target control instruction is received, determining a target area corresponding to the target control instruction;
acquiring content information corresponding to the target area, and determining a target shortcut entry associated with the content information; and
displaying the target shortcut entry.
Optionally, the step of acquiring content information corresponding to the target area and determining a target shortcut entry associated with the content information comprises:
acquiring content information corresponding to the target area, and determining the attribute corresponding to the content information; and
determining a target shortcut entry according to the attribute and the content information.
Optionally, the step of determining a target shortcut entry according to the attribute and the content information comprises:
determining a processing mode corresponding to the attribute;
processing the content information based on the processing mode to obtain a user intention; and
determining a target shortcut entry according to the user intention.
Optionally, the attributes include images, text, and/or files.
Optionally, when the attribute is an image, the processing mode corresponding to the image comprises at least one of image content classification processing, image description processing, and image character extraction processing; when the attribute is text, the processing mode corresponding to the text comprises at least one of text translation processing and text-to-speech processing; and when the attribute is a file, the processing mode corresponding to the file comprises file classification processing.
Optionally, the step of processing the content information based on the processing mode to obtain the user intention comprises:
inputting the content information into a first preset neural network model for image content classification processing to obtain face image information in the content information, wherein optionally the first preset neural network model is trained on initial image content information with different image attributes; and
obtaining the user intention according to the face image information.
Optionally, the step of processing the content information based on the processing mode to obtain the user intention further comprises:
inputting the content information into a second preset neural network model for text translation processing to obtain voice information corresponding to the content information, wherein optionally the second preset neural network model is trained on initial text content information; and
obtaining the user intention according to the voice information.
Optionally, the step of processing the content information based on the processing mode to obtain the user intention further comprises:
inputting the content information into a third preset neural network model for file classification processing to obtain target file information in the content information, wherein optionally the third preset neural network model is trained on initial file content information with different file attributes; and
obtaining the user intention according to the target file information.
Optionally, the step of acquiring content information corresponding to the target area and determining a target shortcut entry associated with the content information comprises:
acquiring content information corresponding to the target area and sending the content information to a server, so that the server determines the user intention according to the content information and sends the user intention to the intelligent terminal; and
receiving the user intention, and determining a target shortcut entry according to the user intention.
Optionally, the user intention is determined based on the content information and/or the user's historical operation data.
Optionally, after the step of acquiring the content information corresponding to the target area and determining the target shortcut entry associated with the content information, the method further comprises:
when the application program corresponding to the target shortcut entry is not installed, executing at least one of the following operations:
automatically installing the application program corresponding to the target shortcut entry;
prompting to install the application program corresponding to the target shortcut entry; and
prompting that the application program corresponding to the target shortcut entry is not installed.
Optionally, after the application program corresponding to the target shortcut entry is installed, the target shortcut entry is displayed.
Optionally, the display mode of the target shortcut entry comprises at least one of:
displaying the target shortcut entry in a floating manner;
displaying the target shortcut entry on a menu bar; and
displaying target shortcut entries in a list form.
Optionally, the target control instruction comprises at least one of a voice control operation, an air slide operation, and a touch operation.
The application further provides an intelligent terminal, comprising a memory and a processor, wherein the memory stores a terminal control program which, when executed by the processor, implements the steps of any of the methods described above.
The present application also provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, performs the steps of the method according to any one of the above.
As described above, the terminal control method of the present application comprises the following steps: when a target control instruction is received, determining a target area corresponding to the target control instruction; acquiring content information corresponding to the target area, and determining a target shortcut entry associated with the content information; and displaying the target shortcut entry. Through this technical scheme, the corresponding target shortcut entry can be automatically updated and displayed according to changes in the display content of the display page of the intelligent terminal, which solves the problem that the target shortcut entry cannot be changed adaptively and improves the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of an intelligent terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a terminal control method according to a first embodiment;
fig. 4 is a diagram showing a multi-finger long-press interface on text content information in the terminal control method according to the first embodiment;
fig. 5 is a diagram showing a target shortcut entry interface in the terminal control method according to the first embodiment;
fig. 6 is a flowchart illustrating a terminal control method according to a second embodiment;
fig. 7 is a flowchart illustrating a terminal control method according to a third embodiment;
fig. 8 is a flowchart illustrating a terminal control method according to a fourth embodiment.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element by the phrase "comprising a(n) ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, where similarly named elements, features, or components in different embodiments of the disclosure may have the same meaning or different meanings, the particular meaning should be determined by their interpretation in the embodiment or by the further context of the embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or", "and/or", "including at least one of the following", and the like, as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and may be performed in other orders unless explicitly stated herein. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, in different orders, and may be performed alternately or at least partially with respect to other steps or sub-steps of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S10 and S20 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S20 first and then S10 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The smart terminal may be implemented in various forms. For example, the smart terminal described in the present application may include smart terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The following description will be given taking a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The various components of the mobile terminal are optionally described below in conjunction with fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and optionally, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), TDD-LTE (Time Division duplex-Long Term Evolution, Time Division Long Term Evolution), 5G, and so on.
WiFi is a short-range wireless transmission technology; through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still images or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor that may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present disclosure, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Optionally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB2021 may be connected with other enodebs 2022 through a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registers for managing functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems (e.g. 5G), and the like.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided.
First embodiment
Step S10, when a target control instruction is received, determining a target area corresponding to the target control instruction;
Step S20, acquiring content information corresponding to the target area, and determining a target shortcut entry associated with the content information; and
Step S30, displaying the target shortcut entry.
In this embodiment, referring to fig. 3, the display page of the intelligent terminal is entered; whether a target control instruction is received on the display page is determined; and when a target control instruction is received, the target area corresponding to the target control instruction on the display page is further determined. The content information in the target area is identified to determine the target shortcut entry associated with it, and the target shortcut entry is displayed on the display page. It can be understood that when the content information changes, the target shortcut entry corresponding to the content information also changes correspondingly, so that the target shortcut entry is automatically updated according to the content information, improving the user experience.
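The flow of steps S10-S30 can be sketched as follows. This is a minimal illustration only; every type, helper function, and the keyword table in it is a hypothetical stand-in, not part of the claimed method:

```python
# A minimal, self-contained sketch of steps S10-S30. All names here
# (Rect, KEYWORD_TO_ENTRY, the stub extraction) are hypothetical and
# only illustrate the control flow.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    width: int
    height: int

# Hypothetical mapping from recognized keywords to shortcut entries.
KEYWORD_TO_ENTRY = {"coupon": "payment_app", "push": "reader_app"}

def resolve_target_area(touch_points) -> Rect:
    # S10: bound the region covered by the multi-finger long press.
    xs = [p[0] for p in touch_points]
    ys = [p[1] for p in touch_points]
    return Rect(min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

def extract_content(area: Rect) -> str:
    # Stub: a real terminal would read the view hierarchy or OCR the area.
    return "consume coupon after filling in personal information"

def determine_shortcut_entry(content: str) -> str:
    # S20: pick the entry associated with the recognized content.
    for keyword, entry in KEYWORD_TO_ENTRY.items():
        if keyword in content:
            return entry
    return "default_entry"

def display_shortcut_entry(entry: str) -> None:
    # S30: stand-in for a floating window, menu-bar item, or list.
    print(f"showing shortcut entry: {entry}")

display_shortcut_entry(determine_shortcut_entry(extract_content(
    resolve_target_area([(100, 200), (140, 205), (180, 210)]))))
```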
In this embodiment, before the target control instruction is received, the display page of the intelligent terminal needs to be entered first. Optionally, the display page may be the local desktop, or the display page of any application on the intelligent terminal. Optionally, a text tool on the intelligent terminal, such as a notepad or short-message display page, can be opened; an image display tool on the intelligent terminal can be opened, such as the local gallery, the camera, or a photo-beautification application; and a resource management tool on the intelligent terminal, such as a file management tool, can be opened. Optionally, a control instruction for entering the display page may be set, such as at least one of touch control, gesture control, voice control, or air slide control; for example, the WeChat application can be opened by voice to enter its display page; the display page can be entered by a multi-finger long press; or the display page can be entered by drawing a question mark or a letter with two fingers.
In this embodiment, after the display page of the intelligent terminal is entered, whether a target control instruction is received on the display page is determined, and the target area corresponding to the target control instruction is determined. Optionally, the target control instruction may be preset and may be at least one of touch control, gesture control, voice control, and air slide control. Optionally, the touch control instruction may be at least one of a click, a double click, and a long press; the gesture of the air slide control instruction, as well as the relative position and distance between the intelligent terminal and the display screen when it is executed, can be preset according to the actual situation. It should be noted here that the target control instruction should be distinguished from the control instruction for entering the display page; that is, when the control instruction for entering the display page is a multi-finger long press, the target control instruction should be a control instruction other than a multi-finger long press. The target area can be any position on the display page. Optionally, a multi-finger long press may act on the content information of the display page, or on an area near the content information, as the target area.
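As an illustration of how such a target control instruction might be recognized, the following sketch detects a multi-finger long press; the finger count and hold-time thresholds are invented for the example and would in practice be preset according to the actual situation:

```python
# Hypothetical recognition of a multi-finger long press as the target
# control instruction; MIN_FINGERS and LONG_PRESS_SECONDS are assumed
# thresholds, not values from the application.
import time

MIN_FINGERS = 3
LONG_PRESS_SECONDS = 0.8

def is_multi_finger_long_press(touch_down_times: dict[int, float],
                               now: float | None = None) -> bool:
    """touch_down_times maps a pointer id to the time its finger landed."""
    now = time.monotonic() if now is None else now
    held = [t for t in touch_down_times.values()
            if now - t >= LONG_PRESS_SECONDS]
    return len(held) >= MIN_FINGERS

t0 = 100.0
print(is_multi_finger_long_press({0: t0, 1: t0, 2: t0}, now=t0 + 1.0))  # True
```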
In this embodiment, after the target area corresponding to the target control instruction is determined on the display page, the content information corresponding to the target area is acquired, and the target shortcut entry associated with the content information is determined. Optionally, the content information corresponding to the target area is acquired, the attribute corresponding to the content information is determined, and the target shortcut entry, which may be a shortcut entry of an application program, is determined according to the attribute and the content information. Besides being determined according to the attribute and the content information, the target shortcut entry can also be determined according to the system state or the device information of the current intelligent terminal.
Optionally, the attributes include images, text, and/or files.
Optionally, the recognition mode of text can be preset; text can be recognized by word, by sentence, or by paragraph. When recognition is by word, the word in the target area is selected; when recognition is by sentence, the sentence in which the target area is located is selected; and when recognition is by paragraph, the paragraph in which the words of the target area are located is selected. For example, referring to fig. 4, when the text "fill in personal information to deduct a 300 coupon at checkout; cannot be stacked" exists on the display page, recognition is set by paragraph, and a multi-finger long-press control instruction acts on a target area on or near that paragraph, it is determined that the paragraph "fill in personal information to deduct a 300 coupon at checkout; cannot be stacked" is selected.
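A naive sketch of the three text-recognition granularities, assuming the page text and the press offset are already known; the regular expressions are deliberately simplistic and only illustrate the idea:

```python
# Hypothetical selection of the word / sentence / paragraph containing
# the pressed position; the tokenization rules are illustrative only.
import re

def select_text(page_text: str, offset: int, mode: str) -> str:
    """Return the word, sentence, or paragraph containing `offset`."""
    if mode == "paragraph":
        spans = [m.span() for m in re.finditer(r"[^\n]+", page_text)]
    elif mode == "sentence":
        spans = [m.span() for m in re.finditer(r"[^.!?\n]+[.!?]?", page_text)]
    else:  # "word"
        spans = [m.span() for m in re.finditer(r"\S+", page_text)]
    for start, end in spans:
        if start <= offset < end:
            return page_text[start:end]
    return ""

page = "First sentence. A 300 coupon is deducted at checkout.\nNext paragraph."
print(select_text(page, 25, "sentence"))   # sentence under the press
print(select_text(page, 25, "paragraph"))  # whole paragraph under the press
```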
The recognition mode of images can also be preset; recognition can be by place, by time, or by category. When recognition is by place, images related to the place are selected; when recognition is by time, images close in time are selected; and when recognition is by category, images with the same attribute are selected.
The recognition mode of files can likewise be preset; recognition can be by attribute, by name, or by time. When recognition is by attribute, files with the same attribute are selected; when recognition is by name, files related to the name are selected; and when recognition is by time, files close in time are selected.
Optionally, the target area may be highlighted, bolded, underlined, etc. to alert the user of the target area and the selected state.
Optionally, when the content information corresponding to the target area is identified, a corresponding identification algorithm may be determined according to an actual situation, that is, when the identified content information changes, a function corresponding to the associated target shortcut entry also changes correspondingly.
Optionally, referring to fig. 5, when a target area of text is recognized, if the current display page belongs to a text tool and the recognized content information contains keywords such as "official account" or "push", the information associated with the content information may be reading applications such as Jianshu and Toutiao, and the target shortcut entry associated with the current display page can be determined in combination with the state information and device information of the intelligent terminal.
Optionally, when a target area of a file is recognized, if the current display page is a file management page, the target shortcut entry associated with the current display page can be determined by recognizing the file name displayed on the current page, analyzing information such as the file suffix and file content to obtain keyword information, and combining the current state information of the intelligent terminal or the user's usage habits.
In this embodiment, after the target shortcut entry is determined, it is displayed on the current display page. Optionally, the target shortcut entry may be displayed in a floating window on the current display page of the intelligent terminal, and attribute information such as the display shape, display position, display color, whether the floating window is movable, and whether it can be hidden may be set. Optionally, the target shortcut entry may be displayed in the menu bar of the current display page. Optionally, the target shortcut entries may be displayed in list form on the current display page, and the sort order of each entry in the list may be set.
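The three display modes can be sketched as a simple dispatch; since the application does not prescribe a UI toolkit, the rendering calls below are stand-in prints:

```python
# Hypothetical dispatch over the three display styles of the target
# shortcut entry; the rendering is a stand-in for real window classes.
from enum import Enum, auto

class DisplayMode(Enum):
    FLOATING = auto()   # movable, hideable floating window
    MENU_BAR = auto()   # item in the page's menu bar
    LIST = auto()       # sortable list of entries

def display_entries(entries, mode: DisplayMode) -> None:
    if mode is DisplayMode.FLOATING:
        print("floating window:", entries[0])
    elif mode is DisplayMode.MENU_BAR:
        print("menu bar items:", ", ".join(entries))
    else:
        # List form: the sort order of each entry can be configured.
        for rank, entry in enumerate(sorted(entries), start=1):
            print(f"{rank}. {entry}")

display_entries(["translator", "notes", "share"], DisplayMode.LIST)
```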
In the technical scheme of this embodiment, when a target control instruction is received, the target area corresponding to it is determined; the content information corresponding to the target area is acquired, and the target shortcut entry associated with the content information is determined; and the target shortcut entry is displayed. With this technical scheme, the corresponding target shortcut entry can be automatically updated and displayed according to changes in the display content of the display page of the intelligent terminal, which solves the problem that the shortcut entry cannot be changed adaptively and improves the user experience.
Second embodiment
Step S21, acquiring content information corresponding to the target area, and determining the attribute corresponding to the content information;
Step S22, determining a target shortcut entry according to the attribute and the content information.
In this embodiment, referring to fig. 6, the content information corresponding to the target area is acquired, the attribute corresponding to the content information is determined, and the target shortcut entry is determined according to the attribute and the content information.
Optionally, the attribute may be at least one of an image, text, a file, and a video, or any combination of two of these attributes. Optionally, in one application scenario, referring to fig. 4, when receiving a multi-finger long-press target control instruction, the intelligent terminal determines the target area, framed in fig. 4, from the positions of the long-pressing fingers; the content information within the frame is the paragraph "fill in personal information to deduct a 300 coupon at checkout; cannot be stacked", and the attribute corresponding to the content information is determined to be text. The target shortcut entry shown in fig. 5, i.e. the applications in the floating window, is thus further determined according to the text attribute and the content information in the target area.
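One hedged way to determine the attribute from the acquired content information, using only standard-library heuristics (a real terminal would more likely inspect the view hierarchy of the page):

```python
# Hypothetical attribute detection (image / text / file) for the content
# information picked up from the target area; the heuristics are
# illustrative assumptions.
import mimetypes

def attribute_of(content) -> str:
    if isinstance(content, bytes):
        # Raw pixel data grabbed from the target area is treated as an image.
        return "image"
    if isinstance(content, str) and "." in content:
        guessed, _ = mimetypes.guess_type(content)
        if guessed is not None and not guessed.startswith("text"):
            return "file"  # e.g. report.pdf, song.mp3
    return "text"

print(attribute_of("fill in personal information to deduct a 300 coupon"))  # text
print(attribute_of("quarterly_report.pdf"))                                 # file
print(attribute_of(b"\x89PNG..."))                                          # image
```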
In the technical scheme of this embodiment, after the attribute corresponding to the content information is determined from the content information acquired from the target area, the target shortcut entry is further automatically updated according to the content information and its attribute.
Third embodiment
Step S221, determining a processing mode corresponding to the attribute;
Step S222, processing the content information based on the processing mode to obtain a user intention;
Step S223, determining the target shortcut entry according to the user intention.
In this embodiment, referring to fig. 7, after the attribute corresponding to the content information is determined, the processing mode corresponding to the attribute is determined; the content information is processed according to that processing mode to obtain the user intention; and after the user intention is obtained, the target shortcut entry is determined according to it.
Optionally, the processing mode corresponding to each attribute is different. Optionally, when the attribute is an image, the processing mode corresponding to the image comprises at least one of image content classification processing, image description processing, and image character extraction processing; when the attribute is text, the processing mode corresponding to the text comprises at least one of text translation processing and text-to-speech processing; and when the attribute is a file, the processing mode corresponding to the file comprises file classification processing and the like. The user intention represents the next operation that the user wants to perform on the content information, and is determined based on the content information and/or the user's historical operation data.
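The attribute-to-processing-mode relationship can be captured in a dispatch table; the table and routing below are a sketch, not the application's actual implementation:

```python
# Hypothetical dispatch table from attribute to its valid processing
# modes, mirroring the mapping described above.
PROCESSING_MODES = {
    "image": ["image_content_classification", "image_description",
              "image_character_extraction"],
    "text":  ["text_translation", "text_to_speech"],
    "file":  ["file_classification"],
}

def process(content, attribute: str, mode: str):
    assert mode in PROCESSING_MODES[attribute], "mode not valid for attribute"
    # A real implementation would route to the matching preset model here.
    return {"attribute": attribute, "mode": mode, "content": content}

print(process("hello", "text", "text_translation"))
```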
Optionally, when the content information is processed based on the processing mode, the user intention may be obtained in the following ways:
optionally, when the attribute is an image, when the content information is subjected to image content classification processing, the content information may be input into a first preset neural network model for image content classification processing, so as to obtain face image information in the content information. And obtaining the user intention according to the face image information. Optionally, the first preset neural network model is obtained by training according to initial image content information of different image attributes; the image attribute can be an image attribute such as an animal image, a human face image, a landscape image and the like; the initial image content information may be at least one of a format of the image, an image name, an image capturing time, and an image capturing place. The duration of the training can also be set according to the quality of the picture.
In other application scenarios, when the content information undergoes other processing, for example image description processing or image character extraction processing, the content information may be input into the corresponding neural network model for processing, so as to obtain the user intention. It should be noted that the processing modes corresponding to different attributes differ, the processing modes of different functions of the same attribute also differ, and the neural network models used for image content classification processing, image description processing, and image character extraction processing are all different.
Optionally, when the attribute is text and the content information undergoes text translation processing, the content information may be input into the second preset neural network model for text translation processing to obtain voice information corresponding to the content information, and the user intention is obtained according to the voice information. Optionally, the second preset neural network model is trained on initial text content information; the initial text content information may be a text title, heavily tagged words, or the like. The duration of training may be set according to the quality of the text labels. In addition, during text translation processing, a third-party interface can be called for translation between multiple languages.
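A hedged sketch of this text path — translation followed by speech synthesis and intention derivation. Every function body is a stand-in; no real translation or TTS API is assumed:

```python
# Hypothetical text pipeline: translate, synthesize voice information,
# derive the user intention. All implementations are stubs.
def translate(text: str, target_lang: str = "en") -> str:
    # Stand-in for the second preset neural network model or a
    # third-party translation interface.
    fake_dictionary = {"你好": "hello"}
    return fake_dictionary.get(text, text)

def synthesize_speech(text: str) -> bytes:
    # Stand-in TTS: a real system would return audio samples.
    return text.encode("utf-8")

def intention_from_voice(voice: bytes) -> str:
    # E.g. the user likely wants the translation read aloud.
    return "play_translated_audio"

voice = synthesize_speech(translate("你好"))
print(intention_from_voice(voice))
```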
In other application scenarios, when the content information undergoes other processing, for example text-to-speech processing, the content information can be input into the corresponding neural network model for processing, so as to obtain the user intention.
Optionally, when the attribute is a file and the content information undergoes file classification processing, the content information may further be input into the third preset neural network model for file classification processing to obtain target file information in the content information, and the user intention is obtained according to the target file information. Optionally, the third preset neural network model is trained on initial file content information of different file attributes, where the initial file content information may be content information such as the file name, file creation time, file suffix, and file creator. Optionally, if the current page is the file management page, the file name displayed on the display page is recognized, and the target file information is obtained by analyzing information such as the file suffix and file content; the user intention is then further determined according to the target file information.
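A hedged sketch of deriving target file information from the file name and suffix and mapping it to a user intention; the suffix-to-intention rules are invented for illustration:

```python
# Hypothetical file classification from name and suffix; a trained
# model would replace this lookup table.
from pathlib import Path

SUFFIX_TO_INTENTION = {
    ".pdf": "open_document_reader",
    ".mp3": "open_music_player",
    ".apk": "install_application",
}

def classify_file(name: str) -> dict:
    suffix = Path(name).suffix.lower()
    return {
        "name": name,
        "suffix": suffix,
        "intention": SUFFIX_TO_INTENTION.get(suffix, "open_file_manager"),
    }

print(classify_file("meeting_notes.pdf"))
# {'name': 'meeting_notes.pdf', 'suffix': '.pdf', 'intention': 'open_document_reader'}
```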
In the technical scheme of this embodiment, the processing modes corresponding to different attributes differ: after the attribute corresponding to the content information is determined, the processing mode corresponding to that attribute is determined, the content information is processed based on that processing mode to obtain the user intention, and the target shortcut entry is then determined according to the user intention.
Fourth embodiment
Step S41, acquiring content information corresponding to the target area and sending the content information to a server, so that the server determines the user intention according to the content information and sends the user intention to the intelligent terminal;
Step S42, receiving the user intention, and determining a target shortcut entry according to the user intention.
In this embodiment, referring to fig. 8, after acquiring the content information corresponding to the target area, the intelligent terminal may process it itself to determine the corresponding user intention, or it may send the content information to a server and let the server process it. Optionally, on receiving the content information, the server determines the attribute corresponding to the content information, determines the processing mode corresponding to the attribute, processes the content information based on that processing mode to obtain the user intention, and sends the user intention to the intelligent terminal. After receiving the user intention, the intelligent terminal determines the target shortcut entry according to it. Optionally, the server may likewise determine the user intention from the content information and/or the user's historical operation data.
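A minimal sketch of the server round trip, assuming an HTTP transport via the `requests` library; the endpoint URL, payload shape, and response field are hypothetical, since the embodiment only requires that the server return a user intention:

```python
# Hypothetical client side of the fourth embodiment: post the content
# information, receive the user intention.
import requests

def fetch_user_intention(content_info: str, history: list[str]) -> str:
    response = requests.post(
        "https://example.com/intent",   # hypothetical endpoint
        json={"content": content_info, "history": history},
        timeout=5,
    )
    response.raise_for_status()
    # The server may combine the content with the user's historical
    # operation data before answering.
    return response.json()["intention"]

# entry = determine_shortcut_entry(fetch_user_intention(text, recent_ops))
```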
In the technical scheme of this embodiment, the content information corresponding to the target area is acquired and sent to a server so that the server determines the user intention according to the content information; after receiving the user intention, the intelligent terminal further determines the target shortcut entry according to it.
Fifth embodiment
After the content information corresponding to the target area is acquired and the target shortcut entry associated with the content information is determined, whether the application program corresponding to the target shortcut entry is installed on the intelligent terminal needs to be judged. When the application program corresponding to the target shortcut entry is not installed, at least one of the following operations can be executed: automatically installing the application program corresponding to the target shortcut entry; prompting to install it; or prompting that it is not installed. After at least one of these operations is executed and installation of the application program corresponding to the target shortcut entry is completed, the target shortcut entry is displayed.
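A hedged sketch of this install check; `is_installed`, `auto_install`, and `prompt` are stand-ins for platform calls (e.g. a package-manager query), and the policy values are invented for the example:

```python
# Hypothetical install check mirroring the three operations above.
def show_entry_if_available(entry: str, policy: str = "prompt_install") -> None:
    if not is_installed(entry):
        if policy == "auto_install":
            auto_install(entry)           # install silently, then show
        elif policy == "prompt_install":
            prompt(f"Install the app for '{entry}'?")
            return
        else:
            prompt(f"The app for '{entry}' is not installed.")
            return
    display_shortcut_entry(entry)         # show once the app is present

def is_installed(entry: str) -> bool: return False   # stub
def auto_install(entry: str) -> None: print("installing", entry)
def prompt(message: str) -> None: print(message)
def display_shortcut_entry(entry: str) -> None: print("showing", entry)

show_entry_if_available("payment_app", policy="auto_install")
```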
The embodiment of the present application further provides an intelligent terminal, which includes a memory and a processor, where a terminal control program is stored in the memory, and the terminal control program, when executed by the processor, implements the steps of the terminal control method in any of the above embodiments.
The embodiment of the present application further provides a computer-readable storage medium, where a terminal control program is stored on the storage medium, and the terminal control program, when executed by a processor, implements the steps of the terminal control method in any of the above embodiments.
The intelligent terminal and computer storage medium embodiments provided in the present application may include all technical features of any of the above terminal control method embodiments; their expanded descriptions are substantially the same as those of the method embodiments and are not repeated here.
Embodiments of the present application also provide a computer program product that includes computer program code which, when run on a computer, causes the computer to execute the method in the various possible embodiments above.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples and do not limit the application scenarios of the technical solutions provided in the embodiments of the present application; the technical solutions of the present application may also be applied to other scenarios. As those skilled in the art will appreciate, with the evolution of system architectures and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions, and/or application scenario descriptions are generally described in detail only at their first occurrence; when they appear again later, the detailed description is generally not repeated for brevity. For anything not described in detail later, the earlier related description may be consulted when understanding the technical solutions of the present application.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the embodiments are described; however, as long as a combination of technical features involves no contradiction, it should be considered within the scope described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present application, or the portion thereof that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including several instructions for enabling an intelligent terminal (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer storage medium or transmitted from one computer storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). A computer storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)).
The above description covers only preferred embodiments of the present application and is not intended to limit its scope; all equivalent structural or process modifications made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of the present application.

Claims (10)

1. A terminal control method, characterized in that the method comprises the steps of:
s10: when a target control instruction is received, determining a target area corresponding to the target control instruction;
s20: acquiring content information corresponding to the target area, and determining a target shortcut entrance associated with the content information;
s30: and displaying the target shortcut entrance.
2. The method of claim 1, wherein step S20 includes:
acquiring content information corresponding to the target area, and determining the attribute corresponding to the content information;
and determining a target shortcut entrance according to the attribute and the content information.
3. The method of claim 2, wherein the step of determining a target shortcut entry based on the attribute and the content information comprises:
determining a processing mode corresponding to the attribute;
processing the content information based on the processing mode to obtain the user intention;
and determining a target shortcut entrance according to the user intention.
4. The method of claim 3, wherein the step of processing the content information based on the processing manner to obtain the user intention comprises at least one of:
inputting the content information into a first preset neural network model for image content classification processing to obtain face image information in the content information, and obtaining the user intention according to the face image information; or
inputting the content information into a second preset neural network model for text translation processing to obtain voice information corresponding to the content information, and obtaining the user intention according to the voice information; or
inputting the content information into a third preset neural network model for file classification processing to obtain target file information in the content information, and obtaining the user intention according to the target file information.
5. The method according to any one of claims 1 to 4, wherein the step S20 includes:
acquiring content information corresponding to the target area and sending the content information to a server so that the server determines user intention according to the content information and sends the user intention to the intelligent terminal;
and receiving the user intention, and determining a target shortcut entrance according to the user intention.
6. The method of claim 5, wherein the user intent is determined based on the content information and/or user historical operational data.
7. The method according to any one of claims 1 to 4, further comprising, after step S20:
when the application program corresponding to the target shortcut entrance is not installed, executing at least one of the following operations:
automatically installing an application program corresponding to the target shortcut entrance;
prompting to install an application program corresponding to the target shortcut entrance;
and prompting that the application program corresponding to the target shortcut entrance is not installed.
8. The method of claim 7, wherein the target shortcut entrance is displayed after installation of the application program corresponding to the target shortcut entrance is completed.
9. An intelligent terminal, characterized in that the intelligent terminal includes: a memory and a processor, wherein the memory has stored thereon a terminal control program which, when executed by the processor, implements the steps of the terminal control method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the terminal control method according to any one of claims 1 to 8.
CN202111359469.0A 2021-11-16 2021-11-16 Terminal control method, intelligent terminal and storage medium Pending CN114020190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111359469.0A CN114020190A (en) 2021-11-16 2021-11-16 Terminal control method, intelligent terminal and storage medium

Publications (1)

Publication Number Publication Date
CN114020190A (en) 2022-02-08

Family

ID=80064756

Legal Events

Date Code Title Description
PB01 Publication