CN112199019A - Interaction method, terminal and computer readable storage medium - Google Patents

Interaction method, terminal and computer readable storage medium

Info

Publication number
CN112199019A
CN112199019A
Authority
CN
China
Prior art keywords
preset
information
target
target object
interface
Legal status
Pending
Application number
CN202011108069.8A
Other languages
Chinese (zh)
Inventor
杜嵩楠
金宝
Current Assignee
Shenzhen Microphone Holdings Co Ltd
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Microphone Holdings Co Ltd
Application filed by Shenzhen Microphone Holdings Co Ltd
Priority to CN202011108069.8A
Publication of CN112199019A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Compared with the prior art, in which a user must manually read a preset interface to obtain a target object to be selected, the interaction method provided by the application directly acquires and outputs the target object to be selected from the preset interface when first operation information meeting a first preset condition is received, and performs corresponding processing when second operation information for the target object to be selected is received. The interaction method therefore improves both the efficiency and the speed of information acquisition.

Description

Interaction method, terminal and computer readable storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an interaction method, a terminal, and a computer-readable storage medium.
Background
Currently, with the development of internet technology and the penetration of internet applications into users' learning, work, and daily life, people increasingly acquire and process information through networks. However, acquiring and processing information by reading it piece by piece is slow, which degrades the user experience.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The application provides an interaction method, a terminal, and a computer readable storage medium, which address the technical problems of low information acquisition efficiency and slow information processing in the prior art.
The application provides an interaction method, which comprises the following steps:
S11, receiving first operation information for a preset interface;
S12, if the first operation information meets a first preset condition, acquiring and outputting a target object to be selected from the preset interface;
and S13, receiving second operation information for the target object to be selected, and executing corresponding processing.
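As a non-authoritative illustration of steps S11 to S13, the following minimal Kotlin sketch models the flow; the type names (OperationInfo, TargetObject, InteractionHandler) and the callback structure are assumptions introduced for illustration only, not part of the claimed method.

```kotlin
// Minimal sketch of the S11-S13 flow; all names are illustrative assumptions.
data class OperationInfo(
    val gesture: String? = null,              // e.g. "swipe up"
    val position: Pair<Int, Int>? = null,     // touch coordinates
    val voice: String? = null                 // parsed voice command
)

data class TargetObject(val type: String, val content: String)  // e.g. "link", "text"

class InteractionHandler(
    private val meetsFirstCondition: (OperationInfo) -> Boolean,
    private val extractCandidates: () -> List<TargetObject>,     // reads the preset interface
    private val process: (TargetObject, OperationInfo) -> Unit   // copy / share / open ...
) {
    private var candidates: List<TargetObject> = emptyList()

    // S11 + S12: receive the first operation; if it meets the first preset
    // condition, acquire and output the target objects to be selected.
    fun onFirstOperation(info: OperationInfo) {
        if (meetsFirstCondition(info)) {
            candidates = extractCandidates()
            candidates.forEach { println("candidate: $it") }
        }
    }

    // S13: receive the second operation for a candidate and process it.
    fun onSecondOperation(target: TargetObject, info: OperationInfo) {
        if (target in candidates) process(target, info)
    }
}
```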
Optionally, the first operation information includes at least one of:
an operation gesture, an operation position, an operation track, an operation duration, and voice control; and/or,
the first preset condition comprises at least one of the following:
the operation gesture is a preset gesture;
the operation position is a preset position;
the operation track is a preset track;
the voice command controlled by voice is a preset command.
Optionally, before the step of outputting the target object to be selected, the method further includes:
and determining a target object to be selected corresponding to the first operation information.
Optionally, determining the target object to be selected corresponding to the first operation information includes:
determining target information based on the first operation information;
and acquiring the target information as a target object to be selected and outputting the target information.
Optionally, step S13 includes:
judging whether a target object is determined according to the second operation information;
if yes, judging whether the second operation information meets a second preset condition or not;
and if so, executing processing corresponding to the second operation information on the target object.
Optionally, the second operation information includes at least one of:
a long press operation, a heavy press operation, a drag operation, and a slide operation; and/or,
the second preset condition comprises at least one of the following:
the long press operation reaches a preset duration;
the heavy press operation reaches a preset pressure;
the drag operation follows a preset drag direction;
the slide operation follows a preset slide track.
Optionally, the preset interface includes at least one of:
a web page interface;
an application interface;
an interface containing preset content, the preset content comprising: at least one of text, picture, web address, video, audio.
Optionally, the target object comprises at least one of:
text, files, pictures, emoticons, video, audio, applications, web sites, links.
Optionally, the processing comprises at least one of:
copying, sharing, collecting, saving, downloading, opening a link, opening an application, playing a video and playing an audio.
Optionally, before step S12, the method further includes:
receiving third operation information for a target object to be selected;
and if the third operation information meets a third preset condition, performing a preset operation on the target object to be selected.
Optionally, the third operation information includes at least one of:
an operation position, an operation track, and an operation gesture; and/or,
the third preset condition comprises at least one of the following:
the operation position is a preset position;
the operation track is a preset track;
the operation gesture is a preset gesture.
Optionally, the preset operation includes at least one of:
adjusting the sequence of the target objects to be selected;
adjusting the output mode of the target object to be selected;
and adjusting the output information of the target object to be selected.
Optionally, before step S11, the method includes:
judging whether the current state of the mobile terminal is a preset state;
if yes, executing step S11.
Optionally, the preset state includes at least one of:
the terminal starts an intelligent mode;
the terminal starts a preset application;
and the terminal displays a preset interface.
The application also provides an interaction method, which comprises the following steps:
determining a target page to be extracted;
receiving an input instruction to the target page, and determining target key information to be extracted from the target page based on the input instruction;
and extracting target key information in the target page.
Optionally, receiving an input instruction to the target page, and determining target key information to be extracted from the target page based on the input instruction includes:
receiving a touch instruction for the target page on the intelligent terminal;
determining the information type of the target key information based on the touch instruction, wherein the information type of the target key information includes text, websites, pictures, emoticons, videos, and application links;
and taking the key information corresponding to the information type on the target page as the target key information to be extracted from the target page.
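As a hedged illustration of the clause above, the sketch below extracts the key information of one selected information type from a page's plain text; the URL regular expression and the type names are assumptions, and in practice pictures, videos, and application links would come from the page's structured data rather than raw text.

```kotlin
// Sketch: pull the key information of one selected type out of a page's text.
fun extractByType(pageText: String, infoType: String): List<String> = when (infoType) {
    "website" -> Regex("""https?://\S+""").findAll(pageText).map { it.value }.toList()
    "text"    -> pageText.lines().filter { it.isNotBlank() }
    else      -> emptyList()  // pictures/emoticons/videos/app links need structured page data
}

fun main() {
    val page = "Read more at https://example.com and https://example.org/docs"
    println(extractByType(page, "website"))
    // prints: [https://example.com, https://example.org/docs]
}
```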
Optionally, determining an information category of the target key information based on the touch instruction includes:
determining the touch position of the touch instruction on the intelligent terminal, and determining the touch area on the intelligent terminal corresponding to the touch position;
and determining the information type associated with the touch area as the information type of the target key information.
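A minimal sketch of the touch-area variant above: the touch position is resolved to a preset touch area, and the area's associated information type becomes the type of the target key information. The area boundaries and the area-to-type assignments are invented for illustration.

```kotlin
// Sketch: map a touch position to a preset area, then to an information type.
data class TouchArea(val name: String, val xRange: IntRange, val yRange: IntRange, val infoType: String)

val presetAreas = listOf(                       // assumed 1080x2400 screen
    TouchArea("left edge",  0..100,    0..2400, "text"),
    TouchArea("right edge", 980..1080, 0..2400, "website")
)

fun infoTypeForTouch(x: Int, y: Int): String? =
    presetAreas.firstOrNull { x in it.xRange && y in it.yRange }?.infoType
```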
Optionally, determining an information category of the target key information based on the touch instruction includes:
determining selected information of the touch instruction in the target page;
and determining the information type corresponding to the selected information as the information type of the target key information.
Optionally, receiving an input instruction to the target page, and determining target key information to be extracted from the target page based on the input instruction includes:
recording a voice command based on a microphone;
and analyzing the voice instruction to obtain voice information, and determining target key information to be extracted from the voice information.
Optionally, receiving an input instruction to the target page, and determining target key information to be extracted from the target page based on the input instruction includes:
receiving a character input instruction for the target page;
and taking the input characters as the target key information to be extracted from the target page.
Optionally, receiving an input instruction to the target page, and determining target key information to be extracted from the target page based on the input instruction further includes: displaying at least one information category of the target key information on a user interface of the intelligent terminal;
the receiving an input instruction to the target page, and determining target key information to be extracted from the target page based on the input instruction includes:
receiving a sliding operation instruction of a user interface of the intelligent terminal, and determining the sliding distance of the sliding operation instruction on the intelligent terminal;
determining the selected information type based on the sliding distance;
and taking the key information corresponding to the information type in the target page as the target key information to be extracted from the target page.
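A minimal sketch of the slide-distance variant above, assuming each displayed type occupies a fixed slice of slide travel; the 150-pixel step and the type order are assumed calibrations.

```kotlin
// Sketch: pick one of the displayed information types from the slide distance.
val displayedTypes = listOf("text", "website", "picture", "emoticon", "video", "app link")

fun infoTypeForSlide(distancePx: Float): String {
    val index = (distancePx / 150).toInt().coerceIn(0, displayedTypes.size - 1)
    return displayedTypes[index]  // longer slides select types further down the list
}
```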
Optionally, determining a target page to be extracted includes:
determining a target folder based on a selection instruction, and taking all documents in the target folder as target pages to be extracted;
or, acquiring a selection instruction of the website on the intelligent terminal, and taking the webpage corresponding to the selected website as a target page to be extracted.
Optionally, determining a target page to be extracted includes:
determining a current open page in a target application program;
and determining a target page to be extracted from the current open page.
Optionally, determining a target page to be extracted from the currently opened page includes:
receiving a selection instruction of the current open page, and taking the selected current open page as a target page to be extracted;
or, taking each current open page as a target page to be extracted.
Optionally, the extracting the target key information in the target page includes:
querying all information in the target page, and determining each piece of information in the target page that qualifies as key information as candidate information;
and removing repeated items from the candidate information, and taking the de-duplicated candidate information as the target key information extracted from the target page.
Optionally, the extracting the target key information in the target page includes:
querying all information in the target page, and determining each piece of information in the target page that qualifies as key information as candidate information;
determining how many times each piece of candidate information repeats, and taking the top N pieces of candidate information as the target key information extracted from the target page, where N is a positive integer greater than 1.
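The two extraction variants above (de-duplicating the candidates, or keeping the N candidates that repeat most often) can be sketched as follows; this is an illustrative reading of the clauses, not the patented implementation.

```kotlin
// Variant 1: remove repeated items from the candidate information.
fun dedupe(candidates: List<String>): List<String> = candidates.distinct()

// Variant 2: keep the N candidates with the highest repeat counts (N > 1).
fun topNByFrequency(candidates: List<String>, n: Int): List<String> =
    candidates.groupingBy { it }.eachCount()
        .entries.sortedByDescending { it.value }
        .take(n)
        .map { it.key }
```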
Optionally, after extracting the target key information in the target page, the interaction method further includes:
and outputting a key information extraction report, where the key information extraction report includes at least a target page field and a target key information field, and the key information extraction report is a Word document, an Excel document, and/or a PPT document.
Optionally, after extracting the target key information in the target page, the interaction method further includes:
displaying a floating window on a user interface of the intelligent terminal, and displaying the extracted target key information in the target page in the floating window;
and processing the target key information based on the touch instruction of the floating window.
Optionally, displaying a floating window on a user interface of the intelligent terminal, and displaying the extracted target key information in the target page in the floating window includes:
displaying a floating window on a user interface of the intelligent terminal, and respectively displaying the same type of target key information in each target page in each preset display area in the floating window;
or displaying a plurality of floating windows on a user interface of the intelligent terminal, and respectively displaying the target key information in each target page in each floating window;
or displaying a plurality of floating windows on a user interface of the intelligent terminal, and respectively displaying the same type of target key information in each target page in each floating window.
Optionally, processing the target key information based on the touch instruction for the floating window includes:
determining the selected target key information based on the touch instruction on the floating window;
if the type of the selected target key information is a website, jumping to a webpage corresponding to the website;
if the type of the selected target key information is an application link, starting an application corresponding to the application link, and jumping to a preset page in the application;
if the type of the selected target key information is picture/emoticon/video/text, displaying a popup window that includes sharing, collecting, saving, and deleting options, and sharing, collecting, saving, or deleting the key information based on a selection instruction on the popup window.
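A hedged sketch of the floating-window dispatch described above; the handlers only print what a real terminal would do (jump to the web page, launch the application and open its preset page, or show the share/collect/save/delete popup).

```kotlin
// Sketch: dispatch on the type of the selected target key information.
fun onKeyInfoSelected(infoType: String, value: String) = when (infoType) {
    "website"  -> println("jump to web page: $value")
    "app link" -> println("launch the app and open its preset page: $value")
    "picture", "emoticon", "video", "text" ->
        println("show popup with share / collect / save / delete for: $value")
    else -> println("unhandled type: $infoType")
}
```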
Further, the application also provides a terminal, which comprises a processor and a memory;
the memory is used for storing a computer program;
the processor, when executing the computer program, may perform the steps of the interaction method as described above.
Further, the present application also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, can implement the steps of the interaction method as introduced above.
Compared with the prior art, in which a preset interface must be read manually to obtain a target object to be selected, the interaction method provided by the application directly acquires and outputs the target object to be selected from the preset interface when first operation information meeting a first preset condition is received, and performs corresponding processing when second operation information for the target object to be selected is received. The interaction method therefore improves both the efficiency and the speed of information acquisition.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
fig. 3 is a flowchart of a first interaction method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a preset interface according to an embodiment of the present application;
fig. 5 is a schematic diagram of a first touch area provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a second touch area provided in the embodiment of the present application;
fig. 7 is a schematic output diagram of a target object to be selected according to an embodiment of the present application;
fig. 8 is a flowchart of a second interaction method provided in the embodiment of the present application;
fig. 9 is a flowchart of a third interaction method provided in the embodiment of the present application;
fig. 10 is a schematic diagram of a display interface of a first intelligent terminal in an interaction method provided in the embodiment of the present application;
fig. 11 is a schematic diagram of a display interface of a second intelligent terminal in an interaction method provided in the embodiment of the present application;
fig. 12 is a schematic diagram of a display interface of a third intelligent terminal in an interaction method provided in the embodiment of the present application;
fig. 13 is a schematic diagram of a display interface of a fourth intelligent terminal in the interaction method provided in the embodiment of the present application;
Fig. 14 is a schematic diagram of a display interface of a fifth intelligent terminal in the interaction method provided in the embodiment of the present application;
fig. 15 is a schematic diagram of a display interface of a sixth intelligent terminal in an interaction method provided in the embodiment of the present application;
fig. 16 is a schematic diagram of a display interface of a seventh intelligent terminal in an interaction method provided in the embodiment of the present application;
fig. 17 is a schematic interaction diagram of an interaction method provided in the embodiment of the present application;
fig. 18 is a schematic structural diagram of a terminal according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element recited by the phrase "comprising a(n) ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, similarly named elements, features, or aspects in different embodiments of this disclosure may have the same meaning or different meanings; the particular meaning is determined by its interpretation in the embodiment or by the further context of the embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context. Also, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes", and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different moments and in a different order; they may be executed in turn or alternately with other steps, or with sub-steps or stages of other steps.
It should be noted that step numbers such as S11 and S12 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S12 first and then S11 in specific implementation, which should be within the scope of the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description and have no specific meaning by themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present application may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given taking a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (e.g., operations performed on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and can receive and execute commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a power switch key), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present disclosure, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (e.g., the X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided. In order to make the purpose, features, and advantages of the present application clearer and easier to understand, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The embodiment provides an interaction method, which can be applied to an intelligent terminal and can also be applied to a computer terminal in some examples. Referring to fig. 3, the interaction method provided in this embodiment includes:
and S11, receiving first operation information aiming at the preset interface.
The preset interface may be a web page interface or an application interface on the mobile terminal, or an interface containing preset content such as text, pictures, websites, videos, and audio. These (preset) interfaces may include content such as text, files, pictures, emoticons, videos, audio, applications, websites, and links. Referring to fig. 4, fig. 4 shows an application interface of application B on a mobile terminal; the application interface includes preset content such as text, a website, a link, and a video.
When a user wants to obtain content such as text or files on a preset interface, the user can operate the preset interface to determine a target object. In this embodiment of the application, the operation on the preset interface is referred to as a first operation. The first operation may be a touch operation on the mobile terminal or voice control of the mobile terminal, and the corresponding first operation information may be the touch gesture, touch area, or operation track of the touch operation, or the voice instruction of the voice control.
The touch gesture may be a swipe gesture (up, down, left, or right) on the display screen of the mobile terminal, a click gesture (single click, multiple clicks, etc.), or a long-press gesture whose touch duration reaches a preset length. The touch may be performed with a finger or a stylus.
The touch area may be an area on the display screen of the mobile terminal: the middle area of the screen, an area near the edge of the screen, or areas near the top, left, or right of the screen. Referring to fig. 5 and 6, areas a and b in fig. 5 and areas c to f in fig. 6 may be areas preset on the mobile terminal. In some examples, the user may drag a displayed message to a preset area of the display screen to trigger the message's source application to switch its running mode.
The operation track may be a track left by the user's operation on the touch screen of the mobile terminal, such as the track of a gesture or the touch track of a stylus. An operation track can be characterized by track parameters, the attribute parameters of each operation track, which may include the start and end positions of the track, the area the track acts on, and/or the track-forming speed, as well as parameters such as the track's direction, the included angle of the track curve, the track's forming duration, and the track's length. When the track parameter is the track-forming speed, the speed may be determined from the start position, the end position, and the forming duration of the track; in some examples, if the operation track is formed quickly, the running mode of the message's source application is switched. When the track parameter is the area the track acts on, that area may be the areas containing the track's start and end positions, and the running mode of the message's source application may be switched according to those two areas. In some examples, the area acted on may be the area covered by the whole track: for example, the running mode of the message's source application is switched when the whole track falls within one area, or when it spans two areas. For example, if the operation track slides up or down from the bottom, the running mode of the message's source application is switched. As another example, if the operation track takes a long time to form and there is a pause during its formation, the running mode of the message's source application is switched.
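As a worked illustration of the track-forming speed described above, the sketch below computes it from the track's start and end positions and its forming duration; the units and the "fast" threshold are assumptions.

```kotlin
import kotlin.math.hypot

// A sampled point of an operation track (position in pixels, time in milliseconds).
data class TrackPoint(val x: Float, val y: Float, val timeMs: Long)

// Forming speed = straight-line distance between start and end / forming duration.
fun formingSpeed(track: List<TrackPoint>): Float {
    require(track.size >= 2) { "a track needs at least a start and an end point" }
    val start = track.first()
    val end = track.last()
    val distance = hypot(end.x - start.x, end.y - start.y)         // pixels
    val durationMs = (end.timeMs - start.timeMs).coerceAtLeast(1L)
    return distance / durationMs                                    // pixels per millisecond
}

fun isFastTrack(track: List<TrackPoint>) = formingSpeed(track) > 1.5f  // assumed threshold
```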
The user can control the mobile terminal by voice through a voice instruction. The voice instruction may be a piece of speech uttered by the user that contains "switch running mode"; when the mobile terminal receives this speech through the microphone and parses the voice instruction to obtain "switch running mode", it switches the running mode of the message's source application.
S12, if the first operation information meets a first preset condition, acquiring and outputting the target object to be selected from the preset interface.
In this embodiment of the application, the first preset condition includes at least one of the following: the operation gesture is a preset gesture, the operation position is a preset position, the operation track is a preset track, and the voice command of the voice control is a preset command. It should be noted that the preset gesture, preset area, preset track, and preset command refer to corresponding conditions preset on the mobile terminal; when the operation information (touch gesture, touch area, operation track, or voice command) meets the corresponding condition, the mobile terminal is triggered to determine the target object from the preset interface and output it.
Before introducing the target object to be selected, the target object itself is introduced. A target object is one or more pieces of content on an interface of the mobile terminal, such as text, files, pictures, emoticons, videos, audio, applications, websites, and links; such content (target objects) exists in application interfaces, web page interfaces, and other interfaces containing such content on the mobile terminal. The target object to be selected is a target object determined based on the user's first operation information. The target object to be selected corresponding to the first operation information can be determined in different ways:
the candidate target object is determined based on the first operation information, for example, different first operation information corresponds to different candidate target objects, in this example, the candidate target object may be determined based on the first operation information. Taking the first operation information as a touch gesture as an example, when the first operation information is an up stroke, the corresponding target object to be selected can be determined as characters in the preset interface, when the first operation information is a down stroke, the corresponding target object to be selected is determined as a picture in the preset interface, and when the first operation information is a left stroke, the corresponding target object to be selected is determined as a website in the preset interface.
In some other examples, the object acted on by the first operation information (e.g., text, a file, a picture, an emoticon, a video, audio, an application, a website, or a link) in the preset interface may be determined as the target object to be selected; in this case, the target information is determined based on the first operation information, acquired from the preset interface, and output as the target object to be selected.
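The first determination mode can be sketched as a simple mapping from swipe direction to content type, following the up/down/left examples above; the mapping is only as binding as those examples.

```kotlin
// Sketch: each swipe direction selects a different content type to collect
// from the preset interface (mapping taken from the examples in the text).
fun candidateTypeForGesture(gesture: String): String? = when (gesture) {
    "swipe up"   -> "text"
    "swipe down" -> "picture"
    "swipe left" -> "website"
    else         -> null  // no candidate type bound to this gesture
}
```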
Outputting the target object to be selected may mean presenting to the user the content (text, files, pictures, emoticons, videos, audio, applications, websites, links, etc.) that the mobile terminal acquired from the preset interface. In some examples, the target object to be selected may be displayed in a popup window, a new web page, or a new interface (see fig. 7, which shows the website, video, and link output from application B). Alternatively, the target object to be selected may be marked on the preset interface, where marking includes highlighting and/or underlining and/or bolding the target object to be selected and the text content corresponding to it.
S13, receiving second operation information for the target object to be selected, and executing corresponding processing.
The corresponding processing includes copying, sharing, collecting, saving, downloading, and the like of the target object to be selected. If the target object to be selected is a link, the corresponding processing is opening the link; if it is an application, opening the application; and if it is a video and/or audio, the corresponding processing may be playing the video and/or audio. In actual operation, the corresponding processing is determined by the target object to be selected.
The user can determine, through a second operation, the target object to be selected that needs to be processed. The second operation information may be one of a long-press operation, a heavy-press operation, a drag operation, and a slide operation; correspondingly, the second preset condition may be that the long-press operation reaches a preset duration, the heavy-press operation reaches a preset pressure, the drag operation follows a preset drag direction, or the slide operation follows a preset slide track.
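A hedged sketch of step S13 as just described: the second operation is checked against the second preset condition, and the processing is chosen by the type of the target object to be selected. All threshold values are assumptions, not values from the patent.

```kotlin
// Sketch: second-operation kinds and their preset conditions.
sealed class SecondOp {
    data class LongPress(val durationMs: Long) : SecondOp()
    data class HeavyPress(val pressure: Float) : SecondOp()
    data class Drag(val direction: String) : SecondOp()
}

fun meetsSecondCondition(op: SecondOp): Boolean = when (op) {
    is SecondOp.LongPress  -> op.durationMs >= 800      // assumed preset duration
    is SecondOp.HeavyPress -> op.pressure >= 0.8f       // assumed preset pressure
    is SecondOp.Drag       -> op.direction == "down"    // assumed preset direction
}

fun processCandidate(type: String, content: String, op: SecondOp) {
    if (!meetsSecondCondition(op)) return
    when (type) {
        "link"           -> println("open link: $content")
        "application"    -> println("open application: $content")
        "video", "audio" -> println("play: $content")
        else             -> println("copy/share/collect/save/download: $content")
    }
}
```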
Compared with the prior art, in which the preset interface must be read manually to obtain the target object to be selected on it, the interaction method provided by the application directly acquires and outputs the target object to be selected from the preset interface when first operation information meeting a first preset condition is received, and performs corresponding processing when second operation information for the target object to be selected is received. The interaction method therefore improves both the efficiency and the speed of information acquisition.
Further embodiments of the interaction method provided by the present application are described below based on the above description.
Referring to fig. 8, an interaction method provided in an embodiment of the present application includes the following steps:
and S21, judging whether the current state of the mobile terminal is a preset state, if so, executing a step S22, and otherwise, stopping executing the steps of the interaction method.
The preset state includes the mobile terminal having the intelligent mode enabled, having a preset application open, or displaying a preset interface. It can be understood that, in some examples, the mobile terminal implements the interaction method provided by this embodiment only when it is in the intelligent mode, and does not implement it otherwise. The intelligent mode may be enabled in the factory settings or selected by the user. In some other examples, when the mobile terminal opens a preset application, it may implement the interaction method based on the preset interface of that application. In still other examples, the method may be triggered when the mobile terminal displays a preset interface; in this embodiment of the application, the preset interface may be a web page interface or an application interface on the mobile terminal, or an interface containing preset content such as text, pictures, websites, videos, and audio (for example, a blog or microblog, whose content includes text, pictures, videos, and the like, and can be copied, saved, played, and so on). In some examples, the mobile terminal performs the steps of the interaction method provided by this embodiment only when the intelligent mode is enabled, the preset application is open, and the application interface of that application is displayed.
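A minimal sketch of the preset-state check of step S21; the field names and the combination logic are assumptions (the claims require at least one of the three states, while the last example above requires all three at once).

```kotlin
// Sketch: decide whether the interaction method may run in the current state.
data class TerminalState(
    val smartModeOn: Boolean,     // intelligent mode enabled
    val foregroundApp: String?,   // currently open application, if any
    val shownInterface: String?   // currently displayed interface, if any
)

fun inPresetState(s: TerminalState, presetApp: String, presetInterface: String): Boolean =
    s.smartModeOn || s.foregroundApp == presetApp || s.shownInterface == presetInterface
// The stricter variant mentioned above would combine the three checks with &&.
```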
S22, receiving first operation information for the preset interface.
The preset interface may be a web page interface or an application interface on the mobile terminal, or an interface containing preset content such as text, pictures, websites, videos, and audio. These (preset) interfaces may include content such as text, files, pictures, emoticons, videos, audio, applications, websites, and links. When a user wants to obtain content such as text or files on a preset interface, the user can operate the preset interface to determine a target object. In this embodiment of the application, the operation on the preset interface is referred to as a first operation. The first operation may be a touch operation on the mobile terminal or voice control of the mobile terminal, and the corresponding first operation information may be the touch gesture, touch area, or operation track of the touch operation, or the voice instruction of the voice control.
The touch gesture may be a swipe gesture (up, down, left, or right) on the display screen of the mobile terminal, a click gesture (single click, multiple clicks, etc.), or a long-press gesture whose touch duration reaches a preset length. The touch may be performed with a finger or a stylus.
The touch area is an area on the display screen of the mobile terminal: it may be the middle area of the screen, an area near the edge of the screen, or areas near the top, left, or right of the screen. For example, when the user touches the left area of the display screen, the switching of the message source application's running mode is triggered. In some examples, the user may drag a displayed message to the edge area of the display screen to trigger the message's source application to switch its running mode.
The operation track may be a track left by the user's operation on the touch screen of the mobile terminal, such as the track of a gesture or the touch track of a stylus. An operation track can be characterized by track parameters, the attribute parameters of each operation track, which may include the start and end positions of the track, the area the track acts on, and/or the track-forming speed, as well as parameters such as the track's direction, the included angle of the track curve, the track's forming duration, and the track's length. When the track parameter is the track-forming speed, the speed may be determined from the start position, the end position, and the forming duration of the track; in some examples, if the operation track is formed quickly, the running mode of the message's source application is switched. When the track parameter is the area the track acts on, that area may be the areas containing the track's start and end positions, and the running mode of the message's source application may be switched according to those two areas. In some examples, the area acted on may be the area covered by the whole track: for example, the running mode of the message's source application is switched when the whole track falls within one area, or when it spans two areas. For example, if the operation track slides up or down from the bottom, the running mode of the message's source application is switched. As another example, if the operation track takes a long time to form and there is a pause during its formation, the running mode of the message's source application is switched.
The user may also control the mobile terminal by voice. For example, the voice instruction may be a segment of speech containing "switch running mode" uttered by the user; when the mobile terminal receives this speech through its microphone and parses out the instruction to switch the running mode, it switches the running mode of the source application of the message.
And S23, determining a target object to be selected corresponding to the first operation information, and outputting the target object to be selected.
Before introducing the target object to be selected, the target object itself is introduced. The target object is one or more items of content, such as text, files, pictures, emoticons, videos, audio, applications, web addresses, and links, on an interface of the mobile terminal; such content exists on application interfaces, web page interfaces, and other content-bearing interfaces of the mobile terminal. The target object to be selected refers to a target object determined based on the user's first operation information. The target object to be selected corresponding to the first operation information can be determined in different ways:
for example, in some examples, different first operation information corresponds to different target objects to be selected, in which case the target object to be selected may be determined directly from the first operation information. Taking a touch gesture as the first operation information: when the first operation information is an up swipe, the corresponding target object to be selected may be determined to be the text in the preset interface; when it is a down swipe, the pictures in the preset interface; and when it is a left swipe, the web addresses in the preset interface.
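As a minimal Kotlin sketch of this correspondence (the enum names are illustrative assumptions, not terms fixed by the embodiment):

    // Which kind of content each swipe gesture selects, per the example above.
    enum class CandidateType { TEXT, PICTURE, WEBSITE }

    enum class SwipeGesture { UP, DOWN, LEFT }

    fun candidateTypeFor(gesture: SwipeGesture): CandidateType = when (gesture) {
        SwipeGesture.UP   -> CandidateType.TEXT     // up swipe: text in the preset interface
        SwipeGesture.DOWN -> CandidateType.PICTURE  // down swipe: pictures
        SwipeGesture.LEFT -> CandidateType.WEBSITE  // left swipe: web addresses
    }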
In some other examples, an object (such as text, a file, a picture, an emoticon, a video, an audio, an application, a web address, or a link) acted on by the first operation information in the preset interface may be determined as the target object to be selected. In this case, target information may be determined based on the first operation information, and that target information may then be obtained from the preset interface and output as the target object to be selected. Outputting the target object to be selected may mean presenting to the user the text, files, pictures, emoticons, videos, audio, applications, web addresses, links, and other content that the mobile terminal has obtained from the preset interface. In some examples, the target object to be selected may be displayed in a pop-up window, a new web page, or a new interface; alternatively, it may be marked on the preset interface itself, where marking includes highlighting, underlining, and/or bolding the target object to be selected and the text content corresponding to it.
In the above two examples, the target object to be selected in the preset interface is determined based on the first operation information alone. In other examples, however, the mobile terminal determines and outputs the target object from the preset interface only after determining that the first operation information satisfies a first preset condition.
In the embodiments of the present application, the first preset condition includes at least one of the following: the operation gesture is a preset gesture, the operation position is a preset position, the operation track is a preset track, and the voice instruction of the voice control is a preset instruction. It should be noted that the preset gesture, preset position, preset track, and preset instruction refer to conditions preset on the mobile terminal; when the operation information (the touch gesture, operation position, operation track, or voice instruction) meets the corresponding condition, the mobile terminal is triggered to determine the target object from the preset interface and output it.
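A hedged sketch of such a check follows; the FirstOperation fields and the concrete preset values are assumptions chosen for illustration, since the embodiment only says the conditions are preset on the terminal.

    // The four kinds of first operation information named by the first preset condition.
    data class FirstOperation(
        val gesture: String? = null,              // e.g. "long_press"
        val position: Pair<Float, Float>? = null, // touch position on the screen
        val trackName: String? = null,            // e.g. "bottom_up_swipe"
        val voiceInstruction: String? = null      // parsed voice instruction text
    )

    // Assumed preset values; on a real terminal these would be configured settings.
    const val PRESET_GESTURE = "long_press"
    const val PRESET_TRACK = "bottom_up_swipe"
    const val PRESET_INSTRUCTION = "extract"

    fun inPresetPosition(p: Pair<Float, Float>): Boolean =
        p.first < 100f // assumed: the preset position is the left edge area

    // The condition is met when at least one of the four checks passes.
    fun satisfiesFirstPresetCondition(op: FirstOperation): Boolean =
        op.gesture == PRESET_GESTURE ||
        op.position?.let(::inPresetPosition) == true ||
        op.trackName == PRESET_TRACK ||
        op.voiceInstruction == PRESET_INSTRUCTION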
And S24, judging whether the third operation information received for the target object to be selected meets a third preset condition.
And S25, if yes, performing preset operation on the target object to be selected.
Like the first operation information described above, the third operation information and the second operation information mentioned later in the embodiments of the present application may include an operation position, an operation track, and an operation gesture, and in some examples may also be a voice control. For the specific operation positions, operation tracks, operation gestures, voice instructions, and the interpretation of the corresponding preset conditions, refer to the description of the first operation information above.
The preset operation in step S25 may be adjusting the ordering of the target objects to be selected, adjusting their output mode, or adjusting their output information. It can be understood that the output target objects to be selected may be unordered, ordered as they appear in the preset interface, or, in some examples, ordered by type (text, file, picture, emoticon, video, and the like); during actual operation the user may adjust this ordering. The output mode of the target objects to be selected includes displaying them in a pop-up window, displaying them on a web page different from the preset interface, and marking them (highlighting, underlining, and/or bolding) on the preset interface. The output information of the target objects to be selected, that is, what is output for each object, may be the object itself (such as text, a file, a picture, an emoticon, or a video), the number of times the object appears, or the type of the object together with the number of times that type appears in the preset interface.
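A small Kotlin sketch of the ordering and counting adjustments described above; the Candidate type and the type strings are assumed for the example.

    // A target object to be selected: its type plus its content.
    data class Candidate(val type: String, val content: String)

    // Re-order the output by object type (text, file, picture, ...).
    fun sortedByType(candidates: List<Candidate>): List<Candidate> =
        candidates.sortedBy { it.type }

    // Output information as "type -> number of occurrences in the preset interface".
    fun occurrencesByType(candidates: List<Candidate>): Map<String, Int> =
        candidates.groupingBy { it.type }.eachCount()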
And S26, receiving second operation information aiming at the target object to be selected, and executing corresponding processing.
After the target objects to be selected are output, the user may process them. The corresponding processing in step S26 includes copying, sharing, collecting, saving, downloading, and the like. If the target object to be selected is a link, the corresponding processing is opening the link; if it is an application, opening the application; if it is a video and/or an audio, playing the video and/or the audio. Which processing applies is determined by the actual target object to be selected during operation.
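One way to read this is as a dispatch on the kind of object, as in the hedged sketch below; the TargetObject hierarchy is an illustrative placeholder, and the println calls stand in for the real copy/open/play actions.

    sealed class TargetObject {
        data class Link(val url: String) : TargetObject()
        data class App(val name: String) : TargetObject()
        data class Video(val uri: String) : TargetObject()
        data class Text(val content: String) : TargetObject()
    }

    // Pick the processing that matches the actual target object, as above.
    fun process(obj: TargetObject) = when (obj) {
        is TargetObject.Link  -> println("opening link ${obj.url}")
        is TargetObject.App   -> println("opening application ${obj.name}")
        is TargetObject.Video -> println("playing video ${obj.uri}")
        is TargetObject.Text  -> println("copying text ${obj.content}")
    }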
In some examples, the user may determine, through a second operation, which target object to be selected needs to be processed. The second operation information may be one of a long-press operation, a heavy-press operation, a dragging operation, and a sliding operation; correspondingly, the second operation satisfying a second preset condition may mean that the long-press operation reaches a preset duration, the heavy-press operation reaches a preset pressure, the dragging operation follows a preset dragging direction, or the sliding operation follows a preset sliding track.
In some other examples, step S13 may include determining whether a target object is determined according to the second operation information; if so, determining whether the second operation information satisfies a second preset condition; and if so, performing the processing corresponding to the second operation information on the target object.
The embodiment of the application also provides an interaction method, which can be applied to an intelligent terminal and can also be applied to a computer terminal in some examples. Referring to fig. 9, the interaction method provided in this embodiment includes:
S101, determining a target page to be extracted.
The target page is the page from which key information is to be extracted. It may be a page in an application program, including a web page in a browser, an article page in an application, or a document such as a Word document, an Excel document, a PPT document, or a text document. The target page may also be an opened page, such as a page corresponding to a web address or a link. There may also be documents of various types stored in a folder; in that case, step S101 may, based on a determined target folder, take all documents in the target folder as target pages to be extracted.
S102, receiving an input instruction of the target page, and determining target key information to be extracted of the target page based on the input instruction.
The input instruction may be a voice instruction, a character input instruction, a touch instruction, or the like. The touch instruction may be selected from instructions such as a long-press operation instruction, a single-click operation instruction, a double-click operation instruction, and a sliding operation instruction on the intelligent terminal.
The target key information is the information to be extracted from the target page and may be a web address, text, a picture (including a two-dimensional code), an emoticon, a video, an application link, or a file.
In some examples, a voice instruction can be recorded through a microphone on the intelligent terminal. In this example the voice instruction is a segment of speech, which is parsed to obtain the voice information in it. For example, if the parsed voice information is "there is a meeting tomorrow; look up the information about litigation in file A", the target key information can be determined to be "litigation".
In some other examples, a character input instruction for the target page may be received through an input device; the characters entered by the user are recorded, and the input characters may be used as the target key information to be extracted from the target page.
S103, extracting target key information in the target page.
After the target page and the target key information are determined, the target page can be queried and the target key information extracted from it. For example, if the target key information is web addresses, all web addresses in the target page may be extracted.
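For instance, a simple pattern match suffices for this case; the regular expression below is a deliberate simplification for illustration, not a complete URL grammar.

    // Simplified pattern for http/https web addresses in the page text.
    val urlPattern = Regex("""https?://[\w.\-/?=&%#]+""")

    fun extractWebsites(pageText: String): List<String> =
        urlPattern.findAll(pageText).map { it.value }.toList()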
In some examples, all information in the target page may be queried, each piece of information in the target page that matches the target key information may be determined as candidate information, and the target key information may then be determined from the candidate information. For example, duplicate entries among the candidate information are removed, and the de-duplicated candidate information is taken as the target key information extracted from the target page. As another example, the number of repeated occurrences of each piece of candidate information is counted, and the top N pieces of candidate information are taken as the target key information extracted from the target page, where N is a positive integer greater than 1.
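A minimal sketch of this de-duplication and top-N selection, assuming the candidate information has already been collected as strings:

    // De-duplicate while preserving first-seen order.
    fun deduplicate(candidates: List<String>): List<String> = candidates.distinct()

    // Keep the N most frequently occurring candidates, most frequent first.
    fun topN(candidates: List<String>, n: Int): List<String> =
        candidates.groupingBy { it }
            .eachCount()
            .entries
            .sortedByDescending { it.value }
            .take(n)
            .map { it.key }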
After step S103, some examples may also output a key information extraction report, whose type is a Word document, an Excel document, and/or a PPT document. It should be understood that the output key information extraction report includes at least a target page field and a target key information field for the record.
In other examples, the target key information extracted in step S103 may be processed further, including sharing, collecting, saving, and opening; for example, opening the web page corresponding to an extracted web address, or opening the related application through an application link.
This embodiment provides an interaction method: after the target page to be extracted is determined and the target key information to be extracted is determined based on an input instruction for the target page, the target key information can be extracted from the target page.
Further embodiments of the interaction method provided by the present application will be described further below on the basis of the above description.
In another interaction method provided in the embodiments of the present application, for ease of presentation, the method is introduced as applied to an intelligent terminal. The method provided by this embodiment comprises the following steps:
S201, determining the currently opened pages in the target application program.
In this embodiment, the target application is the application selected on the intelligent terminal for extracting the key information, and the currently opened pages are the pages currently opened by that application, including the page currently shown on the display interface of the intelligent terminal and pages opened in the background but not shown on the display interface. For example, a browser on the intelligent terminal may have multiple pages open, including the currently displayed page and pages that are open but not displayed.
It should be further understood that if the method is applied to a computer, the target application is an application on the computer, such as Office, and the currently opened pages may refer to one or more open PPT documents, Word documents, and Excel documents in Office.
S202, determining a target page to be extracted from the currently opened pages.
In this embodiment, the target application program has multiple currently opened pages, so the target page to be extracted needs to be determined from among them; the determined target pages may be one or more. In some examples, a user selection instruction on the currently opened pages may be received, and the selected pages are taken as the target pages to be extracted. Referring to fig. 10, the currently opened pages of application program B (the target application) on the intelligent terminal are three: page 1, page 2, and page 3; when the user clicks page 2, page 2 is determined to be the target page to be extracted. It should be noted that the selection instruction here refers to an instruction for selecting among the currently opened pages to determine the target page; besides common touch instructions such as a single click, a double click, and a long press, it may also be an instruction that removes non-target pages, with the remaining pages being the target pages.
In some other examples, every currently opened page in the target application program may be taken as a target page from which key information is to be extracted; the extracted key information is then the key information of each currently opened page.
S203, receiving a touch instruction on the target page on the intelligent terminal.
It is understood that the target page contains many different kinds of information, such as text, web addresses, pictures, emoticons, videos, and application links. Referring to fig. 11, fig. 11 shows page 2 of application program B, which includes four kinds of information: text, a web address, a video, and an application link. After the target page is determined, a touch instruction from the user on the target page can be received, and the key information the user cares about can be determined from among this information. In this embodiment, the information types of the key information include text, web addresses, pictures, emoticons, videos, and application links. The application link, when triggered, responds with something associated with the corresponding application program, for example a pop-up window of the application's download page.
And S204, determining the information type of the target key information based on the touch instruction.
And S205, taking the key information corresponding to the information type on the target page as the target key information to be extracted from the target page.
It can be understood that the key information the user extracts may be of a single type, for example only the web addresses or only the pictures in a certain web page, or of multiple types, for example the web addresses, pictures, and videos in a certain web page. The information types of the target key information may be determined from the touch instruction issued by the user, which indicates the types of key information the user wants to extract; the key information of those types on the target page is then taken as the target key information to be extracted from the target page. For example, the user may touch both the web address information and the video information in the target page; in this example, all information on the target page whose type is either web address or video may be taken as the target key information on the target page.
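A sketch of taking the key information of the selected types as the target key information; the PageItem type and the type strings are assumptions made for the example.

    // One piece of information on the target page, tagged with its information type.
    data class PageItem(val type: String, val content: String)

    // Keep every item whose information type is among the user-selected types.
    fun targetKeyInformation(page: List<PageItem>, selectedTypes: Set<String>): List<PageItem> =
        page.filter { it.type in selectedTypes }

    // Usage: items of type "website" or "video" become the target key information.
    // targetKeyInformation(pageItems, setOf("website", "video"))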
Different examples implement in different ways how to determine, from the received touch instruction, which categories of key information the user needs to extract:
in some examples, touch areas may be laid out on the display interface of the intelligent terminal. After a touch instruction from the user is received, the touch position of the instruction on the intelligent terminal is determined, the touch area corresponding to that position is determined, and the information categories associated with that touch area are taken as the determined one or more types of key information. It should be understood that the information types associated with each touch area may be preset. Referring to fig. 12, two touch areas are shown on the display interface of the intelligent terminal, area a and area b, where area a corresponds to the web address key information and area b corresponds to the video key information. When the user touches area a on the display interface, it can be determined that the user has selected the web address key information; when the user touches both area a and area b, it can be determined that the user has selected both the web address and the video key information.
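A hedged sketch of the touch-area association of fig. 12; the rectangle coordinates are assumed values, since the embodiment only states that the association between areas and information types is preset.

    // An axis-aligned touch area on the display interface.
    data class Area(val left: Float, val top: Float, val right: Float, val bottom: Float) {
        fun contains(x: Float, y: Float) = x in left..right && y in top..bottom
    }

    // Preset association: area a -> web address key information, area b -> video.
    val areaCategories: Map<Area, String> = mapOf(
        Area(0f, 0f, 540f, 1920f) to "website", // area a (assumed left half)
        Area(540f, 0f, 1080f, 1920f) to "video" // area b (assumed right half)
    )

    // The categories selected by one touch point are those of every area it falls in;
    // a multi-touch on areas a and b would call this once per touch point.
    fun selectedCategories(x: Float, y: Float): List<String> =
        areaCategories.filterKeys { it.contains(x, y) }.values.toList()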
In some other examples, the selected information on the target page may be determined from a touch instruction the user issues on information in the target page, and the information category of the selected information is taken as the determined one or more types of key information. Referring to fig. 11, if the user's touch instruction determines that the application links in page 2 are the selected information, all application links in the target page may be taken as the key information.
It should be understood that after determining and querying the key information in the target page, the method may further include a step of performing secondary processing on the key information, including:
S206, querying all information in the target page, and determining each piece of information in the target page that matches the key information as candidate information.
And S207, removing duplicates among the candidate information, and taking the de-duplicated candidate information as the key information extracted from the target page.
It can be understood that the same information may sometimes appear repeatedly in one page, so in this embodiment the queried key information of the target page may also be de-duplicated, that is, the key information that appears repeatedly in the target page is reduced to single entries, and the key information obtained after de-duplication is the key information (target key information) the user wants to extract.
In some other examples, the number of repeated occurrences of the candidate information in step S206 may also be counted, and the candidate information whose number of occurrences ranks in the top N is taken as the key information extracted from the target page; in this way the number of occurrences of the extracted target key information is also roughly known, where N is a positive integer greater than 1.
After the target key information in the target page is extracted, the method further includes reprocessing the target key information, including outputting a key information extraction report based on it; the key information extraction report may be a Word document, an Excel document, and/or a PPT document. In addition, after the target key information in the target page is extracted, the method may further include:
S208, displaying a floating window on the user interface of the intelligent terminal, and displaying the key information extracted from the target page in the floating window.
For step S208, at least the following schemes are possible:
scheme 1: display one floating window on the user interface of the intelligent terminal, and display the key information of each type from the target pages in separate preset display areas within the floating window. Referring to fig. 13, in this example the display interface of the intelligent terminal carries a floating window a with four preset display areas: area c, area d, area e, and area f. Each preset display area can display the key information of one type extracted from the target page; for example, area c may display the web address key information in the target page, area d the application link key information, and area e the video key information, and so on.
Scheme 2: display multiple floating windows on the user interface of the intelligent terminal, and display the key information of each type from the target pages in separate floating windows. Referring to fig. 14, in this example the display interface of the intelligent terminal carries three floating windows: floating window b, floating window c, and floating window d. Each floating window can display the key information of one type extracted from the target page; for example, floating window b may display the web address key information in the target page, floating window c the video key information, and floating window d the application link key information.
When there are multiple target pages, step S208 may display multiple floating windows on the user interface of the intelligent terminal and show the key information of each target page in its own floating window. Referring to fig. 15, in this example the display interface of the intelligent terminal carries three floating windows, floating window e, floating window f, and floating window g; each floating window displays the key information extracted from one target page, for example, floating windows e-g may display the key information extracted from pages 1-3 of target application B respectively.
For the three schemes above, the target key information displayed in the floating windows can be displayed in a sorted order: the date and time of each target page can be compared, and the key information of each target page can be displayed in the floating windows in the order of the dates and times of the target pages.
It should be understood that after the key information in the target page is extracted in step S207 and displayed on the intelligent terminal, the method may also include processing the key information displayed on the intelligent terminal, including:
S209, receiving a touch instruction on the floating window, and determining the selected key information to be processed.
If the type of the selected key information is a web address, jump to the web page corresponding to the address. If the type is an application link, start the application corresponding to the link and jump to a preset page in the application (the page corresponding to the application link may be an application download page, a benefit collection page, or the like). If the type is picture, emoticon, video, or text, display a pop-up window offering sharing, collecting, saving, and deleting; on receiving the user's touch on the share, collect, save, or delete key in the pop-up window, share, collect, save, or delete the selected key information accordingly.
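A hedged sketch of this dispatch using standard Android intents; the SelectedInfo hierarchy and the pop-up stub are illustrative, and a real implementation would depend on how the floating window reports the selection.

    import android.content.Context
    import android.content.Intent
    import android.net.Uri

    sealed class SelectedInfo {
        data class Website(val url: String) : SelectedInfo()
        data class AppLink(val packageName: String) : SelectedInfo()
        data class Item(val description: String) : SelectedInfo() // picture/emoticon/video/text
    }

    fun handleSelection(context: Context, info: SelectedInfo) {
        when (info) {
            // Web address: jump to the corresponding web page.
            is SelectedInfo.Website ->
                context.startActivity(Intent(Intent.ACTION_VIEW, Uri.parse(info.url)))
            // Application link: start the application behind the link, if installed.
            is SelectedInfo.AppLink ->
                context.packageManager.getLaunchIntentForPackage(info.packageName)
                    ?.let { context.startActivity(it) }
            // Picture/emoticon/video/text: offer share, collect, save, delete.
            is SelectedInfo.Item -> showActionPopup(context, info)
        }
    }

    // Stub: a real pop-up would present the share/collect/save/delete choices.
    fun showActionPopup(context: Context, info: SelectedInfo.Item) {
        // e.g. an AlertDialog listing the four actions for the selected item
    }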
Based on the interaction method provided by this embodiment, information can be acquired more efficiently and quickly, and the extracted key information can be given secondary processing, which brings convenience to users.
In these embodiments, the target page may be a stored document, including local article pages (downloaded in a browser or a news application), Word documents, Excel documents, PPT documents, PDF documents, and various files stored locally by applications (such as chat record documents and downloaded documents). In this embodiment, the interaction method provided by the present application includes:
S301, determining a target folder based on a selection instruction, and taking all documents in the target folder as the target pages to be extracted.
A selection instruction from the user on a folder is received, and the selected folder is taken as the target folder; each document stored in the target folder is then taken as a target page to be extracted.
S302, displaying at least one information category of the target key information on a display interface of the intelligent terminal.
At least one information category of the key information is displayed for the user to select from; the information categories of the key information include text, web addresses, pictures, emoticons, videos, and application links. Referring to fig. 16, which shows the information in document C, the displayed information categories of the key information in this example include web addresses, application links, pictures, and videos; see in particular the dashed box in fig. 16.
And S303, receiving a sliding operation instruction on the display interface of the intelligent terminal, and determining the sliding distance of the sliding operation instruction on the intelligent terminal.
S304, determining the selected information type based on the sliding distance.
S305, taking the key information corresponding to the information type in the target page as the target key information to be extracted from the target page.
That is, when the user's finger is lifted, the information category selected by the user can be computed from the slide step length/slide distance of the sliding operation (instruction). Referring to fig. 17, as the user slides down along the edge of the screen and the finger lifts/stops, the selected category is computed from the slide step length/slide distance; in the example shown in fig. 17, the selected information category of the key information switches from web address to picture.
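A minimal sketch of this distance-to-category mapping; the category order and the per-step distance are assumptions, since the embodiment does not fix them.

    // Category order the slide cycles through (assumed).
    val slideCategories = listOf("website", "application link", "picture", "video")

    // Assumed slide step: every 120 px of slide distance advances one category.
    const val SLIDE_STEP_PX = 120f

    fun categoryForSlideDistance(distancePx: Float): String {
        val steps = (distancePx / SLIDE_STEP_PX).toInt()
            .coerceIn(0, slideCategories.size - 1)
        return slideCategories[steps]
    }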
And S306, extracting the target key information in the target page.
This embodiment provides an interaction method that at least allows quick selection of information types based on the user's sliding operation instruction, improving the user experience.
The present embodiment further provides a terminal, as shown in fig. 18, which includes a processor 101, a memory 101, and a communication bus 101, where:
the communication bus 101 is used for realizing connection communication between the processor 101 and the memory 101;
the processor 101 is configured to execute the key information extraction program stored in the memory 101 to implement the steps of the interaction method in the embodiments described above.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
The present embodiment also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the interaction method in the embodiments described above.
It should be noted that, for simplicity, the above method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may, according to the present application, be performed in other orders or simultaneously. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, refer to the related descriptions of the other embodiments. The serial numbers of the above embodiments of the present application are for description only and do not represent the merits of the embodiments. Those skilled in the art can derive many variants without departing from the spirit and scope of the present application and the claims, and these variants all fall within the protection scope of the present application.

Claims (16)

1. An interaction method, characterized in that the method comprises:
S11, receiving first operation information aiming at a preset interface;
S12, if the first operation information meets a first preset condition, acquiring and outputting a target object to be selected from the preset interface;
and S13, receiving second operation information aiming at the target object to be selected, and executing corresponding processing.
2. The method of claim 1, wherein the first operation information comprises at least one of:
an operation gesture, an operation position, an operation track, an operation duration, and voice control; and/or,
the first preset condition comprises at least one of the following:
the operation gesture is a preset gesture;
the operation position is a preset position;
the operation track is a preset track;
the voice instruction of the voice control is a preset instruction.
3. The method of claim 2, wherein the step of outputting the target object to be selected is preceded by:
and determining a target object to be selected corresponding to the first operation information.
4. The method of claim 3, wherein determining the target object to be selected corresponding to the first operation information comprises:
determining target information based on the first operation information;
and acquiring the target information as a target object to be selected and outputting the target information.
5. The method according to any one of claims 1 to 3, wherein the step of S13 includes:
judging whether the target object is determined according to the second operation information or not;
if yes, judging whether the second operation information meets a second preset condition or not;
and if so, executing processing corresponding to the second operation information on the target object.
6. The method of claim 5, wherein the second operation information comprises at least one of:
a long-press operation, a heavy-press operation, a dragging operation, and a sliding operation; and/or,
the second preset condition comprises at least one of the following:
the long pressing operation reaches a preset duration;
the heavy pressing operation reaches a preset pressure;
the dragging operation is a preset dragging direction;
the sliding operation is a preset sliding track.
7. The method of any of claims 1 to 3, wherein the predetermined interface comprises at least one of:
a web page interface;
an application interface;
an interface containing preset content, the preset content comprising: at least one of text, picture, web address, video, audio.
8. The method of any one of claims 1 to 3, wherein the target object comprises at least one of:
text, files, pictures, emoticons, video, audio, applications, web sites, links.
9. The method of any of claims 1 to 3, wherein the processing comprises at least one of:
copying, sharing, collecting, saving, downloading, opening a link, opening an application, playing a video and playing an audio.
10. The method according to any one of claims 1 to 3, wherein the step of S12 is preceded by the step of:
receiving third operation information aiming at a target object to be selected;
and if the third operation information meets a third preset condition, performing preset operation on the target object to be selected.
11. The method of claim 10, wherein the third operation information comprises at least one of:
an operation position, an operation track and an operation gesture; and/or,
the third preset condition comprises at least one of the following:
the operation position is a preset position;
the operation track is a preset track;
the operation gesture is a preset gesture.
12. The method of claim 10, wherein the pre-set operation comprises at least one of:
adjusting the sequence of the target objects to be selected;
adjusting the output mode of the target object to be selected;
and adjusting the output information of the target object to be selected.
13. The method according to any one of claims 1 to 3, wherein the step of S11 is preceded by the steps of:
judging whether the current state of the mobile terminal is a preset state or not;
if yes, the step S11 is executed.
14. The method of claim 13, wherein the preset state comprises at least one of:
the terminal starts an intelligent mode;
the terminal starts a preset application;
and the terminal displays a preset interface.
15. A terminal, characterized in that the terminal comprises a processor, a memory;
the memory is used for storing a computer program;
the processor is adapted to carry out the steps of the interaction method according to any one of claims 1 to 14 when executing the computer program.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the interaction method according to any one of claims 1 to 14.
CN202011108069.8A 2020-10-16 2020-10-16 Interaction method, terminal and computer readable storage medium Pending CN112199019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011108069.8A CN112199019A (en) 2020-10-16 2020-10-16 Interaction method, terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112199019A true CN112199019A (en) 2021-01-08

Family

ID=74009696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011108069.8A Pending CN112199019A (en) 2020-10-16 2020-10-16 Interaction method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112199019A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135886A (en) * 2011-03-24 2011-07-27 汉王科技股份有限公司 Method, device and electronic equipment for sharing page content
CN105739886A (en) * 2016-01-25 2016-07-06 广东欧珀移动通信有限公司 Video playing method and mobile terminal
CN105549895A (en) * 2016-02-01 2016-05-04 广东欧珀移动通信有限公司 Application control method and mobile terminal
CN105791592A (en) * 2016-04-29 2016-07-20 努比亚技术有限公司 Information prompting method and mobile terminal
US20190129923A1 (en) * 2016-08-26 2019-05-02 Tencent Technology (Shenzhen) Company Limited Method and appratus for playing video in independent window by browser, and storage medium
CN107819939A (en) * 2017-10-24 2018-03-20 努比亚技术有限公司 A kind of information acquisition method, terminal and computer-readable recording medium
CN109960446A (en) * 2017-12-25 2019-07-02 华为终端有限公司 It is a kind of to control the method and terminal device that selected object is shown in application interface
CN110209319A (en) * 2019-05-21 2019-09-06 掌阅科技股份有限公司 The display methods of page info calculates equipment and computer storage medium
CN111026309A (en) * 2019-12-12 2020-04-17 上海传英信息技术有限公司 Intelligent terminal gesture copying method, terminal and medium
CN111338538A (en) * 2020-02-24 2020-06-26 广州视源电子科技股份有限公司 Page operation method, device, equipment and storage medium of intelligent interactive tablet

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113325981A (en) * 2021-06-07 2021-08-31 上海传英信息技术有限公司 Processing method, mobile terminal and storage medium
CN113325981B (en) * 2021-06-07 2023-09-01 上海传英信息技术有限公司 Processing method, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination