CN114510166B - Operation method, intelligent terminal and storage medium


Info

Publication number
CN114510166B
Authority
CN
China
Prior art keywords: preset, processed, interface, application, service
Prior art date
Legal status (assumption, not a legal conclusion): Active
Application number
CN202210338700.6A
Other languages: Chinese (zh)
Other versions: CN114510166A (en)
Inventor
沈剑锋
汪智勇
李晨雄
黑啸吉
陈蓉
刘丽
Current Assignee (the listed assignees may be inaccurate):
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd
Priority to CN202210338700.6A
Publication of CN114510166A
Application granted
Publication of CN114510166B
Priority to PCT/CN2023/078276 (WO2023169236A1)
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/0418Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G06F3/04186Touch location disambiguation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to an operation method, an intelligent terminal, and a storage medium. The method comprises the following steps: in response to a first operation on at least one object to be processed, outputting the object to be processed on a first preset interface; and in response to a second operation, performing first preset processing on the object to be processed through at least one second application or service. By providing a preset interface for displaying the object to be processed and performing data processing through operations on that interface, the technical solution can effectively reduce misoperation and improve user experience.

Description

Operation method, intelligent terminal and storage medium
Technical Field
The application relates to the technical field of user interaction, in particular to an operation method, an intelligent terminal and a storage medium.
Background
With the rapid development of terminal technology, the functions of mobile terminals such as mobile phones and tablet computers have also improved, and mobile terminals have become common tools in daily life and work.
In the course of conceiving and implementing the present application, the inventors found at least the following problems: in some implementations, a user is prone to misoperation when sharing data. For example, when tapping objects, it is easy to select the wrong object or miss one; likewise, while dragging an object, releasing it prematurely aborts the sharing, which degrades the user experience.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an operation method, an intelligent terminal, and a storage medium, which provide a preset interface to display an object to be processed and then perform data processing through operations on the preset interface, thereby effectively reducing misoperation and improving user experience.
To solve the above technical problem, the present application provides an operation method, comprising the following steps:
S1: in response to a first operation on at least one object to be processed, outputting the object to be processed on a first preset interface;
S3: in response to a second operation, performing first preset processing on the object to be processed through at least one second application or service.
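The patent gives no reference implementation, but steps S1 and S3 can be read as a stage-then-dispatch pattern: objects are first collected on a preset interface, then handed to a second application or service. A minimal Python sketch; all class and function names here are hypothetical, not from the patent:

```python
# Hypothetical sketch of steps S1/S3: stage objects on a preset
# interface, then process them via a second application or service.
class PresetInterface:
    def __init__(self):
        self.objects = []  # objects to be processed

    def on_first_operation(self, obj):
        # S1: in response to a first operation, output (stage) the
        # object on the first preset interface.
        self.objects.append(obj)

    def on_second_operation(self, service):
        # S3: in response to a second operation, perform the first
        # preset processing through a second application or service.
        return [service(obj) for obj in self.objects]

iface = PresetInterface()
iface.on_first_operation("photo.jpg")
iface.on_first_operation("note.txt")
shared = iface.on_second_operation(lambda o: "shared:" + o)
print(shared)  # → ['shared:photo.jpg', 'shared:note.txt']
```

The `service` callable stands in for whatever copy/transfer/share action the second application exposes; the patent leaves that binding open.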
Optionally, step S1 includes:
S11: in response to a first operation on at least one object to be processed, acquiring object information of the object to be processed;
S12: outputting the object to be processed on the first preset interface according to the object information.
Optionally, step S12 includes at least one of:
outputting the object to be processed on a first preset interface according to the output form corresponding to the object information of the object to be processed;
when the number of the objects to be processed is at least two, outputting the associated objects to be processed in the same area in the first preset interface according to the object information;
and when the number of the objects to be processed is at least two, outputting the objects to be processed in the first preset interface according to a preset sequence according to the matching result of the object information and the use data.
Optionally, outputting the associated objects to be processed in the same area of the first preset interface according to the object information includes at least one of:
tiling and displaying the objects to be processed in each area;
adjusting the display ratio of each area according to the number of the objects to be processed in each area;
adjusting the display proportion of each area according to the type of the object to be processed in each area;
the different regions are displayed side by side or at least partially stacked.
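As an illustration only, the region-grouping and ratio-adjustment options above might be modeled as follows. The grouping key (object type), the data shape, and the count-proportional ratio rule are assumptions; the patent does not fix any of them:

```python
# Hypothetical sketch: group associated objects into regions and
# size each region in proportion to its object count.
from collections import defaultdict

def layout_regions(objects):
    """objects: list of (name, kind) pairs; returns {kind: (items, ratio)}."""
    regions = defaultdict(list)
    for name, kind in objects:
        regions[kind].append(name)  # associated objects share a region
    total = sum(len(items) for items in regions.values())
    # display ratio of each region proportional to its object count
    return {kind: (items, len(items) / total)
            for kind, items in regions.items()}

layout = layout_regions([("a.jpg", "image"), ("b.jpg", "image"),
                         ("c.txt", "text")])
# the "image" region gets 2/3 of the display, "text" gets 1/3
```

A real terminal would also tile the items inside each region and decide whether regions sit side by side or partially stacked, per the options above.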
Optionally, step S3 includes:
S31: in response to a second operation, outputting a second preset interface, wherein optionally the second preset interface includes an identifier of at least one second application or service;
S32: in response to a third operation on the identifier of the second application or service meeting a first preset condition, performing first preset processing through the second application or service.
Optionally, performing the first preset processing includes at least one of:
outputting the content and/or attribute of the corresponding object to be processed;
opening the content and/or attribute of the corresponding object to be processed through the second application or service;
and copying, applying, transferring or sharing the corresponding object to be processed to the second application or service.
Optionally, the method further comprises the following step:
S2: in response to a fourth operation, performing second preset processing on the object to be processed so as to update the object to be processed.
Optionally, step S2 is performed after step S1.
Optionally, step S2 is performed before step S3.
Optionally, step S2 includes at least one of:
in response to an operation of selecting at least one part of content of the object to be processed, saving the selected content as the object to be processed on the first preset interface;
responding to the operation of splitting the content of the object to be processed, and saving the split content as the object to be processed on the first preset interface;
in response to an operation of combining at least a part of contents of at least two objects to be processed, saving the combined contents as the objects to be processed on the first preset interface;
responding to the operation of copying at least one part of content of the object to be processed, and saving the copied content as the object to be processed on the first preset interface;
in response to the operation of deleting at least part of the content of the object to be processed, saving the object to be processed after the content is deleted as the object to be processed on the first preset interface;
and responding to the operation of adding the object to be processed, and adding the object to be processed on the first preset interface.
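The update operations of step S2 can be sketched as plain list transformations over the staged objects. How split and merge actually behave is an illustrative assumption here, not the patent's implementation:

```python
# Hypothetical sketch of step S2 updates: split one staged object
# into parts, or merge two staged objects into a new one.
def split_object(staged, index, parts):
    """Replace staged[index] with its split parts."""
    return staged[:index] + list(parts) + staged[index + 1:]

def merge_objects(staged, i, j, sep="\n"):
    """Combine staged[i] and staged[j] into one new staged object."""
    merged = staged[i] + sep + staged[j]
    rest = [o for k, o in enumerate(staged) if k not in (i, j)]
    return rest + [merged]

staged = ["hello world", "bye"]
staged = split_object(staged, 0, "hello world".split())
# staged is now ["hello", "world", "bye"]
staged = merge_objects(staged, 0, 1, sep=" ")
# staged is now ["bye", "hello world"]
```

Copying, deleting, and adding objects would be analogous one-line list operations on the same staged collection.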
Optionally, adding the object to be processed on the first preset interface in response to the operation of adding the object to be processed includes:
in response to the operation of adding the object to be processed, displaying an interface associated with a source interface of an existing object to be processed and/or at least one associated object associated with the existing object to be processed;
and in response to an operation on the source interface and/or the associated object meeting a second preset condition, adding the object to be processed corresponding to the operation on the first preset interface.
Optionally, meeting the second preset condition includes at least one of the following:
dragging the object to the first preset interface;
dragging the object to a preset area different from the source interface and the first preset interface;
selecting an object and flicking it toward the first preset interface;
the operation trajectory surrounding or covering at least one object.
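Two of the conditions above are geometric and can be checked with simple rectangle tests. A hedged sketch, with rectangles as (left, top, right, bottom) tuples; approximating "surrounds or covers" by bounding-box containment is an assumption of this sketch:

```python
# Hypothetical sketch of two second-preset-condition checks:
# (1) object dropped onto the first preset interface,
# (2) drag trajectory surrounds or covers an object.
def point_in_rect(p, rect):
    (x, y), (l, t, r, b) = p, rect
    return l <= x <= r and t <= y <= b

def dropped_on_interface(drop_point, interface_rect):
    # condition: object dragged to the first preset interface
    return point_in_rect(drop_point, interface_rect)

def trajectory_covers(trajectory, obj_rect):
    # condition: trajectory surrounds or covers the object,
    # approximated here as "the trajectory's bounding box
    # contains the object's bounding box"
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    l, t, r, b = obj_rect
    return (min(xs) <= l and min(ys) <= t
            and max(xs) >= r and max(ys) >= b)
```

For example, a circling gesture `[(0, 0), (10, 0), (10, 10), (0, 10)]` covers an object occupying `(2, 2, 8, 8)`, while a short stroke inside the object does not.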
The present application also provides an operation method, comprising the following steps:
S10: in response to a first operation on at least one first object and at least one second object meeting a first preset condition, outputting the first object and/or the second object in a preset form;
S30: in response to a second operation on at least one object in the preset form meeting a second preset condition, performing preset processing through at least one second application or service.
Optionally, the preset form includes at least one of: floating identifier and floating content; and/or, when the first object and/or the second object is output in the preset form, the first object and/or the second object moves to a first preset area for display or is displayed above its original position.
Optionally, the method further comprises the steps of:
S21: in response to a third operation on the first object and/or the second object meeting a third preset condition, outputting an editing interface of the corresponding object;
s22: and updating, deleting and/or adding at least one object in the preset form in response to the operation on the editing interface.
Optionally, step S22 includes at least one of:
responding to an operation of selecting a part of content of a corresponding object, and adding at least one object in the preset form according to the selected content;
responding to the operation of splitting the content of the corresponding object, and adding at least one object in the preset form according to the split content;
responding to an operation of combining at least part of contents of the first object and the second object, and adding at least one object in the preset form according to the combined contents;
in response to an operation of deleting a part of the content of the corresponding object, updating the corresponding object according to the deleted content;
and responding to the operation of adding the object, and adding at least one object in the preset form.
Optionally, adding at least one object in the preset form in response to the operation of adding an object includes:
in response to an operation of adding an object, displaying at least one associated object associated with the first object and/or the second object;
and responding to the operation of the associated object, and adding at least one object in the preset form.
Optionally, meeting the second preset condition includes at least one of the following:
dragging the first object and/or the second object to a second preset area;
long-pressing or re-pressing the first object and/or the second object;
dragging the first object and the second object to overlap.
Optionally, step S30 includes:
in response to a second operation on at least one object in the preset form meeting a second preset condition, outputting a first preset interface, wherein optionally the first preset interface includes an identifier of at least one second application or service;
and in response to the operation on the identifier of the second application or service meeting a fourth preset condition, performing preset processing through at least one second application or service.
The present application also provides an operating method comprising the steps of:
s100: responding to a first operation on at least one first object, and displaying a source interface of the first object and at least one first preset interface corresponding to at least one second application or service;
s300: and responding to the second operation meeting the first preset condition, and executing preset processing through the second application or service.
Optionally, the first preset interface includes the first object.
Optionally, step S100 includes:
in response to a first operation on at least one first object, outputting a second preset interface, wherein optionally the second preset interface includes an identifier of at least one second application or service;
and in response to a third operation on the identifier of the second application or service meeting a second preset condition, displaying the source interface of the first object and the at least one first preset interface corresponding to the at least one second application or service.
Optionally, the first preset interface is at least one target interface of the second application or service, which is matched with the first object.
Optionally, determining the target interface of the second application or service matching the first object includes at least one of:
determining a target interface of the second application or service matching the first object according to the usage data;
determining a target interface of the second application or service matched with the first object according to the function or type of the first object;
and determining a target interface of the second application or service matched with the first object according to the operation scene of the first object.
Optionally, the method further comprises at least one of:
in response to a fourth operation on a second object in the source interface, adding the second object to the first preset interface;
in response to a fifth operation on the first object in the first preset interface, removing the first object from the first preset interface;
in response to closing of the first preset interface, displaying the source interface in an enlarged form;
and responding to the closing of the source interface, and outputting an original interface of the second application or service, which corresponds to the first preset interface.
Optionally, the original interface includes an object in the first preset interface.
The present application further provides an intelligent terminal, comprising a memory and a processor, wherein the memory stores an operating program which, when executed by the processor, implements the steps of any of the operation methods described above.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the operation methods described above.
As described above, the present application provides an operation method, an intelligent terminal, and a storage medium. The method comprises the following steps: in response to a first operation on at least one object to be processed, outputting the object to be processed on a first preset interface; and in response to a second operation, performing first preset processing on the object to be processed through at least one second application or service. By providing a preset interface for displaying the object to be processed and performing data processing through operations on that interface, the technical solution can effectively reduce misoperation and improve user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. To illustrate the technical solutions of the embodiments more clearly, the drawings needed in the description of the embodiments are briefly introduced below; those skilled in the art could obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of an intelligent terminal for implementing various embodiments of the present application.
Fig. 2 is a communication network system architecture diagram according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating an operation method according to the first embodiment.
Fig. 4 is an interface diagram illustrating the operation method according to the first embodiment.
Fig. 5 is a flowchart illustrating an operation method according to the second embodiment.
Fig. 6 is an interface diagram illustrating the operation method according to the second embodiment.
Fig. 7 is a flowchart illustrating an operation method according to the third embodiment.
Fig. 8 is an interface diagram illustrating the operation method according to the third embodiment.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, reciting an element with the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, similarly named elements, features, or components in different embodiments of the disclosure may have the same meaning or different meanings; the particular meaning is determined by its interpretation in, or the context of, the specific embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B, and C" means any of the following: A; B; C; A and B; A and C; B and C; A and B and C. Likewise, "A, B or C" and "A, B and/or C" mean any of the following: A; B; C; A and B; A and C; B and C; A and B and C. An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless otherwise indicated herein, the steps need not be performed in the exact order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same time but may be performed at different times and in different orders, and may be performed alternately or at least partially in parallel with other steps or with sub-steps of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S1 and S2 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S2 first and then S1 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module," "component," or "unit" used to denote elements are adopted only for convenience of description and have no specific meaning in themselves. Thus, "module," "component," and "unit" may be used interchangeably.
The smart terminal may be implemented in various forms. For example, the smart terminal described in the present application may include smart terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
While the following description will be given by way of example of a smart terminal, those skilled in the art will appreciate that the configuration according to the embodiments of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of an intelligent terminal for implementing various embodiments of the present application, the intelligent terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the intelligent terminal architecture shown in fig. 1 does not constitute a limitation of the intelligent terminal, and that the intelligent terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each component of the intelligent terminal with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000 ), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), TDD-LTE (Time Division duplex-Long Term Evolution, Time Division Long Term Evolution), 5G, and so on.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the intelligent terminal can help a user send and receive e-mails, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is not an essential component of the intelligent terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the smart terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the smart terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The smart terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the smart terminal 100 is moved to the ear. As one kind of motion sensor, the accelerometer can detect the magnitude of acceleration in each direction (generally, three axes) and the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as portrait/landscape switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as pedometer and tapping). Other sensors that may be configured on the mobile phone, such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described further here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the intelligent terminal. Optionally, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations of a user on or near it (e.g., operations performed on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. Optionally, the touch detection device detects the touch orientation of the user, detects the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type, among others. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a power key), a trackball, a mouse, and a joystick.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 1, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the intelligent terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the intelligent terminal, which is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the smart terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the smart terminal 100 or may be used to transmit data between the smart terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area. Optionally, the program storage area may store the operating system, application programs required by at least one function (such as a sound playing function and an image playing function), and the like; the data storage area may store data created according to the use of the terminal (such as audio data and a phonebook), and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the intelligent terminal. It connects the various parts of the entire terminal using various interfaces and lines, and performs the terminal's functions and processes its data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 110.
The intelligent terminal 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Although not shown in fig. 1, the smart terminal 100 may further include a bluetooth module, etc., which will not be described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the intelligent terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present disclosure. The communication network system is an LTE (Long Term Evolution) system, which includes a User Equipment (UE) 201, an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) 202, an Evolved Packet Core (EPC) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Alternatively, the user equipment 201 may be the terminal 100 described above, and details thereof are not repeated here.
The evolved UMTS terrestrial radio access network 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. Optionally, the eNodeB 2021 may be connected to the other eNodeBs 2022 through a backhaul (e.g., an X2 interface); the eNodeB 2021 is connected to the evolved packet core network 203 and may provide the user equipment 201 with access to it.
The evolved packet core network 203 may include a Mobility Management Entity (MME) 2031, a Home Subscriber Server (HSS) 2032, other mobility management entities 2033, a Serving Gateway (SGW) 2034, a Packet Data Network Gateway (PDN Gateway, PGW) 2035, a Policy and Charging Rules Function (PCRF) 2036, and so on. Optionally, the mobility management entity 2031 is a control node that handles signaling between the user equipment 201 and the evolved packet core network 203, providing bearer and connection management. The home subscriber server 2032 provides register-management functions, such as those of a home location register (not shown), and holds user-specific information about service characteristics, data rates, and the like. All user data may be sent through the serving gateway 2034; the packet data network gateway 2035 may provide IP address assignment and other functions for the user equipment 201; and the policy and charging rules function 2036 is the policy decision point for policy and charging control of service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems (e.g. 5G), and the like.
Based on the above intelligent terminal hardware structure and communication network system, various embodiments of the present application are provided.
First embodiment
Fig. 3 is a flowchart illustrating an operation method according to the first embodiment. As shown in fig. 3, the operation method of the present application includes the following steps:
S1: in response to a first operation on at least one object to be processed, outputting the object to be processed on a first preset interface;
S3: in response to a second operation, performing first preset processing on the object to be processed through at least one second application or service.
In this way, when the user needs to perform data processing, a preset interface is provided to display the object to be processed, and the processing is then carried out through operations on that interface; this effectively reduces misoperation and improves the user experience.
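Purely as an illustration, the two steps can be sketched with hypothetical helper functions; the names `s1_first_operation` and `s3_second_operation` and the dictionary-based interface are assumptions, not part of the disclosed method:

```python
def s1_first_operation(interface, objects):
    """S1: in response to the first operation, place the selected objects
    on the first preset interface so the user can review them."""
    interface["pending"] = list(objects)
    return interface

def s3_second_operation(interface, process):
    """S3: in response to the second operation, hand every pending object
    to the chosen second application/service for first preset processing."""
    return [process(obj) for obj in interface["pending"]]
```

Here `process` stands in for whatever the second application or service does with each object.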
Optionally, the object to be processed includes at least one of: a picture, text, video, music, a hardware function, a software function, a screenshot interface of the current application or service, a system screenshot interface, a system mode, and configuration information. Optionally, the picture, text, video, or music may be the file itself, or a link for opening the corresponding file. Optionally, a hardware function refers to a hardware capability that can be used by an application or service, including but not limited to at least one of the functions of hardware configured on the intelligent terminal, such as a camera, microphone, flash lamp, or gyroscope. In one scenario, application A can open the camera to take a picture, and the user may share the corresponding operation button of application A with application B, which cannot open the camera by itself, so that application B can also open the camera to take a picture. For example, a mailbox application cannot open the camera; when a picture needs to be sent by mail, the user may share the corresponding button of a social or photographing application with the mailbox application, so that the mailbox application can open the camera, take a picture, and edit and send it directly. This reduces the number of back-and-forth operations between applications and is more convenient. Optionally, a software function refers to a software capability that can be used by an application or service, including but not limited to at least one of the intelligent terminal's control function for smart home devices and an operation interface of an application or service. Optionally, the screenshot interface of the current application or service refers to the interface of the screenshot function that the current application or service itself provides.
Optionally, the system screenshot interface refers to the interface of the system-level screenshot function. Optionally, the system mode includes, but is not limited to, at least one of a screen-shot mode, a do-not-disturb mode, and a landscape mode. Optionally, the configuration information includes, but is not limited to, at least one of an information alert tone, a font size, a background configuration, a notification configuration, and preference data. In one scenario, the user configures a favorite chat background in application A; when the same background is needed in application B, the background configuration information of application A can be shared with application B, which can then quickly apply the user's favorite chat background without the user repeating the configuration. This is more convenient and intelligent.
Optionally, the first object comes from at least one application or service different from the second application or service. Optionally, the first object comes from a first application or service, or from an application associated with the first application or service; associated applications may be of the same type (for example, both social applications) or may be associated with the same location (for example, a company or an airport). In this way, cross-application sharing between applications or services can be achieved; the types of sharable data are rich, the sharing process is more intuitive, convenient, quick, and intelligent, the sharing scenarios are more diverse, the sense of boundary between applications or services is reduced, and the user experience is better.
Optionally, in response to a first operation on at least one object to be processed, the object to be processed is output on a first preset interface, where the user can conveniently check and confirm the selection (for example, whether anything was missed, duplicated, or wrongly selected). Then, in response to a second operation, the object to be processed can be processed through at least one second application or service. This reduces misoperation during processing, is convenient, and improves the accuracy and success rate of the operation.
Optionally, the first operation and/or the second operation includes at least one of: a long press, a re-press, a move, a long press and move, a re-press and move, a long press or re-press for a preset duration, a copy, a long press and flick, and a re-press and flick. Optionally, the first operation and/or the second operation further includes at least one of the following pieces of security verification information: a movement trajectory and biometric information. Optionally, the movement trajectory may be a preset trajectory such as a circle; when the movement trajectory of the first operation matches the preset trajectory, the security verification passes and the object to be processed is allowed to be processed. Optionally, the biometric information may be fingerprint information; when the user long-presses or re-presses the screen, the fingerprint information is collected by a fingerprint collection module arranged on the screen for security verification, and after verification passes, the object to be processed is allowed to be processed.
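As one hedged illustration of trajectory-based verification, a rough "circle" check can test whether the sampled touch points keep a near-constant distance from their centroid. The function name, tolerance value, and heuristic are assumptions, not the patent's algorithm:

```python
import math

def is_circle_gesture(points, tolerance=0.15):
    """Heuristically decide whether sampled (x, y) touch points trace a
    rough circle: every point's distance from the centroid must stay
    within `tolerance` (relative) of the mean radius."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False  # all points coincide; no circle
    return all(abs(r - mean_r) / mean_r <= tolerance for r in radii)
```

A production implementation would also need resampling and rotation/scale handling; this only conveys the matching idea.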
Optionally, to improve the convenience and accuracy of the operation, step S1 includes:
S11: in response to a first operation on at least one object to be processed, acquiring object information of the object to be processed;
S12: outputting the object to be processed on the first preset interface according to the object information.
Optionally, the object information of the object to be processed includes at least one of: function, source, operational scenario. Alternatively, the functions include software functions such as editing pictures, screen-capturing, muting, and the like, and hardware functions such as photographing functions, flash functions, and the like. Optionally, the source of the object to be processed includes at least one of an interface, an application, a contact, local or network data, and an associated device. Optionally, the operational scenario includes an operational location and/or time.
Optionally, the associated device may be a device (for example, a network device such as a smart wearable device, a car networking device, a smart home device, or a server, etc.) directly or indirectly connected to the smart terminal applying the operation method, or may be a device that has some association with the smart terminal applying the operation method (for example, a user is the same user, a login account is the same user, etc.). In one scenario, the associated device is a smart wearable device bound to a smart terminal using the operation method, and the object to be processed may be a software function, a hardware function, or data and/or configuration information on the smart wearable device.
Optionally, step S12 includes at least one of:
outputting the object to be processed on a first preset interface according to the output form corresponding to the object information of the object to be processed;
when there are at least two objects to be processed, outputting the associated objects to be processed in the same area of the first preset interface according to the object information;
when there are at least two objects to be processed, outputting the objects to be processed on the first preset interface in a preset order according to the result of matching the object information with the usage data.
Optionally, the object to be processed is output on the first preset interface in the output form corresponding to its object information; different object information may correspond to the same or different output forms. For example, the output form of an image or video may be a web link or a thumbnail, the output form of a file may be an identifier containing file information, the output form of a software function may be a control identifier, the output form of a hardware function may be a graphic corresponding to the hardware, and the output form of an object from an associated device may combine the output form of that device with the output form of the object. This lets the user confirm the object to be processed more intuitively.
Optionally, when there are at least two objects to be processed, the associated objects to be processed are output in the same area in the first preset interface according to the object information, for example, objects with the same function are output in the same area, objects from the same source are output in the same area, objects from the same application or service are output in the same area, and objects from the same device are output in the same area, so that a user can more conveniently confirm the objects to be processed.
Optionally, when there are at least two objects to be processed, the objects are output on the first preset interface in a preset order according to the result of matching the object information with the usage data. For example, if the usage data indicates that the types of objects the user most frequently shares are, in descending order of frequency, files, pictures, and web pages, the objects are arranged in that order. Alternatively, the usage data may correspond to a scene; different locations and/or times correspond to different usage data, so the frequency ranking of object types at the office may differ from the ranking at home. In this way, the user can confirm the object to be processed more conveniently and quickly.
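The grouping-by-source and frequency-ordering behaviour described above might be sketched as follows; the `source`/`type` keys and the per-scene `usage_counts` mapping are illustrative assumptions:

```python
from collections import defaultdict

def layout_objects(objects, usage_counts):
    """Group objects into areas by their source, then order each area by
    how often the user shares that object type in the current scene
    (descending frequency)."""
    areas = defaultdict(list)
    for obj in objects:
        areas[obj["source"]].append(obj)
    for objs in areas.values():
        objs.sort(key=lambda o: usage_counts.get(o["type"], 0), reverse=True)
    return dict(areas)
```

Scene-specific behaviour would simply pass a different `usage_counts` mapping per location and/or time.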
Optionally, in order to make the display content of each display area clearer and facilitate viewing, the method outputs the associated object to be processed in the same area in the first preset interface according to the object information, and includes at least one of the following:
tiling and displaying the objects to be processed in each area;
adjusting the display ratio of each area according to the number of the objects to be processed in each area;
adjusting the display proportion of each area according to the type of the object to be processed in each area;
the different regions are displayed side by side or at least partially stacked.
Optionally, the display ratio of each area is adjusted according to the number of the objects to be processed in each area, for example, the display ratio of the area with more objects to be processed is larger than that of the area with less objects to be processed, so that the objects in each area can be tiled as much as possible by adjusting the display ratio of each area.
Optionally, according to the type of the object to be processed in each region, the display ratio of each region is adjusted, for example, a picture and a video occupy a larger area, while a file, a link, a function option, and the like occupy a smaller area, and the display ratio of the region including the picture and the video may be adjusted to be larger than the display ratio of the region including the file, the link, and the function option.
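A minimal sketch of the ratio adjustment, under the assumption (invented for illustration) that pictures and videos are weighted double relative to files, links, and function options:

```python
def display_ratios(areas, type_weights=None):
    """Split the first preset interface among areas in proportion to the
    number of objects each holds, weighting 'large' types (pictures,
    videos) more heavily so they get more room."""
    type_weights = type_weights or {"picture": 2.0, "video": 2.0}
    scores = {
        name: sum(type_weights.get(o["type"], 1.0) for o in objs)
        for name, objs in areas.items()
    }
    total = sum(scores.values()) or 1.0  # avoid dividing by zero
    return {name: score / total for name, score in scores.items()}
```

With default weights, an area of two pictures gets twice the share of an area of two files.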
Optionally, step S3 includes:
S31: in response to the second operation, outputting a second preset interface, where optionally the second preset interface includes an identifier of at least one second application or service;
S32: in response to a third operation on the identifier of the second application or service meeting a first preset condition, performing the first preset processing through the second application or service.
Optionally, in step S31, outputting a second preset interface includes:
determining at least one second application or service;
and displaying the identifier of at least one second application or service on a second preset interface.
Optionally, determining at least one second application or service includes at least one of:
determining at least one second application or service according to the sharing record of the object to be processed;
determining at least one second application or service according to the current background application;
determining at least one second application or service according to the usage data;
determining at least one second application or service according to the content keywords of the object to be processed;
determining at least one second application or service according to the content type of the object to be processed;
and determining at least one second application or service according to the operation of adding the application or service.
Optionally, at least one second application or service is determined according to the sharing record of the object to be processed, for example, if the sharing record indicates that the user frequently shares the object to be processed to a specific type of application or service or a fixed application or service, the corresponding second application or service may be determined according to the sharing record.
Optionally, at least one second application or service is determined according to the current background application, i.e., the application most recently switched to run in the background; such an application is often one the user uses frequently, so this determination is fairly accurate.
Optionally, at least one second application or service is determined according to the usage data, for example, if the application or service that the user is accustomed to sharing is a social application, a memo or a picture editing application, and the like, the social application, the memo or the picture editing application may be recommended as the second application or service.
Optionally, at least one second application or service is determined according to the content keywords of the object to be processed, for example, if the object to be processed is a text containing a "work plan", a related application or service such as a memo, a social application, or a mailbox application may be recommended as the second application or service.
Optionally, at least one second application or service is determined according to the content type of the object to be processed, for example, when the content type of the object to be processed is a picture, a text, a video, or music, a social application or an editor that can process corresponding content may be recommended, or when the content type of the object to be processed is a hardware function, a software function, a screenshot interface of a current application or service, or a system screenshot interface, an application or service without a corresponding function and/or interface may be recommended, or when the content type of the object to be processed is a system mode or configuration information, an application or service that applies a corresponding mode and/or configuration may be recommended.
Optionally, at least one second application or service is determined according to the operation of adding the application or service, for example, when the at least one second application or service displayed on the first preset interface does not have an application or service that needs to be shared, the user may select the application or service to add to the first preset interface for display.
Optionally, the user may set the frequently used application as a fixed application or service, and the set application or service is displayed each time the first preset interface is displayed.
Optionally, when the current object to be processed does not correspond to the sharing record and/or does not use the data currently, the at least one second application or service may be determined according to at least one of the current background application, the content keyword of the object to be processed, and the content type of the object to be processed.
In this way, the second application or service can be recommended more intelligently and accurately, reducing user operations and making sharing faster and more convenient.
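One way to combine several of the signals above (sharing record, content keywords, content type) into a single recommendation is a simple weighted score; the weights and data shapes here are assumptions for illustration only:

```python
def rank_candidates(obj, candidates, sharing_record, keyword_map):
    """Rank candidate second applications: past sharing history weighs
    most, then keyword hits in the object's text, then whether the app
    can handle the object's content type at all."""
    def score(app):
        # how often this object type was shared to this app before
        s = 2 * sharing_record.get((obj["type"], app["name"]), 0)
        # keyword hits, e.g. "work plan" -> memo/mailbox apps
        s += sum(1 for kw, apps in keyword_map.items()
                 if kw in obj.get("text", "") and app["name"] in apps)
        # can the app handle this content type at all?
        s += 1 if obj["type"] in app["handles"] else 0
        return s
    return sorted(candidates, key=score, reverse=True)
```

Apps or services explicitly pinned by the user could simply be prepended to the ranked list.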
Optionally, the identifier of the second application or service is at least one of a name, an icon, and an interface thumbnail. Optionally, the second applications or services in the second preset interface may be displayed individually or in combination. For example, an application or service the user shares to frequently may be displayed as an interface thumbnail, with thumbnails of at least one common interface of that application or service (such as the chat interfaces of different contacts in the WeChat application), while an application or service the user rarely uses may be displayed as an icon or a name.
Optionally, the second application or service is from the current device and/or an associated device of the current device. Optionally, the associated device is a device that has an association or binding relationship with the current device, and may be, for example, other devices used by the user, or devices of family and friends of the user, so that data and functions can be shared among different devices, and the use is convenient. Under a scene, a user can share the projection function of the game application of the equipment of the user with the game application on the equipment of the friend, so that the game pictures of the two pieces of equipment can be projected into the same screen for interaction, and the interestingness is increased. Or, in another scenario, the user can share the configuration information of the incoming call do-not-disturb mode on the mobile phone with the notification application of the smart watch and other devices, and the incoming call do-not-disturb mode is set for at least two devices of the user synchronously, so that the mobile phone is very convenient and fast.
Optionally, meeting the first preset condition includes at least one of:
the identifier corresponding to the object to be processed is in contact with or intersected with the identifier of the second application or service;
dragging the identifier corresponding to the object to be processed to the identifier of the second application or service;
dragging the identifier of the second application or service to the identifier corresponding to the object to be processed;
the identifier corresponding to the object to be processed and the identifier of the second application or service are dragged to at least one preset area.
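The contact-or-intersection condition between two on-screen identifiers reduces to an axis-aligned bounding-box test; a sketch, where the `(x, y, w, h)` tuple layout is an assumption:

```python
def rects_intersect(a, b):
    """True if two identifier bounding boxes, given as (x, y, w, h),
    touch or overlap on screen."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # boxes are separated only if one lies entirely to one side of the other
    return (ax <= bx + bw and bx <= ax + aw and
            ay <= by + bh and by <= ay + ah)
```

During a drag, the same test can fire the first preset processing as soon as the dragged identifier touches the target identifier's box.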
Optionally, performing the first preset processing includes at least one of:
outputting the content and/or attribute of the object to be processed;
opening the content and/or attribute of the object to be processed through a second application or service;
and copying, applying, transferring or sharing the object to be processed to a second application or service in whole or in part.
Optionally, when the operation on the identifier of the second application or service meets the preset condition, only the content and/or the attribute of the object to be processed may be output, at this time, the user may check the content and/or the attribute of the object to be processed, determine whether to continue to share, and if so, perform a confirmation operation to complete the sharing.
Optionally, when the operation on the identifier of the second application or service meets the preset condition, the content and/or the attribute of the object to be processed may be opened through the second application or service, and when the content and/or the attribute of the object to be processed is opened through the second application or service, the interface of the second application or service may be entered for display, or a floating window may be used for display without directly entering the interface of the second application or service.
Optionally, when the operation on the identifier of the second application or service meets the preset condition, the object to be processed may also be completely or partially copied, applied, transferred, or shared to the second application or service: for example, applying the WeChat ringtone to QQ, copying pictures or text on a web page into a memo, transferring the files in the current folder to a network drive for saving, transferring the photographing function from application A to application B so as to restrict application A from using the camera, or publishing pictures, music, or videos to a social application such as a microblog or WeChat. In this way, the user can complete a sharing operation quickly anytime and anywhere; for example, information can be quickly added to a memo while browsing a web page, without saving it first and then pasting it into the memo afterwards, which is very convenient.
Optionally, copying, applying, transferring, or sharing all or part of the object to be processed to a second application or service, including:
copying, applying, transferring or sharing all or part of the object to be processed to the second application or service according to at least one of the use data, the configuration information of the second application or service, the configuration information of the object to be processed, the association information between the second applications or services, the preset copying times and the preset transferring times.
Alternatively, the usage data, configuration information of the second application or service, configuration information of the object to be processed may be used to determine the number and/or form of copies, transfers or releases. Examples of the form of distribution include whether to edit before distribution, whether to disclose the content completely or partially when distributing the content, whether to insert the content directly when copying the content or to copy the content as a link or a thumbnail, whether to compress the object when copying or transferring the content, and an insertion position when copying the content. The associated information between the second applications or services is used to determine all applications or services to which the object to be processed is to be copied, transferred or published, and in general, the associated second applications or services are simultaneously displayed in the second preset interface, and the associated information includes at least one of application type association and device type association, for example, a user selects to copy a picture into a memo of the current device, and may also prompt whether to synchronously copy into the reminder item of the current device, or the user copies a piece of voice into the reminder item of the current device, and may prompt whether to synchronously copy into the reminder item of another device of the user. Therefore, the user can copy, transfer or release the object to be processed more conveniently, and the process is more intelligent.
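A hedged sketch of a transfer that honours a preset copy count and a compression setting; the `copies_left` and `compress` configuration keys are invented for illustration and are not named in the disclosure:

```python
def transfer_object(obj, target, config):
    """Copy/transfer an object while honouring two hypothetical settings:
    a remaining copy quota and whether large payloads are compressed."""
    if config.get("copies_left", 1) <= 0:
        return None  # preset copy count exhausted; refuse the transfer
    payload = obj["data"]
    if config.get("compress") and obj["type"] in ("picture", "video"):
        payload = f"compressed({payload})"  # stand-in for real compression
    config["copies_left"] = config.get("copies_left", 1) - 1
    return {"target": target, "payload": payload}
```

The same configuration dictionary could carry the other form-of-distribution choices the text lists (edit before publishing, insert as link or thumbnail, insertion position, and so on).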
Optionally, to make the sharing form of the objects to be processed more flexible, after the step S1 or before the step S3, the method further includes:
s2: in response to the fourth operation, performing second preset processing on the object to be processed, so as to update the object to be processed.
Optionally, the second preset processing includes at least one of splitting object content, selecting object content, combining object content, copying an object, deleting an object, and adding an object.
Optionally, step S2 includes at least one of:
in response to the operation of selecting at least one part of contents of the object to be processed, saving the selected contents as the object to be processed on a first preset interface;
responding to the operation of splitting the content of the object to be processed, and saving the split content as the object to be processed on a first preset interface;
in response to the operation of combining at least part of contents of at least two objects to be processed, saving the combined contents as the objects to be processed on a first preset interface;
responding to the operation of copying at least one part of content of the object to be processed, and saving the copied content as the object to be processed on a first preset interface;
in response to the operation of deleting at least part of the content of the object to be processed, saving the object to be processed after the content is deleted as the object to be processed on a first preset interface;
and responding to the operation of adding the object to be processed, and adding the object to be processed on a first preset interface.
Optionally, part of the content of one audio segment may be extracted, or the segment may be split into multiple audio segments, to obtain a new object to be processed; two images may be spliced into one image, or at least two functions may be combined into an operation list; one file may be copied into two files; part of a selected text segment may be deleted; or objects may be continuously added to the first preset interface.
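The split and combine operations of the second preset processing can be sketched over a simple list of pending objects; this is a minimal illustration with assumed helper names, treating each object's content as a string:

```python
def split_object(objects: list, index: int, pos: int) -> list:
    """Split the object at `index` into two objects at content position `pos`,
    keeping both parts in place on the first preset interface."""
    obj = objects.pop(index)
    objects[index:index] = [obj[:pos], obj[pos:]]
    return objects

def combine_objects(objects: list, i: int, j: int) -> list:
    """Combine the contents of objects i and j into a new object, appended
    after the remaining objects."""
    merged = objects[i] + objects[j]
    return [o for k, o in enumerate(objects) if k not in (i, j)] + [merged]
```

Copying, deleting, and adding objects are plain list operations and are omitted for brevity.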
Optionally, in response to the operation of adding the object to be processed, adding the object to be processed on the first preset interface includes:
responding to the operation of adding the object to be processed, and displaying an interface associated with a source interface of the existing object to be processed and/or at least one associated object associated with the existing object to be processed;
and responding to the condition that the operation on the source interface and/or the associated object meets the second preset condition, and adding the object to be processed corresponding to the operation on the first preset interface.
Optionally, the operation of adding an object to be processed may be triggered by clicking a specific button on the first preset interface, or by long-pressing or re-pressing any region of the first preset interface. Optionally, the interface associated with the source interface of an existing object to be processed may be associated by interface type or by interface source. For example, when the source interface is a workgroup chat interface, the associated interface may be another workgroup chat interface; when the source interface is the settings interface of application A, the associated interface may be the settings interface of a similar application B. In this way, the user can conveniently find required objects in other interfaces, and processing efficiency is improved.
Optionally, displaying at least one associated object associated with the existing object to be processed, including at least one of:
displaying at least one associated object according to the information of the object to be processed;
displaying at least one associated object according to the usage data;
if the information of the object to be processed is matched with the using data, displaying at least one associated object according to the using data;
and if the information of the object to be processed is not matched with the use data, displaying at least one associated object according to the information of the object to be processed.
Optionally, displaying at least one associated object according to the information of the object to be processed includes at least one of the following:
displaying at least one associated object whose function supports and/or is at least partially the same as the function of the object to be processed;
displaying at least one associated object of which the type of the source is the same as the source of the object to be processed;
and displaying at least one associated object of which the associated time and/or place is matched with the operation scene of the object to be processed.
Optionally, at least one associated object whose function supports and/or is at least partially the same as the function of the object to be processed is displayed, where the function includes a software function and/or a hardware function. For example, if the user selects an image enlarging function, an image reducing function or a picture cropping function may be output as an associated object; if the user selects a photographing function, a flash function may be recommended as an associated object.
Optionally, at least one associated object whose source is of the same type as the source of the object to be processed is displayed. Optionally, the source of the object to be processed includes at least one of an interface, an application, and a contact. For example, if the object to be processed is a file from WeChat, a file from QQ or DingTalk may be output as an associated object; if the object to be processed is a background from the system settings interface, a background from the WeChat settings interface may be output as an associated object; if the object to be processed is a file link sent by colleague A, a file link sent by colleague B may be output as an associated object.
Optionally, at least one associated object whose associated time and/or place matches the operation scenario of the object to be processed is displayed. Optionally, the operation scenario includes an operation time and/or place. An object associated with the corresponding time may be selected according to the operation time of the object to be processed; for example, if a user chooses to send a file during meeting time, other files received during the meeting time may be output as associated objects. Alternatively, an object associated with the corresponding place may be selected according to the operation place of the object to be processed; for example, if a user selects an image of himself or herself taken at a park, images of the park's scenery from the network may be output as associated objects.
Optionally, the at least one associated object is displayed according to the usage data, including at least one of:
if the frequency of using the associated object is greater than or equal to a preset frequency threshold value, displaying at least one associated object;
displaying at least one associated object whose function is at least partially the same as that of an associated object in the usage data;
displaying at least one associated object whose source is of the same type as the source of an associated object used in the usage data;
and displaying at least one associated object whose associated time and/or place matches the scenario in which associated objects are used in the usage data.
Optionally, if the frequency of using an associated object is greater than or equal to a preset frequency threshold, at least one associated object is displayed. When the frequency of using an associated object in the usage data is greater than or equal to the preset frequency threshold, it indicates that the user prefers that associated object; in this case, the associated object may be output according to the information of the object to be processed and/or the usage data.
Optionally, at least one associated object whose function is at least partially the same as that of an associated object in the usage data is displayed. For example, if the associated object the user habitually uses is a map function, indicating that the user is normally willing to share his or her location or surroundings, then the current location, nearby food, nearby shopping malls, and the like may be output as associated objects.
Optionally, at least one associated object whose source is of the same type as the source of an associated object used in the usage data is displayed. For example, a user often shares trending articles from a certain technology website; when the user operates an object to be processed, there is a high probability of choosing to share such an article, so trending articles from that technology website or from websites of a related type may be output as associated objects. Optionally, at least one associated object whose associated time and/or place matches the scenario in which associated objects are used in the usage data is output. Optionally, the scenario of using an associated object includes the time and/or place of using the associated object. For example, a user at home often shares movies or music with friends; when the user operates the object to be processed, there is a high probability of choosing to share a movie or music, so high-rated movies, popular movies, and music playlists may be output as associated objects.
Optionally, when associated objects are displayed according to both the information of the object to be processed and the usage data, the objects determined according to the information of the object to be processed may be filtered according to the usage data before being output. For example, if article A from website a, article B from website b, and article C from website c are determined according to the information of the object to be processed, but the usage data indicates that the user does not like sharing articles from website c, then article A from website a and article B from website b are output as associated objects.
Optionally, when associated objects are displayed according to the information of the object to be processed and the usage data: if the information of the object to be processed matches the usage data, at least one associated object is output according to the usage data, so that the output better meets the user's needs; and/or if the information of the object to be processed does not match the usage data, at least one associated object is output according to the information of the object to be processed, so that the output is more accurate.
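The filtering step above — candidates derived from the object's information, then filtered by the usage data — can be sketched as follows; the pair representation and function name are assumptions for illustration:

```python
def filter_by_usage(candidates: list, disliked_sources: set) -> list:
    """candidates: (article, source) pairs determined from the object's
    information; drop those whose source the usage data marks as disliked."""
    return [article for article, source in candidates
            if source not in disliked_sources]
```

For the running example, filtering articles from websites a, b, and c against a usage-data dislike of website c leaves articles A and B as the associated objects.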
Optionally, a second preset condition is met, including at least one of:
dragging the object to a first preset interface;
dragging the object to a preset area different from the source interface and the first preset interface;
selecting an object and flicking it toward the first preset interface;
the trajectory surrounds or covers at least one object.
Optionally, after the source interface and/or associated objects are displayed, the object to be processed corresponding to an operation can be added to the first preset interface by dragging the object to the first preset interface, dragging it to a preset area different from the source interface and the first preset interface, selecting it and flicking it toward the first preset interface, or performing a sliding operation whose track surrounds or covers at least one object. The operation is convenient and fast, and providing the first preset interface to display the objects to be processed reduces misoperation.
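The "track surrounds or covers at least one object" condition can be approximated with a bounding-box test; a real gesture recognizer would be more elaborate, and all names here are illustrative:

```python
def trajectory_covers(points: list, obj_box: tuple) -> bool:
    """points: (x, y) samples of the sliding track;
    obj_box: (left, top, right, bottom) of an object on screen.
    The object counts as covered when its box lies inside the track's box."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    left, top, right, bottom = min(xs), min(ys), max(xs), max(ys)
    o_left, o_top, o_right, o_bottom = obj_box
    return (left <= o_left and top <= o_top and
            right >= o_right and bottom >= o_bottom)
```

Any object for which this returns true would then be added to the first preset interface.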
As shown in fig. 4, which is an operation interface of this embodiment, a picture 11 and a picture 12 are displayed in a web page 10. When a user needs to share the picture 11 and the picture 12 with another application, the user can long-press the picture 11 and the picture 12 respectively, selecting them as objects to be processed; at this time, the picture 11 and the picture 12 are output to a first preset interface 20 for display, so that the user can conveniently confirm whether the right pictures were selected. After that, the user may perform subsequent operations on the picture 11 and/or the picture 12 in the first preset interface 20, or continue to drag new objects to be processed to the first preset interface 20. The user may then long-press the picture 11 and/or the picture 12 in the first preset interface 20 to output identifiers of candidate applications, such as a WeChat identifier, a DingTalk identifier, and a microblog identifier. At this point, the user may drag the picture 11 and/or the picture 12 onto the corresponding application identifier and release it, so that the picture 11 and/or the picture 12 is sent or published to the corresponding application. The whole process is more convenient and less prone to misoperation.
The operation method of the present application outputs, in response to a first operation on at least one object to be processed, the object to be processed on a first preset interface, and performs, in response to a second operation, first preset processing on the object to be processed through at least one second application or service. In this technical scheme, a preset interface is provided to display the object to be processed, and data processing is then performed through operations on the preset interface, which effectively reduces misoperation and improves user experience.
Second embodiment
Fig. 5 is a flowchart illustrating an operation method according to a second embodiment. As shown in fig. 5, the operation method of the present application includes the following steps:
s10: responding to that a first operation on at least one first object and at least one second object accords with a first preset condition, and outputting the first object and/or the second object in a preset form;
s30: and responding to the second operation of the at least one object in the preset form meeting a second preset condition, and performing preset processing through at least one second application or service.
In this way, at least two objects to be processed can be displayed and operated in a specified form; for example, the at least two objects continue to be displayed in a floating manner after being released, so that the user can conveniently check and confirm the selected objects, for example, confirming whether any object is missed, selected in excess, or selected by mistake, before further operating on and processing them. This reduces misoperation and improves operation accuracy and success rate.
Optionally, the first object and/or the second object includes at least one of: pictures, text, video, music, hardware functions, software functions, a screenshot interface of the current application or service, a system screenshot interface, a system mode, and configuration information. Optionally, the pictures, text, videos, and music may be the files themselves or links for opening the corresponding files; the hardware function refers to a hardware capability that can be used by an application or service, including but not limited to at least one of the functions of hardware configured on the intelligent terminal, such as a camera, a microphone, a flash, and a gyroscope; the software function refers to a software capability that can be used by an application or service, including but not limited to at least one of the intelligent terminal's control function over smart home devices and an operation interface of an application or service; the screenshot interface of the current application or service refers to an interface with the screenshot function of the current application or service; the system screenshot interface refers to an interface of the system-level screenshot function; the system mode includes but is not limited to at least one of a screen projection mode, a do-not-disturb mode, and a landscape mode; the configuration information includes but is not limited to at least one of an information alert tone, font size, background configuration, notification configuration, and preference data.
Optionally, meeting the first preset condition includes at least one of: long-pressing or re-pressing for a preset duration; long-pressing or re-pressing and moving a preset distance.
Optionally, the preset form is different from the original state of the first object and the second object. Optionally, the preset form includes at least one of: identifier floating and content floating. For example, the identifier of a file may be a file pattern, and the identifier of a function may be a button pattern. Optionally, the content may be at least one of partial content, entire content, a content summary, and a thumbnail of the object.
Optionally, when the first object and/or the second object are output in the preset form, the first object and the second object move to a first preset area for display, or are displayed above their original positions. In one scenario, when an object is long-pressed or re-pressed for a preset duration, the object moves to the first preset area for display, for example, the top or bottom of the interface; when an object is long-pressed or re-pressed and moved a preset distance, the object is displayed above its original position, i.e., floating at or a preset distance above the original position. Optionally, objects in the preset form may be displayed in a stacked manner to support selection of more objects, and the stacking may be automatic or performed by dragging the objects. Therefore, even if the user accidentally releases an object while dragging it, the object remains displayed in the preset form, that is, it remains selected, which makes it convenient to select at least two objects and to confirm and operate on the selected objects, reducing misoperation.
Optionally, step S30 includes:
in response to a second operation on at least one object in the preset form meeting a second preset condition, outputting a first preset interface, where optionally the first preset interface includes an identifier of at least one second application or service;
and in response to the operation on the identifier of the second application or service meeting a fourth preset condition, performing preset processing through at least one second application or service.
Optionally, a second preset condition is met, including at least one of:
dragging the first object and/or the second object to a second preset area;
pressing the first object and/or the second object for a long time or pressing again;
the first object and the second object are dragged to overlap.
Optionally, while the objects maintain the preset form, the flow of preset processing through at least one second application or service may be triggered by dragging the first object and/or the second object to the second preset area, long-pressing or re-pressing the first object and/or the second object, or dragging the first object and the second object until they overlap.
Optionally, performing preset processing through at least one second application or service includes:
determining a second application or service;
and displaying the identifier of the second application or service on the first preset interface.
Optionally, determining the second application or service comprises at least one of:
determining at least one second application or service according to the sharing record of the first object and/or the second object;
determining at least one second application or service according to the current background application;
determining at least one second application or service according to the usage data;
determining at least one second application or service according to the content keywords of the first object and/or the second object;
determining at least one second application or service according to the content type of the first object and/or the second object;
and determining at least one second application or service according to the operation of adding the application or service.
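One way the signals listed above — sharing records, current background applications, and usage data — could be merged into a single ranked candidate list is a weighted vote. The weights and names below are assumptions for illustration only:

```python
from collections import Counter

def rank_candidates(share_records: list, background_apps: list,
                    usage_counts: dict) -> list:
    """Score each candidate second application from three of the signals
    above and return them ranked from most to least likely target."""
    score = Counter()
    for app in share_records:
        score[app] += 2        # recent sharing targets weigh heavily
    for app in background_apps:
        score[app] += 1        # currently running apps are convenient targets
    for app, count in usage_counts.items():
        score[app] += count    # long-term usage frequency
    return [app for app, _ in score.most_common()]
```

Content keywords, content type, and manually added applications (the remaining signals) could contribute further terms to the same score.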
Optionally, the second application or service is from the current device and/or an associated device of the current device.
Optionally, the identifier of the second application or service is at least one of a name, an icon, and an interface thumbnail.
Optionally, a fourth preset condition is met, including at least one of:
the identifiers corresponding to the first object and the second object are in contact with or intersect with the identifier of the second application or service;
dragging the identifiers corresponding to the first object and the second object to the identifier of the second application or service;
the identifiers corresponding to the first object and the second object and the identifier of the second application or service are dragged to at least one preset area.
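The "contact or intersect" condition between a dragged object identifier and the identifier of the second application or service reduces to an axis-aligned rectangle overlap test, sketched here with assumed box coordinates of the form (left, top, right, bottom):

```python
def identifiers_touch(a: tuple, b: tuple) -> bool:
    """True when the two identifier boxes touch or overlap on screen."""
    a_left, a_top, a_right, a_bottom = a
    b_left, b_top, b_right, b_bottom = b
    return (a_left <= b_right and b_left <= a_right and
            a_top <= b_bottom and b_top <= a_bottom)
```

Boxes that merely share an edge count as touching, matching the "in contact with" wording above.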
Optionally, the above operations trigger execution of the preset processing, where the preset processing includes at least one of:
outputting the content and/or attributes of the first object and the second object;
opening the content and/or attributes of the first object and the second object through a second application or service;
the first object and the second object are copied, applied, transferred or shared, in whole or in part, to a second application or service.
Copying, applying, transferring or sharing a first object, in whole or in part, to a second application or service, comprising:
and copying, applying, transferring or sharing the first object to the second application or service wholly or partially according to at least one of the use data, the configuration information of the second application or service, the configuration information of the first object, the association information between the second application or service, the preset copying times and the preset transferring times.
The flow of the preset processing performed through at least one second application or service is the same as the corresponding steps in the first embodiment, and is not described again here.
Optionally, to make the sharing form of the object more flexible, after the step S10 or before the step S30, the method further includes:
s21: responding to that third operation on the first object and/or the second object accords with a third preset condition, and outputting an editing interface of the corresponding object;
s22: and updating, deleting and/or adding at least one object in a preset form in response to the operation on the editing interface.
Optionally, meeting the third preset condition includes dragging the first object and/or the second object in a preset direction. Optionally, the editing interface may be displayed beside the first object and/or the second object, or may partially occlude them. Optionally, at least two objects may be edited in the editing interface.
Optionally, step S22 includes at least one of:
responding to an operation of selecting a part of contents of a corresponding object, and adding at least one object in a preset form according to the selected contents;
responding to the operation of splitting the content of the corresponding object, and adding at least one object in a preset form according to the split content;
responding to an operation of combining at least one part of contents of the first object and the second object, and adding at least one object in a preset form according to the combined contents;
in response to an operation of deleting a part of the content of the corresponding object, updating the corresponding object according to the deleted content;
and in response to the operation of adding the object, adding at least one object in a preset form.
Optionally, part of the content of one audio segment may be extracted, or the segment may be split into multiple audio segments, to obtain a new object to be processed; two pictures may be spliced into one picture, or at least two functions may be combined into an operation list; one file may be copied into two files; part of a selected text segment may be deleted; or objects to be processed may be continuously added, with the updated and/or added objects displayed in the preset form.
Optionally, in response to the operation of adding the object, adding at least one object in a preset form, including:
in response to the operation of adding the object, displaying at least one associated object associated with the first object and/or the second object;
and responding to the operation of the associated object, and adding at least one object in a preset form.
Optionally, displaying at least one associated object associated with the first object and/or the second object, including at least one of:
displaying at least one associated object according to the information of the first object and/or the second object;
displaying at least one associated object according to the usage data;
if the information of the first object and/or the second object is matched with the use data, displaying at least one associated object according to the use data;
and if the information of the first object and/or the second object does not match with the use data, displaying at least one associated object according to the information of the first object and/or the second object.
Optionally, the displaying at least one associated object according to the information of the first object and/or the second object includes at least one of the following:
displaying at least one associated object whose function mutually supports and/or is at least partially identical to the function of the first object and/or the second object;
displaying at least one associated object of which the source type is the same as the source of the first object and/or the second object;
and displaying at least one associated object of which the associated time and/or place is matched with the operation scene of the first object and/or the second object.
Optionally, the displaying of the at least one associated object according to the usage data includes at least one of:
if the frequency of using the associated object is greater than or equal to a preset frequency threshold value, displaying at least one associated object;
displaying at least one associated object whose function is at least partially the same as that of an associated object in the usage data;
displaying at least one associated object whose source is of the same type as the source of an associated object used in the usage data;
and displaying at least one associated object whose associated time and/or place matches the scenario in which associated objects are used in the usage data.
Optionally, the operation on the associated object includes at least one of:
dragging the associated object to a preset area;
long press of the associated object;
the track encloses or covers at least one associated object.
Therefore, selecting an associated object is convenient and fast, and displaying the selected associated object in the preset form reduces misoperation.
As shown in fig. 6, which is an operation interface of this embodiment, setting buttons for the chat background 11 and the incoming call ringtone 12 are displayed in the WeChat settings interface 10. When a user wants other social applications to use the same chat background and incoming call ringtone as WeChat, the user can simultaneously or sequentially long-press or drag the buttons of the chat background 11 and the incoming call ringtone 12, selecting the chat background 11 as the first object 21 and the incoming call ringtone 12 as the second object 22. At this time, the first object 21 floats beside the setting button of the chat background 11 as a thumbnail, and the second object 22 floats beside the setting button of the incoming call ringtone 12 as content, so as to maintain the selected state. The user may then edit the first object 21 and/or the second object 22, for example changing the size of the image or the duration of the ringtone in the editing interface. Afterwards, by dragging the first object 21 and the second object 22 until they overlap, or by long-pressing either of them, a first preset interface 30 is displayed. At least two recommended applications suitable for setting a chat background and an incoming call ringtone, such as QQ and DingTalk, are displayed by name in the first preset interface 30; the recommended applications may be determined based on at least one of the type or content of the objects selected by the user and the usage data. The user then only needs to drag the first object 21 and/or the second object 22 onto the displayed name of QQ or DingTalk to apply them to the corresponding application. The whole process is convenient to operate and reduces misoperation.
The operation method of the present application outputs the first object and/or the second object in a preset form in response to a first operation on at least one first object and at least one second object meeting a first preset condition, and performs preset processing through at least one second application or service in response to a second operation on at least one object in the preset form meeting a second preset condition. In this way, at least two objects to be processed can be displayed and operated in a specified form; for example, the at least two objects continue to be displayed in a floating manner after being released, so that the user can conveniently check and confirm the selected objects, for example, confirming whether any object is missed, selected in excess, or selected by mistake, before further operating on and processing them. This reduces misoperation and improves operation accuracy and success rate.
Third embodiment
Fig. 7 is a flowchart illustrating an operation method according to a third embodiment. As shown in fig. 7, the operation method of the present application includes the following steps:
s100: responding to a first operation of at least one first object, and displaying a source interface of the first object and at least one first preset interface corresponding to at least one second application or service, wherein optionally the first preset interface comprises the first object;
s300: and executing preset processing through a second application or service in response to the second operation meeting the first preset condition.
By this method, when a processing operation is performed, the source interface of the object to be processed and the application interface for processing it are displayed at the same time, so that the user can conveniently check and confirm the selected object, for example, confirming whether any object is missed, selected in excess, or selected by mistake. The operation is convenient, and misoperation is effectively reduced.
Optionally, the first object comprises at least one of: pictures, text, video, music, hardware functions, software functions, screenshot interfaces of the current application or service, system screenshot interfaces, system modes, configuration information. Optionally, the picture, text, video, music may be a file itself, or may be a link for opening a corresponding file; the hardware function refers to a function of hardware that can be used by an application or service, and includes but is not limited to at least one of functions of hardware configured by an intelligent terminal such as a camera, a microphone, a flash lamp, a gyroscope and the like; the software function refers to a function of software which can be used by the application or the service, and includes but is not limited to at least one of a control function of the intelligent terminal on the intelligent household equipment and an operation interface of the application or the service; the screenshot interface of the current application or service refers to an interface with a screenshot function of the current application or service; the system screenshot interface refers to an interface of a screenshot function at a system level; the system mode includes but is not limited to at least one of a screen projection mode, a do-not-disturb mode and a landscape mode; the configuration information includes, but is not limited to, at least one of an information alert tone, font size, background configuration, notification configuration, preference data.
Optionally, in response to the first operation on the at least one first object, the source interface of the first object and the first preset interface may be displayed simultaneously and automatically; or the second application or service may be selected first, after which both interfaces are displayed simultaneously and automatically; or the source interface of the first object may be displayed first, and the first preset interface added to the display after the second application or service is selected.
Optionally, when the second application or service is selected first, and the source interface and the at least one first preset interface corresponding to the at least one second application or service are then displayed simultaneously and automatically, the step S100 includes:
responding to a first operation on at least one first object, and outputting a second preset interface, wherein optionally, the second preset interface comprises an identifier of at least one second application or service;
and in response to that the third operation on the identifier of the second application or service meets a second preset condition, displaying a source interface of the first object and at least one first preset interface corresponding to at least one second application or service, wherein optionally, the first preset interface comprises the first object.
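The two-step flow above (first operation outputs a second preset interface listing candidate identifiers; a qualifying third operation on an identifier then shows the source interface alongside the first preset interface containing the object) can be sketched as a small state model. This is a minimal, hypothetical illustration; the class and field names are not from the patent, which does not prescribe any implementation:

```python
from dataclasses import dataclass, field

@dataclass
class OperationFlow:
    """Hypothetical model of the step S100 flow (all names illustrative)."""
    candidate_apps: list                       # identifiers of second applications/services
    displayed: list = field(default_factory=list)

    def on_first_operation(self, first_object):
        # A first operation (e.g. a long press on the object) outputs the
        # second preset interface, which lists the candidate identifiers.
        self.pending_object = first_object
        return {"second_preset_interface": self.candidate_apps}

    def on_third_operation(self, app_id, meets_condition):
        # If the third operation on an identifier meets the second preset
        # condition, display the source interface together with the first
        # preset interface of the chosen app, which contains the object.
        if not meets_condition:
            return None
        self.displayed = [
            {"interface": "source", "object": self.pending_object},
            {"interface": f"preset:{app_id}", "object": self.pending_object},
        ]
        return self.displayed
```

For example, long-pressing a file and then confirming on the "WeChat" identifier would yield both the source view and a preset view carrying the same object.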
Optionally, in step S101, in response to the first operation on the at least one first object, a second application or service is determined first, and a second preset interface is output.
Optionally, the first operation and/or the third operation comprise at least one of: a long press, a re-press, a move, a long press and move, a re-press and move, a long press or re-press for a preset duration, a copy, a long press and flick, and a re-press and flick. Optionally, the first operation and/or the third operation further include at least one of the following kinds of security verification information: a movement trajectory and biometric information. Optionally, the movement trajectory may be a preset trajectory such as a circle; when the trajectory of the operation is consistent with the preset trajectory, the security verification passes, and the object to be processed is allowed to be processed. Optionally, the biometric information may be fingerprint information; during a long press or re-press, the fingerprint information is collected for security verification by a fingerprint collection module arranged in the screen, and after the verification passes, the first object is allowed to be processed.
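The movement-trajectory check above can be illustrated as follows. This is a hedged sketch: the point-by-point comparison, the tolerance value, and the function name are all assumptions, since the text only requires that the operation's trajectory be consistent with the preset trajectory:

```python
import math

def trajectory_matches(track, preset_track, tolerance=0.2):
    """Hypothetical consistency check between a drawn movement track and a
    preset track (e.g. a circle): compare corresponding sample points and
    require the mean distance to stay under a tolerance. A real gesture
    recognizer would also resample, scale, and rotate-normalize first."""
    if len(track) != len(preset_track):
        return False
    total = sum(math.dist(p, q) for p, q in zip(track, preset_track))
    return total / len(track) <= tolerance
```

Security verification would pass only when the function returns `True`, after which the object to be processed may be handled.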
Optionally, determining the second application or service comprises at least one of:
determining at least one second application or service according to the sharing record of the first object;
determining at least one second application or service according to the current background application;
determining at least one second application or service according to the usage data;
determining at least one second application or service according to the content keywords of the first object;
determining at least one second application or service according to the content type of the first object;
and determining at least one second application or service according to the operation of adding the application or service.
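One plausible way to combine the signals listed above (sharing record, current background applications, usage data) is a simple weighted ranking. The weights, signal names, and function below are illustrative assumptions, not the patent's method:

```python
def rank_candidate_apps(sharing_record, background_apps, usage_counts, top_n=3):
    """Hypothetical ranking of candidate second applications or services.
    Apps the first object was previously shared to score highest, currently
    running background apps next, and overall usage frequency breaks ties."""
    scores = {}
    for app in sharing_record:
        scores[app] = scores.get(app, 0) + 3          # strongest signal: past shares
    for app in background_apps:
        scores[app] = scores.get(app, 0) + 2          # app is already open
    for app, count in usage_counts.items():
        scores[app] = scores.get(app, 0) + min(count, 5) * 0.1  # capped usage bonus
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [app for app, _ in ranked][:top_n]
```

The top-ranked identifiers would then populate the second preset interface for the user to choose from.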
Optionally, the second application or service is from the current device and/or an associated device of the current device.
Optionally, the identifier of the second application or service is at least one of a name, an icon, and an interface thumbnail.
Optionally, the specific implementation of the step S101 is the same as the implementation process of the related step in the first embodiment, and is not described herein again.
Optionally, displaying a source interface of the first object and at least one first preset interface corresponding to at least one second application or service includes:
determining at least one target interface of a second application or service that matches the first object;
and outputting a first preset interface according to the target interface.
Optionally, when determining the at least one target interface of the second application or service that matches the first object, at least two interfaces may be provided for the user to select from, or automatic matching may be used. Optionally, when automatic matching is used, determining the target interface of the second application or service that matches the first object includes at least one of the following:
determining a target interface of the second application or service matching the first object based on the usage data;
determining a target interface of a second application or service matched with the first object according to the function or type of the first object;
and determining a target interface of the second application or service matched with the first object according to the operation scene of the first object.
Optionally, the target interface of the second application or service that matches the first object is determined according to the usage data. For example, if the user who selects a file usually forwards it to a designated WeChat group, the chat interface of that group is the target interface; or if the user who selects a picture usually publishes it to the friend circle, the friend circle editing interface of WeChat is the target interface.
Optionally, the target interface of the second application or service that matches the first object is determined according to the function or type of the first object. For example, when a ring tone is selected, the setting interface of a social application or a call application can be determined as the corresponding target interface according to the function; or when a picture is selected, the picture processing interface of a picture editing application can be determined as the corresponding target interface according to the type.
Optionally, the target interface of the second application or service that matches the first object is determined according to the operation scene of the first object, where the operation scene includes a time and/or a place. For example, when a file is selected during working hours or at the company, the matching target interface is a work group of Enterprise WeChat.
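The three matching strategies (usage data, function/type of the object, operation scene) could be combined as in the following sketch. The precedence order, the working-hours window, and all interface names are assumptions for illustration only:

```python
def match_target_interface(obj_type, hour, usage_history):
    """Hypothetical target-interface matcher. Prefer the interface most
    used for this object type in history; during working hours (9-18 in
    this sketch) fall back to a work-group chat; otherwise fall back to a
    type-based default interface."""
    type_history = usage_history.get(obj_type, {})
    if type_history:
        # usage-data rule: most frequently used target for this object type
        return max(type_history, key=type_history.get)
    if 9 <= hour < 18:
        # operation-scene rule: working hours imply a work-group interface
        return "enterprise_wechat_work_group"
    # function/type rule: default interface per object type
    defaults = {"picture": "picture_editor", "ringtone": "sound_settings"}
    return defaults.get(obj_type, "share_sheet")
```

A file selected at 10 a.m. with no history would match the work group, while a picture selected in the evening would match the picture-processing interface.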
Alternatively, the first preset interface may be the portion of the target interface associated with the first object, such as the add-picture region of the friend circle editing interface of WeChat. Optionally, the first preset interface may also be an overall thumbnail of the target interface. Thus, when an object is selected, it can be displayed in the first preset interface, giving the user preliminary information to confirm whether the selected object is wrong.
Optionally, the meeting of the first preset condition includes at least one of clicking a preset button, long-pressing the first preset interface, and re-pressing the first preset interface.
Optionally, the executing the preset processing includes at least one of:
outputting the content and/or attributes of the first object;
opening the content and/or attributes of the first object through a second application or service;
and copying, applying, transferring, or sharing the first object, in whole or in part, to a second application or service.
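The alternatives for the preset processing can be pictured as a small dispatcher; the action names and return shapes below are hypothetical:

```python
def execute_preset_processing(action, first_object, second_app):
    """Hypothetical dispatcher over the kinds of preset processing listed
    above; each branch returns a description of what was done."""
    if action == "output":
        # output the content and/or attributes of the first object
        return {"shown": {"content": first_object["content"],
                          "attributes": first_object.get("attributes", {})}}
    if action == "open":
        # open the content through the second application or service
        return {"opened_with": second_app, "content": first_object["content"]}
    if action in ("copy", "apply", "transfer", "share"):
        # hand the object (or part of it) to the second application or service
        return {"action": action, "target": second_app,
                "payload": first_object["content"]}
    raise ValueError(f"unknown preset processing: {action}")
```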
The flow of executing the preset processing is the same as the related steps in the first embodiment, and is not described herein again.
Optionally, in order to make the operation process more flexible, before the step S300, at least one of the following is further included:
in response to a fourth operation on a second object in the source interface, adding the second object to the first preset interface;
in response to a fifth operation on the first object in the first preset interface, removing the first object from the first preset interface;
responding to the closing of the first preset interface, and displaying the source interface in an enlarged manner;
and responding to the closing of the source interface, and outputting an original interface of the second application or service, which corresponds to the first preset interface, wherein optionally, the original interface comprises an object in the first preset interface.
Optionally, objects may be continuously added to the first preset interface, or added objects may be deleted. After the first preset interface is closed, the source interface is displayed in an enlarged manner, and the user can restart or end the operation. After the source interface is closed, the original interface, namely the target interface, is entered; at this time the original interface includes the objects from the first preset interface, so related operations, such as continuing to chat or editing the file, can be performed there.
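The add/remove/close behavior of the first preset interface described above can be modeled as follows; the class and method names are illustrative assumptions:

```python
class PresetInterface:
    """Hypothetical model of the first preset interface: objects can be
    added (fourth operation) or removed (fifth operation); closing the
    source interface opens the original (target) interface carrying the
    current objects, so the user can continue chatting or editing there."""

    def __init__(self, objects):
        self.objects = list(objects)

    def add(self, obj):
        # fourth operation: add a second object from the source interface
        if obj not in self.objects:
            self.objects.append(obj)

    def remove(self, obj):
        # fifth operation: remove an object from the first preset interface
        if obj in self.objects:
            self.objects.remove(obj)

    def close_source(self):
        # closing the source interface enters the original interface,
        # which includes the objects currently in the preset interface
        return {"original_interface": True, "objects": list(self.objects)}
```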
As shown in fig. 8, in an operation interface of this embodiment, a received file 1 is displayed in a chat interface 10 of WeChat. When the user needs to forward file 1, the user long-presses file 1 to select it as the first object 11. At this point, on the basis of the chat interface 10, a preset interface 21 of WeChat and/or a preset interface 22 of a DingTalk application are further displayed. The preset interface 21 and the preset interface 22 are both interfaces related to the first object 11 and are displayed as thumbnails; they are, for example, two different work-group chat interfaces. Both preset interfaces display file 1, and the user may choose to continue adding a file 2 to the preset interface 21 and/or the preset interface 22. After confirming that the selected files are correct, the user can click the confirmation button in the preset interface 21 and/or the preset interface 22, whereupon the corresponding files are sent from those interfaces. The whole process is more intuitive and convenient, and the probability of misoperation is low.
In the operation method, in response to a first operation on at least one first object, a source interface of the first object and at least one first preset interface corresponding to at least one second application or service are displayed, where optionally the first preset interface includes the first object; and in response to a second operation meeting a first preset condition, preset processing is executed through the second application or service. With this method, when a processing operation is performed, the source interface of the object to be processed and the application interface used to process that object are displayed at the same time, so the user can conveniently check and confirm the selected objects, for example, to confirm whether any object was missed, selected more than once, or selected by mistake. The operation is convenient, and misoperation is effectively reduced.
The embodiment of the present application further provides an intelligent terminal, including a memory and a processor, wherein the memory stores an operating program, and the operating program, when executed by the processor, implements the steps of the operation method in any one of the above embodiments.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the operation method in any one of the above embodiments are implemented.
In the embodiments of the intelligent terminal and the computer-readable storage medium provided in the present application, all technical features of any one of the above-described embodiments of the operation method may be included, and the expanding and explaining contents of the specification are basically the same as those of the above-described embodiments of the method, and are not described herein again.
Embodiments of the present application further provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar terms, technical solutions, and/or application scenario descriptions are generally described in detail only at their first occurrence. For brevity, the detailed description is generally not repeated later; for terms, technical solutions, and/or application scenario descriptions that are not detailed on later occurrences, reference may be made to the earlier related description.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
All possible combinations of the technical features in the embodiments are not described in the present application for the sake of brevity, but should be considered as the scope of the present application as long as there is no contradiction between the combinations of the technical features.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product comprises one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that includes one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a storage disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (14)

1. A method of operation, comprising the steps of:
s1: responding to a first operation on at least one object to be processed, and outputting the object to be processed on a first preset interface;
s3: responding to a second operation, and performing first preset processing on the object to be processed in the first preset interface through at least one second application or service;
the method further comprises the steps of:
s2: responding to a fourth operation, and performing second preset processing on the object to be processed in the first preset interface to update the object to be processed in the first preset interface;
the step of S2 includes:
responding to the fourth operation of adding the object to be processed, and displaying an interface associated with a source interface of the existing object to be processed in the first preset interface and/or at least one associated object associated with the existing object to be processed in the first preset interface;
and responding to the fact that the operation on the source interface and/or the associated object meets a second preset condition, and adding the corresponding object in the source interface and/or the associated object in the first preset interface as an object to be processed.
2. The method according to claim 1, wherein the step of S1 includes:
s11: in response to a first operation on at least one object to be processed, acquiring object information of the object to be processed;
s12: and outputting the object to be processed on a first preset interface according to the object information.
3. The method of claim 2, wherein the step of S12 includes at least one of:
outputting the object to be processed on a first preset interface according to the output form corresponding to the object information of the object to be processed;
when the number of the objects to be processed is at least two, outputting the associated objects to be processed in the same area in the first preset interface according to the object information;
and when the number of the objects to be processed is at least two, outputting the objects to be processed in the first preset interface according to a preset sequence according to the matching result of the object information and the use data.
4. The method according to claim 3, wherein the outputting the associated object to be processed in the same area of the first preset interface according to the object information includes at least one of:
tiling and displaying the objects to be processed in each area;
adjusting the display ratio of each area according to the number of the objects to be processed in each area;
adjusting the display ratio of each area according to the type of the object to be processed in each area;
the different regions are displayed side by side or at least partially stacked.
5. The method according to any one of claims 1 to 4, wherein the step S3 includes:
s31: responding to a second operation, and outputting a second preset interface, wherein the second preset interface comprises an identifier of at least one second application or service;
s32: and in response to the third operation on the identifier of the second application or service meeting a first preset condition, performing the first preset processing through the second application or service.
6. The method of claim 5, wherein the performing the first predetermined process comprises at least one of:
outputting the content and/or attribute of the corresponding object to be processed;
opening the content and/or attribute of the corresponding object to be processed through the second application or service;
and copying, applying, transferring or sharing the corresponding object to be processed to the second application or service.
7. The method according to claim 1, wherein meeting the second preset condition comprises at least one of the following:
dragging the object to the first preset interface;
dragging the object to a preset area different from the source interface and the first preset interface;
selecting an object and flicking it toward the first preset interface;
a movement trajectory surrounding or covering at least one object.
8. A method of operation, comprising the steps of:
s10: in response to a first operation on at least one first object and at least one second object meeting a first preset condition, outputting the first object and the second object in a preset form, wherein the preset form is suspension and is continuously maintained after the first operation disappears;
s21: in response to a third operation on the first object and/or the second object in the preset form meeting a third preset condition, outputting an editing interface of the corresponding object;
s22: in response to an operation on the editing interface, updating, deleting, and/or adding at least one object in the preset form;
s30: and in response to a second operation on the at least one object in the preset form meeting a second preset condition, performing preset processing through at least one second application or service.
9. The method of claim 8, wherein the preset form comprises at least one of: mark suspension and content suspension;
and/or when the first object and/or the second object are output in a preset form, the first object and/or the second object move to a first preset area to be displayed or are displayed above the original position.
10. The method of claim 8, wherein the step of S22 includes at least one of:
responding to an operation of selecting a part of content of a corresponding object, and adding at least one object in the preset form according to the selected content;
responding to the operation of splitting the content of the corresponding object, and adding at least one object in the preset form according to the split content;
responding to an operation of combining at least part of contents of the first object and the second object, and adding at least one object in the preset form according to the combined contents;
in response to an operation of deleting a part of the content of the corresponding object, updating the corresponding object according to the deleted content;
and responding to the operation of adding the object, and adding at least one object in the preset form.
11. The method according to any one of claims 8 to 10, wherein meeting the second preset condition includes at least one of:
dragging the first object and/or the second object to a second preset area;
long-pressing or re-pressing the first object and/or the second object;
dragging the first object and the second object so that they overlap.
12. The method according to any one of claims 8 to 10, wherein the step S30 includes:
responding to that a second operation on at least one object in the preset form meets a second preset condition, and outputting a first preset interface, wherein the first preset interface comprises an identifier of at least one second application or service;
and in response to the operation on the identifier of the second application or service meeting a fourth preset condition, performing preset processing through the at least one second application or service.
13. An intelligent terminal, characterized in that the intelligent terminal comprises a memory and a processor, wherein the memory has stored thereon an operating program which, when executed by the processor, carries out the steps of the operation method according to any one of claims 1 to 12.
14. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the operating method according to any one of claims 1 to 12.
CN202210338700.6A 2022-03-07 2022-04-01 Operation method, intelligent terminal and storage medium Active CN114510166B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210338700.6A CN114510166B (en) 2022-04-01 2022-04-01 Operation method, intelligent terminal and storage medium
PCT/CN2023/078276 WO2023169236A1 (en) 2022-03-07 2023-02-24 Operation method, intelligent terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210338700.6A CN114510166B (en) 2022-04-01 2022-04-01 Operation method, intelligent terminal and storage medium

Publications (2)

Publication Number Publication Date
CN114510166A CN114510166A (en) 2022-05-17
CN114510166B true CN114510166B (en) 2022-08-26

Family

ID=81555449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210338700.6A Active CN114510166B (en) 2022-03-07 2022-04-01 Operation method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114510166B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023169236A1 (en) * 2022-03-07 2023-09-14 深圳传音控股股份有限公司 Operation method, intelligent terminal, and storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8788947B2 (en) * 2011-06-14 2014-07-22 LogMeIn, Inc. Object transfer method using gesture-based computing device
CN104737197B (en) * 2012-06-29 2019-12-06 高通股份有限公司 Sharing user interface objects via a shared space
CN105430168A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Mobile terminal and file sharing method
CN108762954B (en) * 2018-05-29 2021-11-02 维沃移动通信有限公司 Object sharing method and mobile terminal
CN109271078A (en) * 2018-09-17 2019-01-25 深圳市泰衡诺科技有限公司 Content share method, terminal device and storage medium
EP4373205A2 (en) * 2018-09-30 2024-05-22 Huawei Technologies Co., Ltd. File transfer method and electronic device
CN109600303B (en) * 2018-12-04 2021-08-17 北京小米移动软件有限公司 Content sharing method and device and storage medium
CN110865744B (en) * 2019-09-30 2021-12-14 华为技术有限公司 Split-screen display method and electronic equipment
CN111290675B (en) * 2020-03-02 2023-02-17 Oppo广东移动通信有限公司 Screenshot picture sharing method and device, terminal and storage medium
CN111694975B (en) * 2020-06-09 2023-08-11 维沃移动通信(杭州)有限公司 Image display method, device, electronic equipment and readable storage medium
CN113934330A (en) * 2020-06-29 2022-01-14 华为技术有限公司 Screen capturing method and electronic equipment
CN111857468A (en) * 2020-07-01 2020-10-30 Oppo广东移动通信有限公司 Content sharing method and device, equipment and storage medium
CN111858522A (en) * 2020-08-06 2020-10-30 Oppo广东移动通信有限公司 File sharing method and device, terminal and storage medium
US20230367464A1 (en) * 2020-09-16 2023-11-16 Huawei Technologies Co., Ltd. Multi-Application Interaction Method
CN112416223A (en) * 2020-11-17 2021-02-26 深圳传音控股股份有限公司 Display method, electronic device and readable storage medium
CN112486385A (en) * 2020-11-30 2021-03-12 维沃移动通信有限公司 File sharing method and device, electronic equipment and readable storage medium
CN113055525A (en) * 2021-03-30 2021-06-29 维沃移动通信有限公司 File sharing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114510166A (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN107390972B (en) Terminal screen recording method and device and computer readable storage medium
CN114327189B (en) Operation method, intelligent terminal and storage medium
CN114371803B (en) Operation method, intelligent terminal and storage medium
CN109697008B (en) Content sharing method, terminal and computer readable storage medium
CN113487705A (en) Image annotation method, terminal and storage medium
CN113194197A (en) Interaction method, terminal and storage medium
CN114510166B (en) Operation method, intelligent terminal and storage medium
CN113342234A (en) Display control method, mobile terminal and storage medium
CN114594886B (en) Operation method, intelligent terminal and storage medium
CN114595007A (en) Operation method, intelligent terminal and storage medium
CN114691276B (en) Application processing method, intelligent terminal and storage medium
WO2023108444A1 (en) Image processing method, intelligent terminal, and storage medium
CN113835586A (en) Icon processing method, intelligent terminal and storage medium
WO2023169236A1 (en) Operation method, intelligent terminal, and storage medium
WO2023050910A1 (en) Icon display method, intelligent terminal and storage medium
CN114327184A (en) Data management method, intelligent terminal and storage medium
CN115480681A (en) Control method, intelligent terminal and storage medium
CN115237296A (en) Message notification method, intelligent terminal and storage medium
CN113326537A (en) Data processing method, terminal device and storage medium
CN116302280A (en) Prompt method, intelligent terminal and storage medium
CN114003751A (en) Information processing method, intelligent terminal and storage medium
CN117687593A (en) Interaction method, intelligent terminal and storage medium
CN113568532A (en) Display control method, mobile terminal and storage medium
CN113448467A (en) Control method, mobile terminal and storage medium
CN115022270A (en) Image processing method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant