CN115696306A - Sharing method, intelligent terminal and storage medium

Info

Publication number
CN115696306A
Authority
CN
China
Prior art keywords
content, shared, sharing, application, determining
Legal status
Pending
Application number
CN202211185979.5A
Other languages
Chinese (zh)
Inventor
王新东
Current Assignee
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd
Priority to CN202211185979.5A
Publication of CN115696306A

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The application provides a sharing method, an intelligent terminal and a storage medium. The sharing method can be applied to the intelligent terminal and comprises the following steps: determining or generating at least one content to be shared based on a first trigger action and/or system information; determining or generating target sharing content and/or a sharing path according to a second trigger action on the content to be shared; and executing a sharing action according to the sharing path and/or the target sharing content. In this way, after the content to be shared is obtained based on the first trigger action and/or the system information, the target sharing content and/or the sharing path are determined according to the second trigger action, and the sharing action is then executed based on the sharing path and/or the target sharing content, so that sharing efficiency is improved and user experience is thereby improved.

Description

Sharing method, intelligent terminal and storage medium
Technical Field
The application relates to the technical field of terminals, in particular to a sharing method, an intelligent terminal and a storage medium.
Background
In the related art, in order to share content between different applications on the same terminal page, content of a source (drag-out) application needs to be selected, a target (drag-in) application needs to be determined, and the related content is then pasted into the target application.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: when a plurality of applications are displayed on the display interface of a terminal, after the content to be shared is determined, the content must be copied and then pasted into the input interface of the application to be shared, or the content must first be stored and then searched for in its storage location before it can be shared, which makes the sharing operation cumbersome and reduces sharing efficiency.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides a sharing method, an intelligent terminal and a storage medium, so that a user can quickly perform a sharing action on content to be shared with a simple and convenient operation.
The application provides a sharing method, which can be applied to an intelligent terminal and comprises the following steps:
s110: determining or generating at least one content to be shared based on the first trigger action and/or the system information;
s120: determining or generating target sharing content and/or a sharing path according to a second trigger action on the content to be shared;
s130: and executing a sharing action according to the sharing path and/or the target sharing content.
Optionally, the step S110 includes at least one of:
detecting a content dragging operation and/or a content selecting operation aiming at a source application, and determining or generating the content to be shared according to the content dragging operation and/or the content selecting operation;
determining current time according to the system information, determining a target file of which the time interval with the current time is smaller than a preset time interval according to the editing time of at least one file, and determining or generating the target file as the content to be shared;
determining current positioning information and current time according to the system information, and determining or generating the content to be shared according to the current positioning information and the current time;
and determining a current service scene according to the system information, acquiring files meeting the current service scene, and determining or generating the files meeting the current service scene as the content to be shared.
Optionally, the step S120 includes:
determining or generating target shared content, a stopping position and/or stopping duration corresponding to the stopping position according to the second trigger action;
and determining or generating the sharing path according to the stopping position and/or the stopping duration.
Optionally, the step of determining or generating the sharing path according to the stopping position and/or the stopping duration includes:
when the staying time length is greater than or equal to a preset time length, determining or generating an application to be shared corresponding to the staying position;
and determining or generating the sharing path according to the application to be shared.
Optionally, the manner of determining or generating the sharing path includes at least one of:
acquiring a current business scene, determining or generating an application to be shared according to the current business scene, and determining or generating the sharing path according to the application to be shared;
obtaining historical sharing data, determining or generating an application to be shared according to the historical sharing data, and determining or generating the sharing path according to the application to be shared.
Optionally, the method further comprises:
when the staying position corresponding to the second trigger action is detected to be located in a first preset area of a first screen, the staying position is switched to a second preset area of a second screen, the staying position and/or the staying duration of the second trigger action in the second preset area are/is obtained, and the sharing path is determined or generated according to the staying position and/or the staying duration; and/or,
and when the stop position corresponding to the second trigger action is detected to be located in a third preset area of the display screen of the intelligent terminal, the target sharing content is sent to the target terminal.
Optionally, the S130 step includes at least one of:
sending the target sharing content to an application to be shared corresponding to the sharing path;
converting the target sharing content into target sharing content in a preset format, and sending the target sharing content in the preset format to an application to be shared corresponding to the sharing path;
and encapsulating the target sharing content in an intent object, and calling an activity interface of the application to be shared so as to transmit the intent object to the application to be shared through the activity interface.
The application also provides a sharing method, which can be applied to the intelligent terminal and comprises the following steps:
s210: determining or generating a sharing path based on the first trigger action;
s220: determining or generating at least one content to be shared according to the sharing path;
s230: determining or generating target sharing content according to a second trigger action on the content to be shared;
s240: and executing a sharing action according to the sharing path and the target sharing content.
Optionally, the S220 includes:
determining or generating an application to be shared according to the sharing path;
acquiring at least one content matched with the attribute information according to the attribute information of the application to be shared;
and determining at least one content matched with the attribute information as the content to be shared.
The application also provides an intelligent terminal, including: the device comprises a memory and a processor, wherein the memory stores a sharing program, and the sharing program realizes the steps of any one of the sharing methods when being executed by the processor.
The present application further provides a storage medium storing a computer program, which when executed by a processor implements the steps of any of the sharing methods described above.
As described above, the sharing method of the present application, which can be applied to an intelligent terminal, includes the steps of: s110: determining or generating at least one content to be shared based on the first trigger action and/or the system information; s120: determining or generating target sharing content and/or a sharing path according to a second trigger action on the content to be shared; s130: and executing a sharing action according to the sharing path and/or the target sharing content. Through the above technical solution, quick sharing of the content to be shared can be realized, the problem of low data sharing efficiency caused by the need to select a drag-in application from a plurality of applications before the content to be shared can be shared to it is solved, and user experience is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; it is obvious that other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of an intelligent terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a sharing method according to a first embodiment;
fig. 4 is a detailed flowchart of step S120 of the sharing method according to the first embodiment;
fig. 5 is a schematic diagram of a sharing interface and a target interface of the sharing method according to the first embodiment;
fig. 6 is a detailed flowchart of step S1202 of the sharing method according to the first embodiment;
fig. 7 is a schematic specific flowchart of a sharing method according to a first embodiment;
fig. 8 is a flowchart illustrating a sharing method according to a second embodiment;
fig. 9 is a flowchart illustrating a sharing method according to a third embodiment;
fig. 10 is a detailed flowchart of step S220 of the sharing method according to the third embodiment.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element. Further, components, features, elements, and/or steps that are similarly named in various embodiments of the application may or may not have the same meaning, unless otherwise specified by their interpretation in the embodiment or by context with further embodiments.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to determining," depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," "including at least one of the following," and the like, as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; as a further example, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition will occur only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S110 and S120 are used herein for the purpose of describing the corresponding contents more clearly and briefly, and do not constitute a substantial limitation on the sequence; those skilled in the art may, in specific implementations, perform S120 first and then S110, and such implementations remain within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of description of the present application, and have no specific meaning in themselves. Thus, "module", "component" or "unit" may be used mixedly.
The intelligent terminal may be implemented in various forms. For example, the smart terminal described in the present application may include smart terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
While the following description will be given by way of example of a smart terminal, those skilled in the art will appreciate that the configuration according to the embodiments of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of an intelligent terminal for implementing various embodiments of the present application, the intelligent terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the intelligent terminal structure shown in fig. 1 does not constitute a limitation of the intelligent terminal, and that the intelligent terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used.
The following specifically describes each component of the intelligent terminal with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000 ), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), TDD-LTE (Time Division duplex-Long Term Evolution ), 5G (Global System for Mobile communications, or the like).
WiFi belongs to short-distance wireless transmission technology, and the intelligent terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the smart terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the smart terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the smart terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and may process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The smart terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor, the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the smart terminal 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the intelligent terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the smart terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the smart terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device can be connected to the intelligent terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the smart terminal 100 or may be used to transmit data between the smart terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the intelligent terminal, connects various parts of the entire intelligent terminal using various interfaces and lines, performs various functions of the intelligent terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby integrally monitoring the intelligent terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The intelligent terminal 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Although not shown in fig. 1, the smart terminal 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the intelligent terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present disclosure. The communication network system is an LTE system of universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are communicatively connected in sequence.
Optionally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB2021 may be connected with other enodebs 2022 through a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 is used to provide some registers to manage functions such as a home location register (not shown) and holds some user-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW2034; the PGW2035 may provide IP address assignment for the UE201 and other functions; and the PCRF2036 is a policy and charging control policy decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, 5G, and future new network systems (e.g. 6G), and the like.
Based on the above intelligent terminal hardware structure and communication network system, various embodiments of the present application are provided.
First embodiment
Referring to fig. 3, fig. 3 shows a first embodiment of a sharing method according to an embodiment of the present application, the method including the steps of:
s110: determining or generating at least one content to be shared based on the first trigger action and/or the system information;
s120: determining or generating target sharing content and/or a sharing path according to a second trigger action on the content to be shared;
s130: and executing a sharing action according to the sharing path and/or the target sharing content.
In the embodiment of the application, the sharing method can be applied to the intelligent terminal, and the content to be shared includes, but is not limited to, pictures, text content, mails, documents, voice information and link information. Optionally, the S110 step includes at least one of:
detecting a content dragging operation and/or a content selecting operation aiming at a source application, and determining or generating the content to be shared according to the content dragging operation and/or the content selecting operation;
determining current time according to the system information, determining a target file of which the time interval with the current time is smaller than a preset time interval according to the editing time of at least one file, and determining or generating the content to be shared according to the target file;
determining current positioning information and current time according to the system information, and determining or generating the content to be shared according to the current positioning information and the current time;
determining a current service scene according to the system information, acquiring a file meeting the current service scene, and determining or generating the content to be shared according to the file meeting the current service scene.
Optionally, the intelligent terminal is provided with a drag monitoring interface, which is used for monitoring whether content is dragged to a preset area and for taking the content in the preset area as the content to be shared. Optionally, the content to be shared is determined according to the first trigger action, which is used to indicate the content to be shared. The first trigger action includes a content dragging operation, and the content dragging operation includes a selection operation on the content to be shared, a dragging operation continuous with the selection operation, and a release operation continuous with the dragging operation; that is, after touching the content, the user triggers the selection operation on the content, and whether the content is dragged to the preset area is then determined through the drag track of the dragging operation and the release operation. When the drag monitoring interface detects that the content is dragged to the preset area, the content is taken as the content to be shared. The preset area may be a sidebar area of the display screen of the intelligent terminal, and when the sidebar is called up by the user, the content to be shared is displayed in the sidebar area. Optionally, in order to reduce the display space, when the content to be shared is displayed it is converted into corresponding prompt information, which may be a thumbnail or a floating (hover) icon generated after conversion of the content to be shared, so that the user can identify the corresponding content to be shared based on the prompt information. Optionally, the preset area may also be an upper area or a lower area of the display screen, which is not limited herein.
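Illustratively, on an Android-based intelligent terminal the drag monitoring described above could be sketched with the platform's drag-and-drop listener. The following Kotlin sketch is illustrative only; the view name sidebarArea and the callback onContentToBeShared are hypothetical names, not part of the claimed method.

```kotlin
// Sketch only: monitor whether content is dragged into a preset (sidebar) area and,
// if so, treat the dropped content as content to be shared.
import android.view.DragEvent
import android.view.View

fun installDragListener(sidebarArea: View, onContentToBeShared: (CharSequence) -> Unit) {
    sidebarArea.setOnDragListener { _, event ->
        when (event.action) {
            DragEvent.ACTION_DRAG_STARTED -> true   // accept drags over the preset area
            DragEvent.ACTION_DROP -> {
                // The released content becomes the content to be shared.
                val item = event.clipData?.getItemAt(0)
                item?.text?.let(onContentToBeShared)
                true
            }
            else -> true
        }
    }
}
```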
Optionally, in actual operation, when it is detected that the user triggers a content selection operation, a sharing interface can be displayed in the preset area to indicate the position of the preset area. The user drags the content to the sharing interface through the content dragging operation; the sharing interface is superimposed on the preset area, the content to be shared is displayed in the sharing interface, and the user can thus see the selected content to be shared on the sharing interface.
Optionally, the content dragging operation may be implemented as dragging operation of at least one content to be shared, and when the dragging monitoring interface monitors that at least two different contents are dragged to a preset area within a preset time period, the dragging monitoring interface takes the at least two contents within the preset time period as the content to be shared. Optionally, the content dragging operation may be implemented by multiple dragging operations, or may be implemented by dragging a plurality of contents to be shared at one time in one dragging operation, so as to improve the sharing efficiency.
Optionally, the first trigger action further includes a content selection operation, which is implemented by a long-press operation of the user. When it is detected that the user performs a long-press operation on content, it is detected whether the long-press operation meets a preset condition, where the preset condition includes at least one of: the long-press duration of the long-press operation is greater than or equal to a preset duration, and the content corresponding to the long-press operation matches preset content. When the long-press operation meets the preset condition, it is determined that the content selection operation is triggered, and the content corresponding to the long-press operation is taken as the content to be shared.
Optionally, after editing a file, a user may need to share the file that has just been completed; or, when using an application of the intelligent terminal, the user may browse an interesting picture, download it, and then want to share the picture that has just been downloaded. In view of this, the content to be shared can be determined according to the system information: the system information comprises the current time, the content that the user needs to share is predicted according to the current time, and the predicted content is used as the content to be shared.
Optionally, the current time is determined according to the system information, the editing time of at least one file is compared with the current time to determine the time interval between the editing time of each file and the current time, the time intervals corresponding to the files are compared, a file whose time interval is smaller than a preset time interval is taken as the target file, and the content to be shared is determined or generated according to the target file. Optionally, the file includes, but is not limited to, text content, a picture, a mail, voice information and link information. The editing time is determined according to the time point of an editing operation on the file, where the editing operation includes a file downloading operation, a file browsing operation, a file modifying operation, a file receiving operation, a file sending operation, and the like, so that the editing time may be the most recent modification time point of the file, a file receiving time point, a file downloading time point or a file browsing time point. The preset time interval may be the smallest time interval among the time intervals of the other files, so that the file with the smallest time interval is taken as the target file, or it may be set in a user-defined manner, for example one hour, so that a file edited within the last hour is taken as the target file. Optionally, the manner of determining or generating the content to be shared according to the target file includes directly taking the target file as the content to be shared, or taking the target file after a preset operation as the content to be shared.
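By way of a simplified sketch (the candidate file list and the one-hour preset interval are illustrative assumptions), the target files can be selected by comparing each file's last edit time with the current time:

```kotlin
// Sketch only: pick files whose edit time is within a preset interval of the current time.
import java.io.File

fun selectTargetFiles(candidates: List<File>, presetIntervalMillis: Long = 60 * 60 * 1000L): List<File> {
    val now = System.currentTimeMillis()  // current time obtained from the system information
    return candidates.filter { now - it.lastModified() < presetIntervalMillis }
}
```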
Exemplarily, if the target file is voice information, the preset operation is a voice-to-text operation; after the voice-to-text operation is performed on the target file, the text content of the target file is determined or generated, and the text content is used as the content to be shared. If the target file is text content, the preset operation may be a text-to-picture operation; after the text-to-picture operation is performed on the target file, a picture of the target file is determined or generated and used as the target content. For example, if the text content is "smile", it is converted into a corresponding smile picture, which adds interest and improves the visualization of the text content. The preset operation may also be a text abstract extraction operation: when the target file is text content, the text abstract extraction operation is performed on the target file to determine or generate abstract information of the target file, and the abstract information is used as the content to be shared.
Optionally, the preset operation includes, but is not limited to, the above operations, and may further include a picture compression operation. For example, when the target file is a picture, a compressed image of the target file is generated by a bicubic compression algorithm and used as the content to be shared, so as to reduce the bandwidth consumed by the sharing action; subsequently, after the shared content is received at the receiving end, the target file is restored by super-resolution using a super-resolution algorithm to obtain the original picture. Optionally, the preset operation further includes a file compression operation: the target file is converted into a corresponding compressed package through the file compression operation, and the compressed package is used as the content to be shared.
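Illustratively, a compressed image of a picture-type target file could be produced before sharing. The sketch below uses Android's Bitmap.createScaledBitmap (bilinear filtering) as a simple stand-in for the bicubic algorithm mentioned above, and the 0.5 scale factor is an assumption:

```kotlin
// Sketch only: downscale a picture so the sharing action consumes less bandwidth.
// Bilinear scaling stands in here for the bicubic compression described in the text.
import android.graphics.Bitmap

fun compressForSharing(original: Bitmap, scale: Float = 0.5f): Bitmap =
    Bitmap.createScaledBitmap(
        original,
        (original.width * scale).toInt().coerceAtLeast(1),
        (original.height * scale).toInt().coerceAtLeast(1),
        true  // filter = true enables bilinear filtering
    )
```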
Optionally, when the user is currently located at a workplace, the content to be shared may be content related to work; when the user is located at a tourist attraction, the content to be shared may be a recently taken landscape picture or the like.
Optionally, the system information includes current positioning information and current time, and the content to be shared is determined or generated according to the current positioning information and the current time. Optionally, a first file with a time interval smaller than a preset time interval with the current time is determined based on the editing time of at least one file, then the first file matched with the positioning information is determined according to the positioning information, the first file matched with the positioning information is used as a target file, the content to be shared is determined or generated according to the target file, and the first file matched with the positioning information may be determined by obtaining an associated address of at least one first file, and when the associated address is matched with the positioning information, the first file is determined to be matched with the positioning information.
Illustratively, a picture stored in the intelligent terminal is associated with a shooting position, the shooting position is used as the associated address of the picture, and a picture whose associated address is consistent with the current positioning information is used as the content to be shared. As another example, the attribute information of a document in the intelligent terminal includes a storage address, and the storage address is used as the associated address of the document to indicate whether the document is work-related content; when it is determined according to the positioning information that the user is at a workplace, whether a document is work-related content is determined according to the associated address of at least one document, and when the document is determined to be work-related content, the content to be shared is determined or generated according to the document.
Optionally, when the current service scene is that the user is using a game application, the content that the user needs to share may be a game invitation link; when the current service scene is that the user is shopping in a shopping application, the content that the user needs to share may be commodity content. After the current business scene is determined, a file meeting the current business scene is obtained, and the content to be shared is determined or generated according to the file meeting the current business scene. Exemplarily, when the user is shopping with a shopping application, a picture is generated based on the currently displayed page, or a web page link of the currently displayed page is extracted, and the picture or the web page link is taken as the file meeting the current business scene.
Optionally, after at least one content to be shared is determined or generated, the target shared content and/or the sharing path is determined or generated according to a second trigger action on the content to be shared. Optionally, at least one content to be shared is determined or generated, and at least one content to be shared is displayed on a sharing interface on a preset area, so that a user executes a second trigger action based on the displayed content to be shared, the second trigger action is used for determining or generating target shared content and/or a sharing path, the target shared content is at least one item of content in the content to be shared, and the sharing path is used for indicating a path through which the content is shared to an application to be shared.
Illustratively, the content to be shared may first be uploaded to a server and then shared to the application to be shared through the server; or the content to be shared may first be cached in a preset cache region, from which it is retrieved and shared to the application to be shared; or the content to be shared may be transmitted to the application to be shared by calling a startActivity interface of the application to be shared. The sharing path is related to the source application corresponding to the content to be shared and/or to the application to be shared that receives the content to be shared, the application to be shared being the receiving end that receives the content to be shared.
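Illustratively, on an Android terminal the third path above (calling a startActivity interface of the application to be shared) could be sketched as follows; this is only one possible realization, the package and activity names are hypothetical, and ACTION_SEND is used here as one way to encapsulate the target sharing content in an intent object:

```kotlin
// Sketch only: encapsulate the target sharing content in an Intent and deliver it to the
// application to be shared through its activity interface via startActivity.
// "com.example.target" / "ShareReceiverActivity" are hypothetical names.
import android.content.Context
import android.content.Intent

fun shareTextToApp(context: Context, targetSharingContent: String) {
    val intent = Intent(Intent.ACTION_SEND).apply {
        type = "text/plain"
        putExtra(Intent.EXTRA_TEXT, targetSharingContent)  // encapsulate the content
        setClassName("com.example.target", "com.example.target.ShareReceiverActivity")
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent)  // call the activity interface of the application to be shared
}
```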
Optionally, the manner of determining or generating the sharing path includes at least one of:
acquiring a current business scene, determining or generating an application to be shared according to the current business scene, and determining or generating the sharing path according to the application to be shared;
obtaining historical sharing data, determining or generating an application to be shared according to the historical sharing data, and determining or generating the sharing path according to the application to be shared.
Optionally, the application to be shared may be determined according to the current business scene, and after the application to be shared is determined according to the current business scene, the sharing path is determined or generated according to the application to be shared. Optionally, different service scenes correspond to different applications: for example, if the current service scene is a work scene, the corresponding application is a work application; if the current service scene is a game scene, the corresponding application is a communication application such as WeChat. In the embodiment of the application, after the content to be shared is determined, the current service scene is obtained, the application corresponding to the current service scene is determined according to the correspondence between service scenes and applications, and the application to be shared is determined or generated according to the application corresponding to the current service scene. Optionally, the correspondence between service scenes and applications can be set by the user, or can be determined according to historical service scenes and historical applications: historical sharing data of the user's use of the intelligent terminal is collected, the historical service scenes and the usage data corresponding to the historical service scenes are extracted from the historical sharing data, where the usage data include the applications to which the user shared content when the service scene was the historical service scene, and the historical service scenes and the corresponding usage data are input to a neural network model so that the neural network model is trained on them, thereby determining or generating the correspondence between service scenes and applications.
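As a simplified illustration only, a plain frequency count over historical sharing data is used below in place of the neural network model described above; the SharingRecord fields are assumptions made for the sketch:

```kotlin
// Sketch only: derive a scene -> application correspondence from historical sharing data
// by counting which receiving application was used most often in each scene.
// (A frequency count stands in for the trained neural network model mentioned in the text.)
data class SharingRecord(val scene: String, val receivingApp: String)

fun buildSceneAppMap(history: List<SharingRecord>): Map<String, String> =
    history.groupBy { it.scene }
        .mapValues { (_, records) ->
            records.groupingBy { it.receivingApp }
                .eachCount()
                .entries.maxByOrNull { it.value }!!.key  // most frequent receiving application
        }

fun appForScene(sceneAppMap: Map<String, String>, currentScene: String): String? =
    sceneAppMap[currentScene]
```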
Optionally, the current service scene may be determined according to the content to be shared, by obtaining a type attribute of the content to be shared and determining the current service scene according to the type attribute. For example, when the content to be shared is work-related content, the current service scene is determined to be a work scene; when the content to be shared is a game invitation link, the current service scene is determined to be a game scene. Optionally, the current service scene may also be determined according to at least one of the application currently running in the foreground, the current time, and the current positioning information, which is not limited herein.
Optionally, the application to be shared may also be determined based on historical sharing data, and after the application to be shared is determined according to the historical sharing data, the sharing path is determined or generated according to the application to be shared. Optionally, the historical sharing data includes the sharing behavior data of each previous sharing, where the sharing behavior data includes, but is not limited to, a sharing time, shared content, a sharing location, a sharing source application and a sharing receiving application; the sharing source application is the source-side application of the shared content, and the sharing receiving application is the receiving-side application that receives the shared content. At least one of the content to be shared, the current time point, the current positioning information and the source application corresponding to the content to be shared is matched against the historical sharing data to obtain a sharing receiving application that matches the content to be shared, and the application to be shared is determined or generated according to that sharing receiving application.
Optionally, the application to be shared may also be determined by taking the application currently running in the foreground as the application to be shared, or by taking an application that has completed, in the background, the preloading of the resources required for starting it as the application to be shared.
Optionally, the application to be shared is determined or generated according to the current service scene, the history sharing data, the current application running in the foreground, or the application which has completed the preloading operation, and then the content to be shared is directly shared to the input window of the application to be shared, so that the data sharing efficiency is improved.
Optionally, when the content to be shared includes at least two items, an embodiment of the present application further provides a manner of determining or generating the target sharing content and/or the sharing path. Referring to fig. 4, the step S120 includes:
s1201: determining or generating target sharing content, a stopping position and/or stopping duration corresponding to the stopping position according to the second trigger action;
s1202: and determining or generating the sharing path according to the stop position and/or the stop duration.
Optionally, after the content to be shared is determined or generated, the content to be shared may be displayed in a sharing interface in a preset area, and then a target interface is displayed, where the target interface displays display windows corresponding to at least one application, so that a user may select an application to be shared from the displayed at least one application based on the target interface, and determine or generate the sharing path according to the application to be shared selected by the user, for example, referring to fig. 5, fig. 5 shows a schematic diagram of displaying the sharing interface and the target interface.
Optionally, since there are many applications in the intelligent terminal, placing all of them on the target interface would not only waste display resources but also prevent the user from quickly selecting the application to be shared. Therefore, the applications to which the user may wish to share are predicted, and the display windows of the predicted applications are displayed on the target interface. Optionally, the predicted applications may be determined according to the usage frequency of at least one application, with applications whose usage frequency is greater than or equal to a preset frequency taken as predicted applications; or applications matching the content to be shared may be obtained according to the content to be shared, and at least one application matching the content to be shared is taken as a predicted application; or the application running in the foreground may be taken as a predicted application; or the current business scene may be obtained and the applications matching the current business scene taken as predicted applications. In this way, the applications most likely to be shared to are displayed in the target interface, and the user can quickly select the application to be shared based on the target interface, thereby improving data sharing efficiency.
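By way of a sketch (the AppUsage type, the preset frequency of 10 and the six display slots are illustrative assumptions), the predicted applications shown on the target interface can be chosen from those whose usage frequency reaches a preset frequency:

```kotlin
// Sketch only: pick predicted applications for the target interface based on usage frequency.
data class AppUsage(val packageName: String, val usageCount: Int)

fun predictApplications(usages: List<AppUsage>, presetFrequency: Int = 10, maxSlots: Int = 6): List<String> =
    usages.filter { it.usageCount >= presetFrequency }  // keep frequently used applications
        .sortedByDescending { it.usageCount }           // most-used first
        .take(maxSlots)                                  // limited display windows on the target interface
        .map { it.packageName }
```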
Optionally, the second trigger action includes a second content dragging operation, where the second content dragging operation includes a second selection operation on at least one content to be shared in the sharing interface and a second dragging operation continuous with the second selection operation. Optionally, the target sharing content is determined or generated according to the second selection operation in the second content dragging operation, a stopping position and/or a stopping duration corresponding to the stopping position are determined or generated according to the second dragging operation, and the stopping position and/or the stopping duration are used for determining the application to be shared and the sharing path.
Referring to fig. 6, the S1202 step includes:
s12021: when the stay time is longer than or equal to the preset time, determining or generating the application to be shared corresponding to the stay position;
s12022: and determining or generating the sharing path according to the application to be shared.
Optionally, when the user selects target sharing content, the user drags the target sharing content from the sharing interface to the display position of the display window of one of the applications displayed on the target interface through the second dragging operation; the stopping position of the second dragging operation is obtained and compared with the display positions of the display windows of at least one application to determine or generate the target display window corresponding to the stopping position, the application corresponding to the target display window is taken as the application to be shared, and the sharing path is determined or generated according to the application to be shared.
Optionally, in the process of the user performing the second dragging operation, a situation may occur in which the drag stays briefly over a certain display window on the target interface, but that window is not the display window of the application to which the user needs to share. Therefore, in the embodiment of the present application, the stay duration corresponding to at least one stay position is obtained; when the stay duration is greater than or equal to the preset duration, this indicates that the application corresponding to the display window at that stay position is the application to which the user needs to share. The target display window corresponding to the stay position is then determined according to the stay position, the application corresponding to the target display window is taken as the application to be shared, and the sharing path is determined or generated according to the application to be shared.
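A minimal sketch of the stay-position and stay-duration logic is given below; the WindowBounds structure and the 500 ms preset duration are illustrative assumptions, not part of the claimed method:

```kotlin
// Sketch only: resolve the application to be shared from the stay position of the second
// drag operation, but only once the drag has stayed over one display window long enough.
data class WindowBounds(val packageName: String, val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(x: Int, y: Int) = x in left until right && y in top until bottom
}

class DwellTracker(private val presetDurationMs: Long = 500L) {
    private var currentPkg: String? = null
    private var enteredAt: Long = 0L

    /** Returns the package of the application to be shared once the stay duration is reached, else null. */
    fun onDragMoved(x: Int, y: Int, windows: List<WindowBounds>, nowMs: Long): String? {
        val pkg = windows.firstOrNull { it.contains(x, y) }?.packageName
        if (pkg != currentPkg) {          // moved to a different display window: restart timing
            currentPkg = pkg
            enteredAt = nowMs
            return null
        }
        return if (pkg != null && nowMs - enteredAt >= presetDurationMs) pkg else null
    }
}
```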
Optionally, after the sharing path is generated, executing a sharing action according to the sharing path and/or the target shared content, including:
sharing the target sharing content to the application to be shared corresponding to the sharing path so as to inject the target sharing content into an input window of the application to be shared;
or, sharing the content to be shared to the application to be shared corresponding to the sharing path, so as to inject the content to be shared into an input window of the application to be shared;
or, sharing the target sharing content to a preset application corresponding to a preset sharing path, so that the content to be shared can be shared to an application without the user selecting the application to be shared through the second trigger action. One illustrative way an input window might receive such injected content is sketched below.
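Purely as an illustration of how dropped content could be received by an input window on Android (an assumed realization, not the method of the disclosure), the sketch below attaches a drag listener to an EditText and appends the dropped text:

```kotlin
import android.view.DragEvent
import android.widget.EditText

// Minimal sketch: let an input window (EditText) accept text dropped from another window.
fun enableDropIntoInput(input: EditText) {
    input.setOnDragListener { view, event ->
        when (event.action) {
            DragEvent.ACTION_DRAG_STARTED -> true            // declare interest in the drag
            DragEvent.ACTION_DROP -> {
                val item = event.clipData?.getItemAt(0)
                val text = item?.coerceToText(view.context)  // handles plain text and text-like URIs
                if (text != null) (view as EditText).append(text)
                true
            }
            else -> true
        }
    }
}
```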
Optionally, when the application to be shared is a communication application, the input window of the application to be shared may be the input window of a preset contact in the application to be shared, or the input window of a target contact currently selected by the user; when the application to be shared is another type of application, such as document editing software, an edit page of the application to be shared may be used as the input window. Optionally, the input window is used to receive the target shared content.
Optionally, the content to be shared may also be shared directly to the application to be shared according to the first trigger action, without determining or generating the application to be shared through a second trigger action and then determining or generating the sharing path from the application to be shared. Optionally, when the user drags the content to be shared in the source application, through a content dragging operation, to the position of the application to be shared on the display screen of the intelligent terminal, the content to be shared is directly shared to that application as the target shared content. In this way, no additional sharing interface for displaying the content to be shared and no additional target interface for displaying the display windows of applications need to be arranged on the display screen, and the sharing action can be completed in one step based on the display window of at least one application already shown on the display screen of the intelligent terminal.
Optionally, when the user needs to share at least one target shared content at the same time, all target shared contents may first be stored by setting a sharing interface. The application to be shared is then determined through a second trigger action initiated by the user, the sharing path is determined or generated according to the application to be shared, and all target shared contents are shared at one time to the application to be shared corresponding to the sharing path, so that the target shared contents are injected into the input window of the application to be shared.
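One conventional Android mechanism that could serve this batch-sharing step, shown here only as an assumed example, is an ACTION_SEND_MULTIPLE intent carrying several content URIs; the MIME type and package restriction below are placeholders.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Minimal sketch: hand several pieces of target shared content to one application at once.
fun shareAllAtOnce(context: Context, uris: ArrayList<Uri>, targetPackage: String) {
    val intent = Intent(Intent.ACTION_SEND_MULTIPLE).apply {
        type = "*/*"                                        // mixed content types (assumption)
        putParcelableArrayListExtra(Intent.EXTRA_STREAM, uris)
        setPackage(targetPackage)                           // restrict to the application to be shared
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
    }
    context.startActivity(intent)
}
```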
Optionally, when the user actually performs the content dragging operation of the first trigger action or the second content dragging operation of the second trigger action, after the content to be shared is selected from the source application based on the selection operation of the first trigger action, or the target shared content is selected from the sharing interface based on the second trigger action, the content to be shared or the target shared content may be dragged directly, or a drag identifier of the content to be shared or of the target shared content may be generated and dragged instead.
Optionally, after the user executes a content selection operation based on a first trigger action, the content to be shared corresponding to the content selection operation is determined, and a second trigger action initiated by the user continues to be received, where the second trigger action includes a sliding action. A stop position and/or a stop duration corresponding to the stop position are determined according to the sliding action, and when the stop duration is greater than or equal to a preset duration, the application to be shared corresponding to the stop position is determined, so that the content to be shared selected by the content selection operation is shared to that application.
Optionally, in an actual sharing scenario, the application to be shared may lack an adaptation interface for receiving the target shared content. For example, when data is dragged from the source application into a chat interface of the XX application, the XX application may show no reaction because it does not adapt to the drag-and-drop interface, resulting in a poor user experience. Based on this, referring to fig. 7, the step S130 includes at least one of the following steps:
s1301: sending the target sharing content to an application to be shared corresponding to the sharing path;
s1302: converting the target sharing content into target sharing content in a preset format, and sending the target sharing content in the preset format to an application to be shared corresponding to the sharing path;
s1303: and encapsulating the target sharing content in an intent object, and calling an activity interface of the application to be shared so as to transmit the intent object to the application to be shared through the activity interface.
Optionally, when an adaptation interface exists between the source application corresponding to the target shared content and the application to be shared, the target shared content is directly sent to the application to be shared corresponding to the sharing path.
Optionally, in some cases the source application and the application to be shared may be the same application, and in that case the content to be shared may still be shared to the application to be shared. Optionally, the intelligent terminal is a dual-screen mobile phone, that is, the intelligent terminal includes a first screen and a second screen, where the first screen is the first display area and the second screen is the second display area.
Optionally, when no adaptation interface exists between the source application corresponding to the target sharing content and the application to be shared, the target sharing content may be converted into target sharing content in a preset format, and the target sharing content in the preset format is sent to the application to be shared corresponding to the sharing path. The preset format is a pasting board (clipboard) format, so that the target sharing content is shared to the application to be shared through the pasting board.
Optionally, when no adaptation interface exists between the source application corresponding to the target sharing content and the application to be shared, the target sharing content may instead be encapsulated in an intent object, and an activity interface of the application to be shared is called, so that the intent object is transmitted to the application to be shared through the activity interface and the target sharing content is delivered to the application to be shared through the intent.
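Bringing the branches S1301 to S1303 together, the following Kotlin sketch is one assumed way to dispatch between a direct send, the pasting-board (clipboard) fallback, and the intent fallback on Android; the hasAdaptationInterface flag and the directSend callback are placeholders, not APIs from the disclosure.

```kotlin
import android.content.ClipData
import android.content.ClipboardManager
import android.content.Context
import android.content.Intent

enum class FallbackMode { CLIPBOARD, INTENT }

// Minimal sketch of step S130: send directly if the application to be shared adapts to the
// source application, otherwise fall back to the clipboard (S1302) or an intent (S1303).
fun executeSharingAction(
    context: Context,
    targetSharedContent: String,
    targetPackage: String,
    hasAdaptationInterface: Boolean,                     // assumed result of an earlier capability check
    fallback: FallbackMode = FallbackMode.INTENT,
    directSend: (String, String) -> Unit = { _, _ -> }   // placeholder for the adapted interface (S1301)
) {
    if (hasAdaptationInterface) {
        directSend(targetSharedContent, targetPackage)                               // S1301
        return
    }
    when (fallback) {
        FallbackMode.CLIPBOARD -> {                                                  // S1302
            val clipboard = context.getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
            clipboard.setPrimaryClip(ClipData.newPlainText("shared_content", targetSharedContent))
        }
        FallbackMode.INTENT -> {                                                     // S1303
            val intent = Intent(Intent.ACTION_SEND).apply {
                type = "text/plain"
                putExtra(Intent.EXTRA_TEXT, targetSharedContent)  // content travels inside the intent object
                setPackage(targetPackage)                          // resolve only to the application to be shared
            }
            context.startActivity(intent)
        }
    }
}
```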
In the embodiment of the application, at least one content to be shared is determined or generated based on a first trigger action and/or system information, target shared content and/or a sharing path are determined or generated according to a second trigger action on the content to be shared, and a sharing action is executed according to the sharing path and/or the target shared content. In this way, the user can drag the content to be shared into the input window of the application to be shared based on the first trigger action, without first selecting the content to be shared, dragging it to a sharing interface, and then selecting the application to be shared, so that sharing efficiency is improved. In addition, the embodiment of the application can automatically select at least one content to be shared for the user based on the system information, without requiring the user's own selection: the at least one content to be shared is displayed on a sharing interface, the target shared content and the stop position and stop duration of a second trigger action are determined in response to the second trigger action of the user, the application to be shared is determined according to the stop duration and the stop position, and the target shared content is then shared directly into the application to be shared, thereby improving sharing efficiency. In addition, by dragging the content to be shared directly within the sharing interface, a plurality of contents to be shared can be sent directly and simultaneously to the application to be shared in response to a second trigger action of the user, without being shared one by one, which further improves sharing efficiency. In addition, the embodiment of the application further provides that, before the target shared content is shared to the application to be shared, it is judged whether the application to be shared has an adaptation interface matching the source application of the target shared content; when the application to be shared has no such adaptation interface, the target shared content is converted into target shared content in a preset format and the target shared content in the preset format is sent to the application to be shared corresponding to the sharing path, or the target shared content is encapsulated in an intent object and the activity interface of the application to be shared is called, so that the intent object is transmitted to the application to be shared through the activity interface. Sharing can therefore be performed between all applications, which widens the application range of data sharing and improves data sharing efficiency.
Second embodiment
Referring to fig. 8, according to the first embodiment, the method further includes:
s110: determining or generating at least one content to be shared based on the first trigger action and/or the system information;
s1201: determining or generating target sharing content, a stopping position and/or stopping duration corresponding to the stopping position according to the second trigger action;
s140: when the stop position corresponding to the second trigger action is detected to be located in a first preset area of a first screen, the stop position is switched to a second preset area of a second screen, the stop position and/or the stop time length of the second trigger action in the second preset area are/is obtained, the sharing path is determined or generated according to the stop position and/or the stop time length, and the sharing action is executed according to the sharing path and/or the target sharing content;
and/or,
s150: and when the stop position corresponding to the second trigger action is detected to be located in a third preset area of the display screen of the intelligent terminal, the target sharing content is sent to the target terminal.
In the embodiment of the application, the intelligent terminal includes a first screen and a second screen. When the user needs to share target sharing content from a source application on the first screen to an application to be shared on the second screen, the source application and the application to be shared are located on different screens, and in the prior art the target sharing content cannot be shared to the application to be shared on the other screen. Therefore, by providing stay-position detection and directly switching the stay position to the other screen according to the area in which the stay position is located, the user can continue to execute the second trigger action based on the switched stay position, thereby realizing content sharing between the first screen and the second screen.
Optionally, the first preset area is an edge area of the display screen of the first screen, and may be the upper right corner area, the lower right corner area, the upper left corner area, or the lower left corner area. It can be understood that, in general, the display position of the at least one application is located in the center of the display screen, and the input window of an application is likewise generally located in the center of the display screen rather than in its edge area.
Optionally, before determining the stopping position, in order to prevent false triggering, an embodiment of the present application further provides that detecting whether the second triggering action satisfies a first triggering condition, where the first triggering condition includes at least one of:
the second trigger action meets the preset trigger action;
and the speed of the second dragging operation in the second trigger action is greater than or equal to the preset speed.
Optionally, when the second trigger action meets the first trigger condition, responding to the second trigger action, and determining a dwell position and/or a dwell time length of the second trigger action.
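As an assumed illustration of the speed condition above, the sketch below estimates the drag speed from two touch samples and compares it with a preset speed; the TouchSample type and the default threshold are not from the disclosure.

```kotlin
import kotlin.math.hypot

// Minimal sketch: check whether the second dragging operation is fast enough to count as a trigger.
data class TouchSample(val x: Float, val y: Float, val timeMs: Long)

fun meetsSpeedCondition(a: TouchSample, b: TouchSample, presetSpeedPxPerMs: Float = 1.5f): Boolean {
    val dt = (b.timeMs - a.timeMs).coerceAtLeast(1)          // avoid division by zero
    val distance = hypot((b.x - a.x).toDouble(), (b.y - a.y).toDouble())
    return distance / dt >= presetSpeedPxPerMs.toDouble()
}
```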
Optionally, when it is detected that the stopping position is located in the first preset area, and/or that the stopping duration of the stopping position in the first preset area is greater than or equal to a preset duration, a position switching identifier is determined or generated and directly switched to the second preset area of the second screen, so that the position switching identifier is displayed in the second preset area. The stopping position can then be determined directly based on the switching identifier displayed in the second preset area of the second screen, and the user continues to drag the position switching identifier by touching it. When a drag operation on the position switching identifier is detected, the stopping position and/or the stopping duration of the position switching identifier in the second preset area are determined according to the drag operation; when the stopping duration is greater than or equal to the preset duration, the application corresponding to that stopping position is taken as the application to be shared, and the sharing path is then determined or generated according to the application to be shared.
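The following Kotlin sketch is only a schematic illustration of the position-switching logic described above, with screens and preset areas modeled as simple rectangles; the Region, DragSample, and SwitchTarget types, the thresholds, and the choice of jump target are all assumptions.

```kotlin
// Minimal sketch: when a drag stays long enough inside the first preset area of the first screen,
// jump the position switching identifier (e.g. a floating ball) to the second preset area of the second screen.
data class Region(val screen: Int, val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(screen: Int, x: Int, y: Int) =
        screen == this.screen && x in left until right && y in top until bottom
}
data class DragSample(val screen: Int, val x: Int, val y: Int, val stayDurationMs: Long)
data class SwitchTarget(val screen: Int, val x: Int, val y: Int)

class CrossScreenSwitcher(
    private val firstPresetArea: Region,      // e.g. an edge or corner area of the first screen (assumption)
    private val secondPresetArea: Region,     // area of the target interface on the second screen (assumption)
    private val presetDurationMs: Long = 400  // assumed preset duration
) {
    /** Returns where the position switching identifier should jump to, or null to keep it in place. */
    fun maybeSwitch(sample: DragSample): SwitchTarget? {
        if (!firstPresetArea.contains(sample.screen, sample.x, sample.y)) return null
        if (sample.stayDurationMs < presetDurationMs) return null
        // Jump to the centre of the second preset area so dragging can continue there.
        val cx = (secondPresetArea.left + secondPresetArea.right) / 2
        val cy = (secondPresetArea.top + secondPresetArea.bottom) / 2
        return SwitchTarget(secondPresetArea.screen, cx, cy)
    }
}
```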
Optionally, the second preset area is the area in which a target interface displayed on the second screen is located, and the target interface is used to display the display windows of applications on the second screen. By jumping the stay position directly to the target interface that displays the application windows on the second screen, the user can continue dragging from the jumped stay position to determine the application to be shared on the target interface of the second screen. Alternatively, the position switching identifier may be a floating ball or a floating bubble, which is not limited here. Optionally, the display windows shown on the target interface of the second screen may be obtained by screening recommended applications from at least one application on the second screen and displaying the display windows of the recommended applications on the target interface. The recommended applications may be screened by taking a currently running application on the second screen as a recommended application, taking an application matching the target shared content as a recommended application, screening, based on the current business scene, an application matching the current business scene from at least one application on the second screen, or taking an application matching the source application corresponding to the target shared content as a recommended application.
Optionally, when the user needs to share content to be shared from the intelligent terminal to another terminal, after the user executes the second trigger action on the content to be shared, the target shared content, or the drag identifier, and it is detected that the stop position of the second trigger action is located in a third preset area of the display screen of the intelligent terminal, the target shared content is sent to the target terminal; the target terminal is connected with the intelligent terminal so as to receive the target shared content sent by the intelligent terminal.
Optionally, before sending the target sharing content to the target terminal, detecting whether the second trigger action meets a second trigger condition, where the second trigger condition includes at least one of:
the second trigger action meets the preset trigger action;
the speed of a second dragging operation in the second trigger action is greater than or equal to a preset speed;
and the target sharing content corresponding to the second trigger action conforms to a preset data type, and the preset data type is related to the target terminal.
Optionally, in actual operation, the versions of the applications installed on different terminals may differ, and when the versions differ, the target sharing content on the intelligent terminal may not be able to be sent to the application to be shared on the target terminal; this is why the data type of the target sharing content is checked against a preset data type related to the target terminal.
Optionally, before the target shared content is shared to the target terminal, the target terminal may also determine whether the second trigger action meets the second trigger condition. Optionally, after responding to the second trigger action, the intelligent terminal sends the data type of the target shared content corresponding to the second trigger action to the target terminal, so that the target terminal detects whether the data type conforms to the preset data type; when the target terminal detects that the data type conforms to the preset data type, it sends a verification success response message to the intelligent terminal, and when the intelligent terminal receives the verification success response message, it sends the target shared content to the target terminal.
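As a purely illustrative sketch of this verification handshake, assuming a generic request/response transport between the two terminals (the Transport interface and both method names below are assumptions, not APIs from the disclosure):

```kotlin
// Minimal sketch: verify the data type with the target terminal before sending the content.
interface Transport {
    fun requestTypeCheck(dataType: String): Boolean   // true == "verification success" response received
    fun sendContent(payload: ByteArray)
}

fun shareToTargetTerminal(transport: Transport, dataType: String, payload: ByteArray): Boolean {
    // Step 1: send only the data type and wait for the target terminal's verification result.
    val verified = transport.requestTypeCheck(dataType)
    if (!verified) return false        // data type does not conform to the preset type: abort
    // Step 2: the verification success response arrived, so send the target shared content itself.
    transport.sendContent(payload)
    return true
}
```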
Optionally, after receiving the target sharing content, the target terminal may send the target sharing content to an application to be shared in the target terminal, and may also share the target sharing content to the application to be shared in the target terminal through a pasting board or an intent, so as to complete sharing of data in applications of different terminals.
In the embodiment of the application, whether the stopping position of the second trigger action is located in the first preset area of the first screen is detected; if so, the stopping position is directly switched to the second preset area of the second screen, so that the second dragging operation in the second trigger action continues to be executed based on the switched stopping position. The stopping position and/or the stopping duration of the second trigger action in the second preset area are obtained, the sharing path is determined or generated according to the stopping position and/or the stopping duration, and the sharing action is executed according to the target sharing content and/or the sharing path, so that the target sharing content of the source application on the first screen is shared to the application to be shared on the second screen and directly injected into the input window of that application. Dragging of the target sharing content between different screens, and sharing of the target sharing content between applications on different screens, are thereby realized. In addition, according to the embodiment of the application, the target sharing content is sent to the target terminal when the stop position is detected to be located in the third preset area, so that content sharing between different terminals is realized.
Third embodiment
Referring to fig. 9, an embodiment of the present application further provides a sharing method, where the method is applicable to an intelligent terminal, and includes the following steps:
s210: determining or generating a sharing path based on the first trigger action;
s220: determining or generating at least one content to be shared according to the sharing path;
s230: determining or generating target sharing content according to a second trigger action on the content to be shared;
s240: and executing a sharing action according to the sharing path and the target sharing content.
In the embodiment of the application, the first trigger action is used to select the application to be shared. When the first trigger action is received, the sharing path is determined or generated based on the application to be shared selected by the first trigger action. The first trigger action may be a long-press operation: the application corresponding to the position of the long-press operation is taken as the application to be shared, and the sharing path is determined or generated according to the application to be shared. The first trigger action may also be a mid-air gesture: an application is selected according to the mid-air gesture, the selected application is taken as the application to be shared, and the sharing path is determined or generated according to the application to be shared.
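For illustration only, the sketch below uses Android's GestureDetector to treat a long press on a view that represents an application as the first trigger action; the onAppSelected callback stands in for building the sharing path and is an assumption, not part of the disclosure.

```kotlin
import android.view.GestureDetector
import android.view.MotionEvent
import android.view.View

// Minimal sketch: a long press on a view representing an application selects the application to be shared.
fun attachLongPressSelection(appView: View, packageName: String, onAppSelected: (String) -> Unit) {
    val detector = GestureDetector(appView.context, object : GestureDetector.SimpleOnGestureListener() {
        override fun onLongPress(e: MotionEvent) {
            onAppSelected(packageName)   // the application under the long press becomes the app to be shared
        }
    })
    appView.setOnTouchListener { _, event -> detector.onTouchEvent(event) }
}
```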
Optionally, after the sharing path is determined or generated, at least one content to be shared is determined or generated according to the sharing path. It can be understood that, in actual operation, different applications receive different content: for an album application, the received content is pictures; for a file editing application, the received content is tables, documents, and the like; for other applications, the received content may be work documents, work tables, and the like. Therefore, the embodiment of the application provides a method for determining the content to be shared based on the sharing path, so that content can be shared between applications without the user actively selecting or dragging the content to be shared, which improves the intelligence and efficiency of data sharing.
Alternatively, referring to fig. 10, the S220 includes:
s2201: determining or generating an application to be shared according to the sharing path;
s2202: acquiring at least one content matched with the attribute information according to the attribute information of the application to be shared;
s2203: determining at least one content matched with the attribute information as the content to be shared;
Optionally, the application to be shared is determined or generated based on the sharing path, and at least one content matched with the attribute information of the application to be shared is obtained according to that attribute information. The attribute information includes a function attribute, and content matched with the function attribute is taken as the content to be shared; the function attribute includes, but is not limited to, "tool class", "work class", "game class", "multimedia class", and the like. For example, when the application to be shared is a work-class application, the function attribute is "work class", and a work document or a contract file can be taken as content matched with "work class". Optionally, the attribute information may further include an industry attribute, where the industry attribute includes, but is not limited to, banking, securities, funds, insurance, trusts, consumer finance, mobile, internet, telecommunications, institutions, healthcare, energy, education, transportation, newspaper media, new media, e-commerce, audio and video, news reading, life services, travel, social networking, camera and photography, and the like; for example, when the industry attribute of the application to be shared is healthcare, medical record information may be taken as content matched with the healthcare industry. Optionally, the attribute information may further include usage data of at least one application, where the usage data includes at least one of a usage duration, a usage time, a usage location, and a usage frequency. For example, the application currently running in the foreground is determined according to the usage data, the editing time of at least one content is determined, and the editing time is matched with the current usage time of the foreground application to obtain content whose time interval from the current usage time is less than or equal to a preset time interval; that content is taken as the content to be shared.
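The attribute-based screening could look roughly like the following Kotlin sketch; the CandidateContent and AppAttributes types, the "work" style function tags, and the preset time interval are assumptions for illustration.

```kotlin
// Minimal sketch of step S2202: pick contents whose attributes match the application to be shared.
data class CandidateContent(val name: String, val functionTag: String, val lastEditedMs: Long)
data class AppAttributes(val functionAttribute: String, val currentUseTimeMs: Long)

fun matchContents(
    candidates: List<CandidateContent>,
    app: AppAttributes,
    presetIntervalMs: Long = 24 * 60 * 60 * 1000L   // assumed preset time interval (one day)
): List<CandidateContent> = candidates
    .filter { it.functionTag == app.functionAttribute }                    // e.g. "work" contents for a work-class app
    .filter { app.currentUseTimeMs - it.lastEditedMs <= presetIntervalMs } // recently edited contents
```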
Optionally, after the at least one content matched with the attribute information is obtained, it is determined as the content to be shared, and a sharing action may then be executed directly according to the content to be shared and the sharing path, so that the content to be shared is injected into the input window of the application to be shared.
Optionally, in order to improve sharing accuracy, in the embodiment of the present application, at least one content to be shared is displayed on a sharing interface after it is determined, so that the user can browse the content to be shared and select the target shared content from the displayed at least one content to be shared.
Optionally, after the content to be shared is displayed, a second trigger action of the user on the content to be shared is received. The second trigger action is used to screen out the target shared content from at least one content to be shared, or to indicate a sharing confirmation instruction for the content to be shared, where the sharing confirmation instruction is used to take the content to be shared as the target shared content; the second trigger action includes, but is not limited to, a click action, a double-click action, and a drag action. After the second trigger action is received, the target shared content is determined or generated according to the second trigger action, and a sharing action is then executed according to the sharing path and the target shared content, so as to share the target shared content to the application to be shared and inject it into the input window of the application to be shared.
According to the embodiment of the application, after a sharing path is determined or generated by receiving a first trigger action of the user, the application to be shared is determined based on the sharing path, and content to be shared that matches the application to be shared is selected according to the attribute information of the application to be shared. After the content to be shared is determined, it is displayed on a sharing interface so that the user can confirm the target shared content or screen it out from at least one content to be shared. After the target shared content is determined, a sharing action is executed according to the sharing path and the target shared content, and the target shared content is injected into the input window of the application to be shared. Because the embodiment of the application screens out the content to be shared for the user based on the attribute information of the application to be shared, the user does not need to actively select the content to be shared, which simplifies the operation and improves data sharing efficiency.
The embodiment of the present application further provides an intelligent terminal, where the intelligent terminal includes a memory and a processor, and the memory stores a sharing program, and the sharing program is executed by the processor to implement the steps of the sharing method in any of the above embodiments.
An embodiment of the present application further provides a storage medium, where a sharing program is stored on the storage medium, and the sharing program, when executed by a processor, implements the steps of the sharing method in any of the above embodiments.
In the embodiments of the intelligent terminal and the storage medium provided in the present application, all technical features of any one of the embodiments of the sharing method may be included, and the expanding and explaining contents of the specification are basically the same as those of the embodiments of the method, and are not described herein again.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions will be generally described only in detail at the first occurrence, and when the description is repeated later, the detailed description will not be repeated in general for brevity, and when understanding the technical solutions and the like of the present application, reference may be made to the related detailed description before the description for the same or similar term concepts, technical solutions and/or application scenario descriptions and the like which are not described in detail later.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solution of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, the scope of the present application should be considered as being described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a storage medium or transmitted from one storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The storage medium may be any available medium that can be accessed by a computer or a data storage device including one or more available media integrated servers, data centers, and the like. The usable medium may be a magnetic medium (e.g., floppy Disk, memory Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all the equivalent structures or equivalent processes that can be directly or indirectly applied to other related technical fields by using the contents of the specification and the drawings of the present application are also included in the scope of the present application.

Claims (10)

1. A sharing method, comprising the steps of:
s110: determining or generating at least one content to be shared based on the first trigger action and/or the system information;
s120: determining or generating target sharing content and/or a sharing path according to a second trigger action on the content to be shared;
s130: and executing a sharing action according to the sharing path and/or the target sharing content.
2. The method of claim 1, wherein the S110 step comprises at least one of:
detecting a content dragging operation and/or a content selecting operation aiming at a source application, and determining or generating the content to be shared according to the content dragging operation and/or the content selecting operation;
determining current time according to the system information, determining a target file of which the time interval with the current time is smaller than a preset time interval according to the editing time of at least one file, and determining or generating the content to be shared according to the target file;
determining current positioning information and current time according to the system information, and determining or generating the content to be shared according to the current positioning information and the current time;
determining a current service scene according to the system information, acquiring files meeting the current service scene, and determining or generating the content to be shared according to the files meeting the current service scene.
3. The method of claim 1, wherein the S120 step comprises:
determining or generating target sharing content, a stopping position and/or stopping duration corresponding to the stopping position according to the second trigger action;
and determining or generating the sharing path according to the stop position and/or the stop time.
4. The method of claim 3, comprising at least one of:
the step of determining or generating the sharing path according to the stopping position and/or the stopping time comprises: when the stay time length is greater than or equal to a preset time length, determining or generating an application to be shared corresponding to the stay position, and determining or generating the sharing path according to the application to be shared;
the method further comprises the following steps: when the staying position corresponding to the second trigger action is detected to be located in a first preset area of a first screen, the staying position is switched to a second preset area of a second screen, the staying position and/or the staying duration of the second trigger action in the second preset area are/is obtained, and the sharing path is determined or generated according to the staying position and/or the staying duration;
the method further comprises the following steps: and when the stop position corresponding to the second trigger action is detected to be located in a third preset area of the display screen of the intelligent terminal, the target sharing content is sent to the target terminal.
5. The method of any of claims 1 to 4, wherein the manner in which the shared path is determined or generated comprises at least one of:
acquiring a current business scene, determining or generating an application to be shared according to the current business scene, and determining or generating the sharing path according to the application to be shared;
obtaining historical sharing data, determining or generating an application to be shared according to the historical sharing data, and determining or generating the sharing path according to the application to be shared.
6. The method according to any of claims 1 to 4, wherein the S130 step comprises at least one of:
sending the target sharing content to an application to be shared corresponding to the sharing path;
converting the target sharing content into target sharing content in a preset format, and sending the target sharing content in the preset format to an application to be shared corresponding to the sharing path;
and encapsulating the target sharing content in an intent object, and calling an activity interface of the application to be shared so as to transmit the intent object to the application to be shared through the activity interface.
7. A sharing method, comprising the steps of:
s210: determining or generating a sharing path based on the first trigger action;
s220: determining or generating at least one content to be shared according to the sharing path;
s230: determining or generating target sharing content according to a second trigger action on the content to be shared;
s240: and executing a sharing action according to the sharing path and the target sharing content.
8. The method of claim 7, wherein the S220 step comprises:
determining or generating an application to be shared according to the sharing path;
acquiring at least one content matched with the attribute information according to the attribute information of the application to be shared;
and determining at least one content matched with the attribute information as the content to be shared.
9. An intelligent terminal, characterized in that the intelligent terminal comprises: a memory and a processor, wherein the memory has stored thereon a sharing program which, when executed by the processor, implements the steps of the sharing method according to any one of claims 1 to 8.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the sharing method according to any one of claims 1 to 8.