CN112416230B - Object processing method and device

Info

Publication number: CN112416230B
Authority: CN (China)
Prior art keywords: input, target, input interface, screen, interface
Legal status: Active (granted)
Application number: CN202011348838.1A
Other languages: Chinese (zh)
Other versions: CN112416230A
Inventor: 谢能显
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Events: application filed by Vivo Mobile Communication Co Ltd; priority to CN202011348838.1A; publication of CN112416230A; application granted; publication of CN112416230B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/543 User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]

Abstract

The application discloses an object processing method, an object processing device and an electronic device, which belong to the technical field of communication. The method is applied to an electronic device including a first screen and a second screen, and includes: receiving a first input to a first object within a first interface displayed by the first screen; in response to the first input, temporarily storing the first object and displaying, in the second screen, a first object identifier corresponding to the first object; in the case that the first interface in the first screen is switched to a target input interface, acquiring, in the target input interface, an object input interface whose type matches that of the first object; and adding the first object to the object input interface. According to the object processing method, the user only needs to perform the first input to trigger the system to add the first object from the current interface to the target input interface, without performing complex interface-switching operations, so the operation is convenient and fast.

Description

Object processing method and device
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to an object processing method and device.
Background
With the rapid development of the mobile internet and the growing dependence of young, fashion-conscious users on electronic devices, electronic devices have become increasingly popular. Through an electronic device, a user can not only make and receive calls anytime and anywhere, but also install many different types of application programs, and these installed applications make the user's life and work more convenient.
During the use of an electronic device, users often need to copy text, pictures, files, text messages, or the like from interface A of a first application program to interface B of a second application program. A common way of meeting this need currently consists of the following steps. First step: perform a first input in interface A to trigger the system to save the target object. Second step: perform a second input to trigger the system to exit the first application program. Third step: perform a third input to trigger the system to start the second application program. Fourth step: perform a fourth input to trigger the system to open interface B. Fifth step: perform a fifth input in interface B to trigger the system to copy or add the target object to the target position in interface B. It can be seen that, over the whole operation process, the user needs to perform not only the inputs related to saving and copying the target object, but also the inputs related to switching between application programs, so the operation process is cumbersome.
Disclosure of Invention
The embodiment of the application aims to provide an object processing method that can solve the problem that the existing way of copying a target object across application programs involves a cumbersome operation process.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an object processing method, where the object processing method is applied to an electronic device including a first screen and a second screen, where the method includes: receiving a first input to a first object within a first interface of the first screen display; responding to the first input, temporarily storing the first object and displaying a first object identifier corresponding to the first object in the second screen; under the condition that the first interface in the first screen is switched to a target input interface, acquiring an object input interface matched with the first object type in the target input interface; adding the first object to the object input interface.
In a second aspect, an embodiment of the present application provides an object processing apparatus applied to an electronic device including a first screen and a second screen, where the apparatus includes: a first receiving module, used for receiving a first input to a first object in a first interface displayed by the first screen; a display module, used for responding to the first input, temporarily storing the first object and displaying a first object identifier corresponding to the first object in the second screen; an acquisition module, used for acquiring an object input interface matched with the first object type in a target input interface under the condition that the first interface in the first screen is switched to the target input interface; and an adding module, used for adding the first object to the object input interface.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a first input to a first object in a first interface displayed on a first screen is received; in response to the first input, the first object is temporarily stored and a first object identifier corresponding to the first object is displayed in the second screen; in the case that the first interface in the first screen is switched to a target input interface, an object input interface in the target input interface whose type matches that of the first object is acquired; and the first object is added to the object input interface. In this way, the user only needs to perform the first input to trigger the system to add the first object from the current interface to the target input interface, without performing complex interface-switching operations, so the operation is convenient and fast.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without inventive labor.
FIG. 1 is a flow chart illustrating the steps of an object processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a first dual-screen interface according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a second dual-screen interface according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a third dual-screen interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a fourth dual-screen interface according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a fifth dual-screen interface according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a sixth dual-screen interface according to an embodiment of the present application;
FIG. 8 is a block diagram illustrating the structure of an object processing apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram illustrating the structure of an electronic device according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of such objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The object processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of an object processing method according to an embodiment of the present application is shown.
The object processing method of the embodiment of the application comprises the following steps:
step 101: a first input is received for a first object within a first interface of a first screen display.
The object processing method provided by the embodiment of the application is applied to an electronic device including a first screen and a second screen. The first screen and the second screen may be two independent display areas on a flexible screen, or two independent screens.
There may be one or more first objects, and a first object may be any type of reusable data, such as text information, a file, an image, or a video. The first input is used to trigger the system to temporarily store the first object, and the first input may be a long-press, click, or slide operation on the first object, or the like. In the case that there are a plurality of first objects, the first input may be performed on the plurality of first objects sequentially or simultaneously.
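The patent text does not include source code; purely as an illustration, the following Kotlin sketch models how Step 101 could be handled, where every type, value, and function name is an assumption rather than part of the disclosed method:

```kotlin
// Illustrative sketch only; all names are assumptions, not part of the disclosure.
enum class ObjectType { TEXT, IMAGE, FILE, VIDEO }

data class FirstObject(val id: Long, val type: ObjectType, val displayName: String)

enum class Gesture { LONG_PRESS, CLICK, SLIDE_TO_SECOND_SCREEN }

/** Step 101: any of the listed gestures on one or more selected objects counts as
 *  the first input and triggers temporary storage (Step 102). */
fun onFirstInput(gesture: Gesture, selected: List<FirstObject>,
                 stage: (FirstObject) -> Unit) {
    when (gesture) {
        Gesture.LONG_PRESS, Gesture.CLICK, Gesture.SLIDE_TO_SECOND_SCREEN ->
            selected.forEach(stage)   // works for one object or several selected at once
    }
}
```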
Step 102: and responding to the first input, temporarily storing the first object and displaying a first object identifier corresponding to the first object in a second screen.
The first object identifier may be the first object itself or an identifier of the first object. For example, if the first object is a short piece of text information, the first object identifier may be the text information itself; if the first object is a file, the first object identifier may be an identifier of the file; and if the first object is an image, the first object identifier may be a thumbnail of the image.
Ways to temporarily store the first object may include, but are not limited to: local temporary storage and cloud temporary storage.
The mode of temporarily storing the first object may be indicated by the user's first input, or a default temporary storage mode may be set by the system, and the user can switch the system's default temporary storage mode.
Fig. 2 is a schematic diagram of a first dual-screen interface. As shown in fig. 2, after user A performs a first input on all the content sent to user B in the current interface, taking it as first objects in the first screen 201, the system temporarily stores each first object in a local temporary storage area. In actual operation, the user can move a first object to the local temporary storage area of the second screen 202 through the first input, triggering the system to temporarily store the first object in the local temporary storage area.
In the case that the system's default temporary storage mode is local temporary storage, the user may move the first object to any position of the second screen 202 through the first input and trigger the system to temporarily store the first object in the local temporary storage area. Alternatively, the user may move the first object to the "cloud temporary storage area" of the second screen 202 through the first input and trigger the system to temporarily store the first object in the cloud temporary storage area.
It should be noted that the above merely lists several exemplary possible schemes for selecting the temporary storage mode; the actual implementation process is not limited thereto, and the specific scheme for selecting the temporary storage mode can be flexibly set by a person skilled in the art.
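As an illustration only (all names are assumptions, not part of the disclosure), a minimal Kotlin sketch of how the first object identifier and the temporary storage mode described above could be derived:

```kotlin
// Illustrative sketch; names are assumptions, not taken from the patent.
enum class ObjectType { TEXT, IMAGE, FILE, VIDEO }
enum class StagingMode { LOCAL, CLOUD }

data class FirstObject(val type: ObjectType, val text: String? = null,
                       val fileName: String? = null)

/** The identifier can be the object itself or a stand-in for it:
 *  text -> the text itself, file -> its file name, image/video -> a thumbnail label. */
fun objectIdentifier(obj: FirstObject): String = when (obj.type) {
    ObjectType.TEXT -> obj.text ?: ""
    ObjectType.FILE -> obj.fileName ?: "file"
    ObjectType.IMAGE, ObjectType.VIDEO -> "thumbnail:${obj.fileName ?: "media"}"
}

/** The staging mode may be indicated by the first input (e.g. where the object is
 *  dropped on the second screen) or fall back to the system default. */
fun stagingMode(droppedOnCloudArea: Boolean, systemDefault: StagingMode): StagingMode =
    if (droppedOnCloudArea) StagingMode.CLOUD else systemDefault
```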
Step 103: and under the condition that the first interface in the first screen is switched to the target input interface, acquiring an object input interface matched with the first object type in the target input interface.
After the first objects are temporarily stored and the first object identifiers corresponding to the first objects are displayed in the second screen, the first objects can be reused in the target input interface either manually by the user or automatically by the system. A first object displayed in the second screen can be reused multiple times in different interfaces, and the user can switch the target input interface in the first screen.
The target input interface includes one or more object input interfaces. In the case that there is a single first object and the target input interface includes only one object input interface whose type matches that of the first object, the system can directly add the first object to that object input interface.
When the target input interface includes a plurality of object input interfaces, or includes only one object input interface but there are a plurality of first objects whose types match it, the correspondence between the object input interfaces and the first objects can be set manually by the user in advance, and the system acquires the first object corresponding to each object input interface in the target input interface according to the correspondence set by the user.
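For illustration, a minimal Kotlin sketch of the type matching in Step 103, under the assumption that each object input interface declares the object type it accepts; the names and the userMapping parameter are assumptions rather than part of the disclosure:

```kotlin
// Illustrative sketch; names are assumptions.
enum class ObjectType { TEXT, IMAGE, FILE, VIDEO }

data class StagedObject(val id: Long, val type: ObjectType)
data class InputInterface(val id: String, val accepts: ObjectType)

/** Step 103: for each input interface of the target input interface, find the staged
 *  objects whose type matches. A correspondence set in advance by the user (if any)
 *  overrides the automatic pairing. */
fun matchObjects(
    interfaces: List<InputInterface>,
    staged: List<StagedObject>,
    userMapping: Map<String, Long> = emptyMap()   // interface id -> chosen object id
): Map<InputInterface, List<StagedObject>> =
    interfaces.associateWith { itf ->
        userMapping[itf.id]
            ?.let { chosenId -> staged.filter { it.id == chosenId } }
            ?: staged.filter { it.type == itf.accepts }
    }
```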
Step 104: a first object is added to the object input interface.
For example, if the target input interface includes a text information input interface and only one piece of text information is displayed in the second screen, the system can directly add that piece of text information to the text information input interface of the target input interface.
For the object input interface and the first object, in the case of many-to-one, one-to-many, or many-to-many, the specific manner of adding the first object to the object input interface may be flexibly set by a person skilled in the art, and this is not specifically limited in the embodiment of the present application.
According to the object processing method provided in the embodiment of the application, a first input to a first object in a first interface displayed on the first screen is received; in response to the first input, the first object is temporarily stored and a first object identifier corresponding to the first object is displayed in the second screen; in the case that the first interface in the first screen is switched to a target input interface, an object input interface in the target input interface whose type matches that of the first object is acquired; and the first object is added to the object input interface. The user only needs to perform the first input to trigger the system to add the first object from the current interface to the target input interface, without performing complex interface-switching operations, so the operation is convenient and fast.
In an alternative embodiment, the step of temporarily storing the first object and displaying the first object identifier corresponding to the first object in the second screen in response to the first input comprises the following sub-steps:
the first substep: in response to a first input, determining a target temporary storage mode of a first object;
wherein, the target temporary storage mode includes: local temporary storage or cloud temporary storage;
and a second substep: under the condition that the target temporary storage mode is cloud temporary storage and the first object type is an image or a file, storing the first object to the cloud and locally temporarily storing a Uniform Resource Locator (URL) corresponding to the first object;
and a third substep: displaying a first object identifier corresponding to the first object in a second screen;
the first object identifier may be the first object itself or an identifier of the first object. For example: if the first object is a short section of text information, the first object identifier can be set as the short section of text information; for example: if the first object is a file, the first object identifier can be set as the identifier of the file; for another example: the first object is an image, the first object identification may be set as a thumbnail of the image, etc.
Fig. 3 is a schematic diagram of a second dual-screen interface in which the first objects are temporarily stored in the cloud. As shown in the second screen in fig. 3, each first object is currently stored in the cloud temporary storage area, and which first objects are included, as well as the type of each first object, can be seen from the first object identifiers displayed in the second screen.
And a fourth substep: and under the condition that the target temporary storage mode is cloud temporary storage and the first object type is a character, storing the first object to the cloud, and displaying a first object identifier corresponding to the first object in a second screen.
In the case where the first object is text information, the first object identification displayed in the second screen may be the text information itself.
If the text information displayed in the second screen is reused in the target input interface, the system downloads the corresponding text information from the cloud and fills it into the text information input interface in the target input interface.
If the image corresponding to an image identifier in the second screen, or the file corresponding to a file identifier, is reused in the target input interface, the system directly connects the URL of the image or file to the type-matched input interface in the target input interface, and the server can transmit the corresponding image data or file data directly to that input interface without the electronic device relaying the data.
When the target temporary storage mode is local temporary storage, the data corresponding to a first object of any type is stored locally on the electronic device.
This optional cloud temporary storage mode can save local storage space on the electronic device. In addition, when a first object is subsequently reused, the file or image does not need to be downloaded to the electronic device for local relay, which effectively relieves the relay pressure on the electronic device.
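A minimal Kotlin sketch of the cloud temporary storage branch described above; the uploadToCloud callback and the data structures are assumptions, not part of the disclosure:

```kotlin
// Illustrative sketch; the upload call and storage structures are assumptions.
enum class ObjectType { TEXT, IMAGE, FILE }

data class FirstObject(val type: ObjectType, val bytes: ByteArray, val text: String? = null)

sealed class StagedEntry {
    data class CloudUrl(val url: String) : StagedEntry()     // image/file: only the URL is kept locally
    data class CloudText(val text: String) : StagedEntry()   // text: shown as its own identifier
    data class Local(val obj: FirstObject) : StagedEntry()   // local staging keeps the data itself
}

fun stage(obj: FirstObject, useCloud: Boolean,
          uploadToCloud: (ByteArray) -> String): StagedEntry =
    when {
        !useCloud -> StagedEntry.Local(obj)
        obj.type == ObjectType.TEXT -> {
            uploadToCloud(obj.bytes)                          // text is also stored in the cloud
            StagedEntry.CloudText(obj.text ?: "")
        }
        else -> StagedEntry.CloudUrl(uploadToCloud(obj.bytes)) // image/file: keep the URL only
    }
```

When such an entry is later reused, the locally held URL can be handed to the type-matched input interface so that the server transfers the image or file data directly, as described above.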
In an alternative embodiment, in the case that the object input interface is plural, the step of adding the first object to the object input interface comprises the sub-steps of:
the first substep: respectively displaying first identifications corresponding to the object input interfaces in a first screen;
fig. 4 is a schematic diagram of a double-screen third interface, where the target input interface displayed on the first screen in fig. 4 includes three text object input interfaces, and first identifiers corresponding to the three text object input interfaces are "input box a", "input box B", and "input box C", respectively.
And a second substep: receiving a second input identifying the first object in the second screen;
the second input may be a click operation or a long press operation, etc. of the first object identifier.
And a third substep: displaying the identification list in a second screen in response to a second input;
the identification list comprises second identifications corresponding to the object input interfaces matched with the first object types, wherein the first identifications comprise the second identifications.
As shown in fig. 4, the user performs a click 1 operation on the text information identifier "age 20", and in response to the click 1 operation, the system displays the second identifier of each text object input interface in the target input interface whose type matches that of the selected text information identifier.
And a fourth substep: receiving a third input of the target identifier in each identifier list;
the third input may be a click operation or a long press operation of a target identifier in the list of identifiers displayed in the first screen, or the like.
And a fifth substep: and responding to the third input, and adding the first object to the object input interface corresponding to the target identification in the target input interface.
Optionally, adding the first object to the object input interface corresponding to the target identifier in the target input interface essentially establishes an interface connection between the data of the first object and the object input interface, so that the data of the first object can be transmitted through the interface. What is presented on the first screen is that the first object identifier corresponding to the first object is added in the object input interface.
As shown in fig. 4, after the user performs the click 2 operation on the target identifier "input box B" in the second screen, the system automatically adds the text information identifier "age 20" to the object input interface corresponding to "input box B" in the target input interface, and transmits the text information corresponding to the identifier "age 20" through the interface.
Sub-steps two to five describe the process of adding one first object to one object input interface in the target input interface. In actual implementation, the user can repeat this process to trigger the system to add the corresponding first object to each object input interface in the target input interface.
In this optional manner, the user manually establishes the correspondence between the object input interfaces and the first objects, which ensures the accuracy with which the first objects are added.
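As an illustration of sub-steps two to five (all names are assumptions), the second input yields the identifier list of type-matching interfaces and the third input resolves one of them:

```kotlin
// Illustrative sketch; names are assumptions.
data class InputInterface(val label: String, val accepts: String)   // e.g. "input box B", "text"
data class StagedObject(val identifier: String, val type: String)   // e.g. "age 20", "text"

/** Second input: tapping a first object identifier produces the identifier list of
 *  type-matching interfaces (the second identifiers). */
fun onSecondInput(obj: StagedObject, interfaces: List<InputInterface>): List<String> =
    interfaces.filter { it.accepts == obj.type }.map { it.label }

/** Third input: the user taps one label from that list, and the object is added to
 *  the corresponding object input interface. */
fun onThirdInput(obj: StagedObject, interfaces: List<InputInterface>,
                 targetLabel: String, add: (StagedObject, InputInterface) -> Unit) {
    interfaces.firstOrNull { it.label == targetLabel }?.let { add(obj, it) }
}
```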
In an alternative embodiment, the step of adding the first object to the object input interface in case the object input interface is single and the first object is multiple, comprises the sub-steps of:
the first substep: receiving a fourth input to an object input interface in the target input interface;
in this optional embodiment, the object input interface is a single one including: the target input interface only comprises a single object input interface, or the user input interface comprises a plurality of object input interfaces, but the user only selects one object input interface.
In actual implementation, the input to the object input interface in the target input interface can be performed by an input operation on the identifier corresponding to that object input interface in the target input interface. The fourth input includes at least one of: a click operation or a long-press operation on the identifier corresponding to the object input interface.
Fig. 5 is a schematic diagram of a dual-screen fourth interface, where the target input interface shown in the first screen in fig. 5 includes three text object input interfaces, and the user only performs a click 1 operation on the text object input interface of "name", so as to achieve the purpose of selecting a single text object input interface from multiple text object input interfaces.
And a second substep: receiving a fifth input of at least two target object identifications in the second screen;
the first object identification comprises target object identifications, and each target object identification corresponds to one target object.
The fifth input may include at least one of: a click operation or a long-press operation on a target object identifier. The fifth input is used to select the target object identifiers from the plurality of first object identifiers in the second screen and to set the order in which the target objects corresponding to the target object identifiers are added to the object input interface.
And a third substep: and responding to the fourth input and the fifth input, and sequentially adding each target object to the object input interface in the first screen.
As shown in the second screen interface in fig. 5, the user sequentially performs a click 2 operation on "my name is A" and a click 3 operation on "age is 20 years". The system then automatically adds "my name is A" and "age is 20 years", in that order, to the "name" text object input interface.
Sub-steps one to three describe the process of adding a plurality of target objects to the same object input interface. In actual implementation, the user can repeat this process to add a plurality of target objects to each object input interface in the target input interface.
In this optional manner, the user can flexibly select and combine the target objects added to the object input interface, which meets the user's personalized requirements.
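A minimal sketch of the ordered addition described above, assuming the selected target objects are simply appended to the single chosen interface in selection order (names are assumptions):

```kotlin
// Illustrative sketch; names are assumptions.
data class StagedObject(val identifier: String, val text: String)

/** Fourth input selects one object input interface; fifth input selects target objects
 *  in a chosen order. They are added to that interface in the same order. */
fun addInSelectionOrder(selectedInOrder: List<StagedObject>,
                        appendToInterface: (String) -> Unit) {
    selectedInOrder.forEach { appendToInterface(it.text) }
}

// e.g. click 2 on "my name is A", then click 3 on "age is 20 years": both land in the
// "name" text box, "my name is A" first and "age is 20 years" second.
```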
In an alternative embodiment, the step of adding the first object to the object input interface in case the object input interface is plural and the first object is plural, comprises the sub-steps of:
the first substep: sequentially receiving sixth input of N target object input interfaces in the target input interface;
wherein N is greater than or equal to 2. The sixth input may be a single-click operation, a double-click operation, a long-press operation, or the like on a target object input interface. The sixth input is used to select N target object input interfaces from the target input interface and to set the order of the N target object input interfaces.
And a second substep: responding to a sixth input, and displaying the second identification of each target object input interface in the target input interface;
and after receiving a sixth input to the target object input interface, the system correspondingly displays the second identifier of the target object input interface at a preset position.
When the second identifier of each target object input interface is displayed according to the sixth input, after the execution of the sixth input is completed, the second identifiers of each target object input interface on which the sixth input is executed may be displayed at the same time; it is also possible that during execution of the sixth input, each time a target object input interface is selected, its corresponding second identification is immediately displayed.
Fig. 6 is a schematic diagram of a fifth dual-screen interface. The target input interface in fig. 6 includes three text object input interfaces. The user sequentially performs a click 1 operation on the "name" text object input interface, a click 2 operation on the "age" text object input interface, and a click 3 operation on the "address" text object input interface, and the system displays the corresponding second identifiers at the ends of the three text information boxes, that is, the text object input interfaces. The second identifier corresponding to the "name" text object input interface is "input box A", the second identifier corresponding to the "age" text object input interface is "input box B", and the second identifier corresponding to the "address" text object input interface is "input box C".
And a third substep: receiving a seventh input to the N target objects in the second screen in sequence;
the seventh input is used for selecting N target object identifications from the first object identifications displayed in the second screen and setting text object input interfaces corresponding to the N target object identifications respectively.
And a fourth substep: responding to a seventh input, and respectively displaying second identifications associated with the N target object identifications in a second screen according to the sequence that the N target object identifications receive the seventh input and the sequence that the N target object input interfaces receive a sixth input;
for example: and the target object identifiers of the executed input in the seventh input are A, B and C in sequence, and the second identifiers of the target object input interfaces of the executed input in the sixth input are D, E and F in sequence, so that the association relationship between A and D, the association relationship between B and E and the association relationship between C and F are established.
When the second identifiers associated with the target object identifiers are displayed according to the seventh input, after the seventh input is executed, the second identifiers associated with the target object identifiers, on which the seventh input is executed, may be displayed at the same time; it is also possible that during the execution of the seventh input, the associated second identifier is displayed immediately upon selection of a target object identifier.
And a fifth substep: and respectively adding the N objects corresponding to the N target object identifications to the target object input interfaces corresponding to the associated second identifications.
As shown in fig. 6, the user sequentially clicks the three text object input interfaces "name", "age", and "address" in the first screen, and sequentially clicks the three text information identifiers "my name is A", "age is 20 years", and "live at No. 8 Mark Street" in the second screen. The system then automatically adds "my name is A" to the "name" text object input interface, "age is 20 years" to the "age" text object input interface, and "live at No. 8 Mark Street" to the "address" text object input interface.
This optional manner of adding associated target objects to the object input interfaces enables multiple objects to be input into multiple interfaces at the same time, and the operation is convenient and fast.
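The association described above amounts to pairing the k-th interface selected by the sixth input with the k-th object selected by the seventh input; a minimal Kotlin sketch under assumed names:

```kotlin
// Illustrative sketch; names are assumptions.
data class TargetInterface(val secondIdentifier: String)   // e.g. "input box A"
data class TargetObject(val identifier: String)            // e.g. "my name is A"

/** Sixth input picks N interfaces in order; seventh input picks N objects in order.
 *  The k-th selected object is added to the k-th selected interface. */
fun associateByOrder(interfacesInOrder: List<TargetInterface>,
                     objectsInOrder: List<TargetObject>,
                     add: (TargetObject, TargetInterface) -> Unit) {
    interfacesInOrder.zip(objectsInOrder).forEach { (itf, obj) -> add(obj, itf) }
}
```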
In an alternative embodiment, where the first object is of a plurality of types and the object input interface is of a plurality of types, the manner in which the first object is added to the object input interface is as follows:
searching a first number of object input interfaces of each object input interface type in the target input interface aiming at each object input interface type;
in the case that the first number is 1, determining a second number of target objects matched with the type in the first objects;
and in the case that the second number is 1, adding the target object to the object input interface with the matched type in the target input interface.
In case the second number is larger than 1 or the first number is larger than 1, automatic addition of the first object to the object input interface of the type is prohibited.
In this optional manner, when the specific condition is met, the first object is added to the object input interface without the user manually performing the adding operation, which is more intelligent.
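A minimal sketch of the automatic addition rule above, assuming the counts are checked per object type (all names are assumptions):

```kotlin
// Illustrative sketch; names are assumptions.
enum class ObjectType { TEXT, IMAGE, FILE }
data class InputInterface(val accepts: ObjectType)
data class StagedObject(val type: ObjectType)

/** For each interface type: if exactly one interface of that type exists and exactly
 *  one staged object matches it, add automatically; otherwise do not auto-add. */
fun autoAdd(interfaces: List<InputInterface>, staged: List<StagedObject>,
            add: (StagedObject, InputInterface) -> Unit) {
    for (type in ObjectType.values()) {
        val itfs = interfaces.filter { it.accepts == type }   // first number
        val objs = staged.filter { it.type == type }          // second number
        if (itfs.size == 1 && objs.size == 1) add(objs[0], itfs[0])
    }
}
```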
In an alternative embodiment, in the case where the first object is of multiple types and the object input interface is of multiple types, the manner of adding the first object to the object input interface is as follows:
searching a first number of object input interfaces of the object input interface type in a target input interface aiming at each object input interface type;
and in the case that the first number is 1 and ninth input of a target object matched with the type in the first object is received, adding the target object into the object input interface of the type.
Fig. 7 is a schematic diagram of a dual-screen sixth interface, where the target input interface shown in the first screen interface in fig. 7 includes three text object input interfaces, an image input interface, and a file input interface. The user may perform a ninth input, e.g., a click operation, on "Picture 1" shown in the second screen interface, and the system then automatically adds Picture 1 to the image input interface displayed in the first screen interface. The user may perform a ninth input, such as a click operation, on the "personal information" file shown in the second screen interface, and the system then automatically adds the "personal information" file to the file input interface displayed in the first screen interface.
In this optional manner of adding the first object to the object input interface, when only one object input interface of a certain type exists in the target input interface, the user only needs to select a type-matched target object on the second screen and does not need to manually select the target object input interface, so the operation is convenient, fast, and intelligent.
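A minimal sketch of this variant under assumed names: the count check is applied only on the interface side, and the ninth input on a matching object completes the addition:

```kotlin
// Illustrative sketch; names are assumptions.
enum class ObjectType { TEXT, IMAGE, FILE }
data class InputInterface(val accepts: ObjectType)
data class StagedObject(val identifier: String, val type: ObjectType)

/** Ninth input: the user taps a staged object; if the target input interface holds
 *  exactly one interface of that object's type, the object is added to it. */
fun onNinthInput(tapped: StagedObject, interfaces: List<InputInterface>,
                 add: (StagedObject, InputInterface) -> Unit) {
    val sameType = interfaces.filter { it.accepts == tapped.type }
    if (sameType.size == 1) add(tapped, sameType[0])   // e.g. "Picture 1" -> the only image box
}
```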
In an optional embodiment, after the first object is added to the object input interface, the following steps may be further included:
the method comprises the following steps: receiving an eighth input identifying the first object in the second screen;
the eighth input includes at least one of: a long press operation, a tap operation, a swipe operation, or performing a particular gesture on the first object. The particular gesture may be a shape that is repeated 8 on the first object, or a cross is made on the first object, etc. The eighth input is used to trigger the system to clear the first object.
Step two: and in response to the eighth input, clearing the first object corresponding to the temporarily stored first object identifier, and canceling the display of the first object identifier in the second screen.
After the first object which is temporarily stored is cleared, the storage space occupied by the first object can be released.
The object processing method provided in this optional embodiment can effectively manage the temporarily stored first objects, promptly release the storage space they occupy, and improve the effective utilization of the storage space.
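A minimal sketch of the clearing step under assumed names: the eighth input removes the temporarily stored data and hides its identifier on the second screen:

```kotlin
// Illustrative sketch; names are assumptions.
class StagingArea {
    private val staged = mutableMapOf<String, ByteArray>()   // identifier -> temporarily stored data

    fun put(identifier: String, data: ByteArray) { staged[identifier] = data }

    /** Eighth input on an identifier: clear the stored object and stop showing its
     *  identifier on the second screen, releasing the space it occupied. */
    fun onEighthInput(identifier: String, hideIdentifier: (String) -> Unit) {
        staged.remove(identifier)
        hideIdentifier(identifier)
    }
}
```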
In the object processing method provided in the embodiment of the present application, the execution subject may be an object processing apparatus, or a control module in the object processing apparatus for executing the object processing method. In the embodiment of the present application, an object processing apparatus executing the object processing method is taken as an example to describe the object processing apparatus provided in the embodiment of the present application.
Fig. 8 is a block diagram of an object processing apparatus for implementing an embodiment of the present application.
The object processing apparatus 300 of the embodiment of the present application is applied to an electronic device including a first screen and a second screen, wherein the apparatus 300 includes:
a first receiving module 301, configured to receive a first input to a first object in a first interface of the first screen display;
a display module 302, configured to, in response to the first input, temporarily store the first object and display a first object identifier corresponding to the first object in the second screen;
an obtaining module 303, configured to obtain an object input interface, which is of a type that matches the first object, in a target input interface when the first interface in the first screen is switched to the target input interface;
an adding module 304, configured to add the first object to the object input interface.
Optionally, the display module includes:
a first sub-module, configured to determine a target staging mode of the first object in response to the first input, where the target staging mode includes: local temporary storage or cloud temporary storage;
the second sub-module is used for storing the first object to the cloud and temporarily storing the uniform resource locator corresponding to the first object locally under the condition that the target temporary storage mode is cloud temporary storage and the type of the first object is an image or a file;
the third sub-module is used for displaying a first object identifier corresponding to the first object in the second screen;
and the fourth sub-module is used for storing the first object to the cloud under the condition that the target temporary storage mode is cloud temporary storage and the type of the first object is characters, and displaying a first object identifier corresponding to the first object in the second screen.
Optionally, when there are a plurality of object input interfaces, the adding module includes:
the first display submodule is used for displaying a first identifier corresponding to each object input interface in the target input interface;
a first receiving submodule for receiving a second input of the first object identifier in the second screen;
a second display sub-module, configured to display, in response to the second input, an identifier list in the second screen, where the identifier list includes second identifiers corresponding to object input interfaces matching the first object type, and the first identifiers include the second identifiers;
the second receiving submodule is used for receiving third input of the target identification in the identification list;
and the first adding submodule is used for responding to the third input and adding the first object to an object input interface corresponding to the target identification in the target input interface.
Optionally, in a case that the object input interface is single and the first object is multiple, the adding module includes:
the third receiving submodule is used for receiving a fourth input of the object input interface in the target input interface;
a fourth receiving sub-module, configured to receive a fifth input of at least two target object identifiers in the second screen, where the first object identifier includes the target object identifiers, and each target object identifier corresponds to one target object;
and the second adding submodule is used for responding to the fourth input and the fifth input and sequentially adding the at least two target objects into the object input interface in the target input interface.
Optionally, when the number of the object input interfaces is multiple and the number of the first objects is multiple, the adding module includes:
the fifth receiving submodule is used for sequentially receiving sixth input of N target object input interfaces in the target input interface, wherein N is more than or equal to 2;
a third display submodule, configured to display, in response to the sixth input, a second identifier of each target object input interface in the target input interface;
a sixth receiving submodule, configured to sequentially receive a seventh input of N target object identifiers in the second screen, where the first object identifier includes the N target object identifiers;
a fourth display sub-module, configured to, in response to the seventh input, respectively display, in the second screen, the second identifiers associated with the N target object identifiers according to a sequence in which the N target object identifiers receive the seventh input and a sequence in which the N target object input interfaces receive the sixth input;
and the association submodule is used for respectively adding N objects corresponding to the N target object identifications to the target object input interface corresponding to the associated second identification.
Optionally, when the first object is of multiple types and the object input interface is of multiple types, the adding module includes:
the searching submodule is used for searching the first number of the object input interfaces of each object input interface type in the target input interface aiming at each object input interface type;
a quantity determination submodule, configured to determine, if the first quantity is 1, a second quantity of target objects, which are in the first object and match the type;
and the third adding submodule is used for adding the target object to the object input interface with the matched type in the target input interface under the condition that the second quantity is 1.
Optionally, the apparatus further comprises:
a second receiving module, configured to receive an eighth input of the first object identifier in the second screen after the adding module adds the first object to the object input interface;
and the clearing module is used for responding to the eighth input, clearing the first object corresponding to the temporarily stored first object identifier and canceling the display of the first object identifier in the second screen.
According to the object processing device provided by the embodiment of the application, a first input to a first object in a first interface displayed on the first screen is received; in response to the first input, the first object is temporarily stored and a first object identifier corresponding to the first object is displayed in the second screen; in the case that the first interface in the first screen is switched to a target input interface, an object input interface in the target input interface whose type matches that of the first object is acquired; and the first object is added to the object input interface. The user only needs to perform the first input to trigger the system to add the first object from the current interface to the target input interface, without performing complex interface-switching operations, so the operation is convenient and fast.
The object processing apparatus shown in fig. 8 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The object processing apparatus in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The object processing apparatus shown in fig. 8 of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7; details are not described here again to avoid repetition.
Optionally, referring to fig. 9, an electronic device 400 is further provided in this embodiment of the present application, and includes a processor 401, a memory 402, and a program or an instruction stored in the memory 402 and executable on the processor 401, where the program or the instruction is executed by the processor 401 to implement each process of the object processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and the like. The display unit 506 includes a first screen and a second screen.
Those skilled in the art will appreciate that the electronic device 500 may further include a power supply (e.g., a battery) for supplying power to various components, and the power supply may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
Wherein, the user input unit 507 is configured to receive a first input to a first object within a first interface of the first screen display;
the processor 510 is configured to, in response to the first input, temporarily store the first object, and invoke the display unit 506 to display a first object identifier corresponding to the first object in the second screen; under the condition that the first interface in the first screen is switched to a target input interface, acquiring an object input interface matched with the first object type in the target input interface; adding the first object to the object input interface.
Optionally, the processor 510 is further configured to determine a target temporary storage mode of the first object in response to the first input, where the target temporary storage mode includes: local temporary storage or cloud temporary storage; when the target temporary storage mode is cloud temporary storage and the first object type is an image or a file, calling the memory 509 to store the first object to the cloud and temporarily store the uniform resource locator corresponding to the first object locally;
a display unit 506, specifically configured to display a first object identifier corresponding to the first object in the second screen;
the processor 510 is further configured to, when the target temporary storage mode is cloud temporary storage and the first object type is a text, store the first object to the cloud, and call the display unit 506 to display a first object identifier corresponding to the first object in the second screen.
Optionally, when the number of the object input interfaces is multiple, the display unit 506 is configured to display a first identifier corresponding to each object input interface in the target input interface;
a user input unit 507 for receiving a second input for the first object identification in the second screen;
a display unit 506, configured to display, in response to the second input, an identifier list in the second screen, where the identifier list includes second identifiers corresponding to object input interfaces matching the first object type, and the first identifiers include the second identifiers;
a user input unit 507, further configured to receive a third input of a target identifier in the identifier list;
a processor 510, configured to add the first object to an object input interface corresponding to the target identifier in the target input interface in response to the third input.
Optionally, in the case that the object input interface is single and the first object is multiple,
a user input unit 507, configured to receive a fourth input to the object input interface in the target input interface, and to receive a fifth input to at least two target object identifiers in the second screen, where the first object identifiers include the target object identifiers and each target object identifier corresponds to one target object;
a processor 510, configured to sequentially add the at least two target objects to the object input interfaces in the target input interface in response to the fourth input and the fifth input.
Optionally, in the case that the object input interface is multiple and the first object is multiple,
a user input unit 507, configured to sequentially receive sixth inputs to N target object input interfaces in the target input interface, where N is greater than or equal to 2;
a display unit 506, configured to display, in response to the sixth input, a second identifier of each target object input interface in the target input interface;
a user input unit 507, further configured to sequentially receive a seventh input of N target object identifiers in the second screen, where the first object identifiers include the N target object identifiers;
a display unit 506, further configured to, in response to the seventh input, respectively display the second identifiers associated with the N target object identifiers in the second screen according to a sequence in which the N target object identifiers receive the seventh input and a sequence in which the N target object input interfaces receive the sixth input;
a processor 510, configured to add N objects corresponding to the N target object identifiers to the associated target object input interfaces corresponding to the second identifier, respectively.
Optionally, in a case that the first object is of multiple types and the object input interfaces are of multiple types, the processor 510 is configured to, for each object input interface type, find a first number of object input interfaces of each object input interface type in the target input interface; determining a second number of target objects in the first objects which are matched with the type under the condition that the first number is 1; and adding the target object to the object input interface with the matched type in the target input interface under the condition that the second number is 1.
Optionally, the user input unit 507 is further configured to receive an eighth input to the first object identifier in the second screen after the processor 510 adds the first object to the object input interface;
the processor 510 is further configured to clear the first object corresponding to the temporarily stored first object identifier and cancel displaying the first object identifier in the second screen in response to the eighth input.
According to the electronic device provided by the embodiment of the application, a first input to a first object in a first interface displayed on the first screen is received; in response to the first input, the first object is temporarily stored and a first object identifier corresponding to the first object is displayed in the second screen; in the case that the first interface in the first screen is switched to a target input interface, an object input interface in the target input interface whose type matches that of the first object is acquired; and the first object is added to the object input interface. The user only needs to perform the first input to trigger the system to add the first object from the current interface to the target input interface, without performing complex interface-switching operations, so the operation is convenient and fast.
It should be understood that in the embodiment of the present application, the input Unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042, and the Graphics processor 5041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 507 includes a touch panel 5071 and other input devices 5072. A touch panel 5071, also referred to as a touch screen. The touch panel 5071 may include two parts of a touch detection device and a touch controller. Other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in further detail herein. The memory 509 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems. Processor 510 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The embodiments of the present application further provide a readable storage medium. The readable storage medium stores a program or an instruction which, when executed by a processor, implements each process of the above object processing method embodiments and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application further provide a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above object processing method embodiments and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, and may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. An object processing method applied to an electronic device comprising a first screen and a second screen, the method comprising:
receiving a first input to a first object within a first interface displayed on the first screen;
in response to the first input, temporarily storing the first object and displaying a first object identifier corresponding to the first object in the second screen;
in a case where the first interface in the first screen is switched to a target input interface, acquiring, in the target input interface, an object input interface matching the type of the first object;
adding the first object to the object input interface;
in a case where there are a plurality of object input interfaces, the adding the first object to the object input interface comprises:
displaying a first identifier corresponding to each object input interface in the target input interface;
receiving a second input on the first object identifier in the second screen;
in response to the second input, displaying an identifier list in the second screen, wherein the identifier list comprises second identifiers corresponding to object input interfaces matching the type of the first object, and the first identifiers comprise the second identifiers;
receiving a third input of a target identifier in the identifier list;
in response to the third input, adding the first object to an object input interface corresponding to the target identifier in the target input interface.
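Purely as an illustrative sketch of the identifier-list selection recited above, and not as claim language, the second and third inputs could be modelled as follows; `buildIdentifierList` and `addBySelection` are invented names, and the types come from the earlier sketches.

```kotlin
// Second input: build the identifier list shown in the second screen — only the
// fields whose type matches the staged object ("second identifiers").
fun buildIdentifierList(fields: List<InputField>, obj: StagedObject): List<InputField> =
    fields.filter { it.accepts == obj.type }

// Third input: the user picks one target identifier from that list, and the object
// is added to the corresponding field of the target input interface.
fun addBySelection(
    candidates: List<InputField>,
    picked: InputField,
    obj: StagedObject
): Pair<InputField, StagedObject>? =
    if (picked in candidates) picked to obj else null
```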
2. The method of claim 1, wherein the step of temporarily storing the first object and displaying a first object identifier corresponding to the first object in the second screen in response to the first input comprises:
in response to the first input, determining a target staging mode for the first object, wherein the target staging mode comprises: local temporary storage or cloud temporary storage;
in a case where the target temporary storage mode is cloud temporary storage and the type of the first object is an image or a file, storing the first object to the cloud and locally temporarily storing a uniform resource locator corresponding to the first object;
displaying a first object identifier corresponding to the first object in the second screen;
and in a case where the target temporary storage mode is cloud temporary storage and the type of the first object is characters, storing the first object to the cloud, and displaying a first object identifier corresponding to the first object in the second screen.
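A hedged sketch of the staging-mode decision described in claim 2: `StagingMode`, `CloudStore`, `LocalCache`, `uploadAndGetUrl`, and `stageObject` are assumptions for illustration, not names from the embodiment, and `ObjectType` comes from the earlier sketch.

```kotlin
enum class StagingMode { LOCAL, CLOUD }

// Hypothetical storage backends for illustration.
interface CloudStore { fun uploadAndGetUrl(payload: ByteArray): String }
class LocalCache { val entries = mutableMapOf<String, Any>() }

// Decide where the staged object lives: for cloud staging, images/files are uploaded
// and only their URL is kept locally; character/text content is uploaded with no local copy.
fun stageObject(
    id: String, type: ObjectType, payload: ByteArray,
    mode: StagingMode, cloud: CloudStore, local: LocalCache
) {
    when {
        mode == StagingMode.CLOUD && (type == ObjectType.IMAGE || type == ObjectType.FILE) ->
            local.entries[id] = cloud.uploadAndGetUrl(payload)   // keep only the URL locally
        mode == StagingMode.CLOUD && type == ObjectType.TEXT ->
            cloud.uploadAndGetUrl(payload)                       // no local copy kept
        else ->
            local.entries[id] = payload                          // local temporary storage
    }
    // In every branch, the first object identifier would then be displayed in the second screen.
}
```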
3. The method of claim 1, wherein, in a case where there is a single object input interface and there are a plurality of first objects, the step of adding the first object to the object input interface comprises:
receiving a fourth input to the object input interface in the target input interface;
receiving a fifth input on at least two target object identifiers in the second screen, wherein the first object identifier includes the target object identifiers, and each target object identifier corresponds to one target object;
in response to the fourth input and the fifth input, sequentially adding the at least two target objects to the object input interface in the target input interface.
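Purely as a sketch of claim 3's sequential addition, under the same hypothetical types as above (`addSequentially` is an invented name): the selected objects are appended to the single matching field in selection order.

```kotlin
// Fourth input selects the single field; fifth input selects at least two staged objects.
// The objects are added to the field in the order they were selected.
fun addSequentially(field: InputField, selected: List<StagedObject>): List<Pair<InputField, StagedObject>> {
    require(selected.size >= 2) { "the fifth input selects at least two target object identifiers" }
    return selected.map { field to it }
}
```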
4. The method of claim 1, wherein, in a case where there are a plurality of object input interfaces and a plurality of first objects, the step of adding the first object to the object input interface comprises:
sequentially receiving sixth inputs to N target object input interfaces in the target input interface, wherein N is greater than or equal to 2;
in response to the sixth input, displaying a second identification of each of the target object input interfaces in the target input interface;
receiving a seventh input of N target object identifiers in the second screen in sequence, wherein the first object identifier includes the N target object identifiers;
in response to the seventh input, displaying, in the second screen, the second identifiers associated with the N target object identifiers, wherein the association follows the order in which the N target object identifiers receive the seventh input and the order in which the N target object input interfaces receive the sixth input;
and adding the N objects corresponding to the N target object identifiers to the target object input interfaces corresponding to the associated second identifiers, respectively.
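As a minimal sketch of the order-based association in claim 4 (`pairByOrder` is an invented name, types as in the earlier sketches): the i-th object identifier selected by the seventh input is associated with the i-th interface selected by the sixth input.

```kotlin
// Pair the N staged objects with the N target fields by selection order (i-th with i-th).
fun pairByOrder(
    fieldsInOrder: List<InputField>,
    objectsInOrder: List<StagedObject>
): Map<InputField, StagedObject> {
    require(fieldsInOrder.size == objectsInOrder.size && fieldsInOrder.size >= 2) {
        "claim 4 assumes N >= 2 target object input interfaces and N target object identifiers"
    }
    return fieldsInOrder.zip(objectsInOrder).toMap()
}
```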
5. The method according to claim 1, wherein, in a case where the first object is of a plurality of types and the object input interface is of a plurality of types, the step of adding the first object to the object input interface comprises:
for each object input interface type, finding a first number of object input interfaces of that type in the target input interface;
in a case where the first number is 1, determining a second number of target objects, among the first objects, that match that type;
and in a case where the second number is 1, adding the target object to the object input interface of the matched type in the target input interface.
6. The method of claim 1, wherein after the step of adding the first object to the object input interface, the method further comprises:
receiving an eighth input on the first object identifier in the second screen;
and in response to the eighth input, clearing the temporarily stored first object corresponding to the first object identifier, and canceling display of the first object identifier in the second screen.
7. An object processing apparatus applied to an electronic device including a first screen and a second screen, the apparatus comprising:
the first receiving module is used for receiving a first input of a first object in a first interface displayed by the first screen;
the display module is used for responding to the first input, temporarily storing the first object and displaying a first object identifier corresponding to the first object in the second screen;
the acquisition module is used for acquiring an object input interface matched with the first object type in the target input interface under the condition that the first interface in the first screen is switched to the target input interface;
an adding module for adding the first object to the object input interface;
in a case where the object input interface is plural, the adding module includes:
the first display submodule is used for displaying a first identifier corresponding to each object input interface in the target input interface;
a first receiving submodule for receiving a second input of the first object identifier in the second screen;
a second display sub-module, configured to display, in response to the second input, an identifier list in the second screen, where the identifier list includes second identifiers corresponding to object input interfaces matching the first object type, and the first identifiers include the second identifiers;
the second receiving submodule is used for receiving third input of the target identification in the identification list;
and the first adding submodule is used for responding to the third input and adding the first object to an object input interface corresponding to the target identification in the target input interface.
8. The apparatus of claim 7, wherein the display module comprises:
a first sub-module, configured to determine a target staging mode of the first object in response to the first input, where the target staging mode includes: local temporary storage or cloud temporary storage;
the second sub-module is used for storing the first object to the cloud and temporarily storing the uniform resource locator corresponding to the first object locally under the condition that the target temporary storage mode is cloud temporary storage and the type of the first object is an image or a file;
the third sub-module is used for displaying a first object identifier corresponding to the first object in the second screen;
and the fourth sub-module is used for storing the first object to the cloud under the condition that the target temporary storage mode is cloud temporary storage and the type of the first object is characters, and displaying a first object identifier corresponding to the first object in the second screen.
9. The apparatus of claim 7, wherein in the case that the object input interface is plural, the adding module comprises:
the first display submodule is used for displaying a first identifier corresponding to each object input interface in the target input interface;
a first receiving submodule for receiving a second input of the first object identifier in the second screen;
a second display sub-module, configured to display, in response to the second input, an identifier list in the second screen, where the identifier list includes second identifiers corresponding to object input interfaces matching the first object type, and the first identifiers include the second identifiers;
the second receiving submodule is used for receiving third input of the target identification in the identification list;
and the first adding submodule is used for responding to the third input and adding the first object to an object input interface corresponding to the target identification in the target input interface.
CN202011348838.1A 2020-11-26 2020-11-26 Object processing method and device Active CN112416230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011348838.1A CN112416230B (en) 2020-11-26 2020-11-26 Object processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011348838.1A CN112416230B (en) 2020-11-26 2020-11-26 Object processing method and device

Publications (2)

Publication Number Publication Date
CN112416230A CN112416230A (en) 2021-02-26
CN112416230B true CN112416230B (en) 2022-04-15

Family

ID=74842555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011348838.1A Active CN112416230B (en) 2020-11-26 2020-11-26 Object processing method and device

Country Status (1)

Country Link
CN (1) CN112416230B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104267804A (en) * 2014-09-15 2015-01-07 联想(北京)有限公司 Information input method and electronic device
CN107861668A (en) * 2017-11-30 2018-03-30 努比亚技术有限公司 Take down notes storage method, terminal and storage medium
CN107992371A (en) * 2017-11-30 2018-05-04 努比亚技术有限公司 Replicate method of attaching, device and computer-readable recording medium
CN109710130A (en) * 2018-12-27 2019-05-03 维沃移动通信有限公司 A kind of display methods and terminal
CN110007835A (en) * 2019-03-27 2019-07-12 维沃移动通信有限公司 A kind of method for managing object and mobile terminal
CN110134310A (en) * 2019-05-23 2019-08-16 网易(杭州)网络有限公司 Content share method and device, electronic equipment and storage medium
EP3671405A1 (en) * 2018-12-17 2020-06-24 Beijing Xiaomi Mobile Software Co., Ltd. Method for operating a smart device, apparatus and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104267804A (en) * 2014-09-15 2015-01-07 联想(北京)有限公司 Information input method and electronic device
CN107861668A (en) * 2017-11-30 2018-03-30 努比亚技术有限公司 Take down notes storage method, terminal and storage medium
CN107992371A (en) * 2017-11-30 2018-05-04 努比亚技术有限公司 Replicate method of attaching, device and computer-readable recording medium
EP3671405A1 (en) * 2018-12-17 2020-06-24 Beijing Xiaomi Mobile Software Co., Ltd. Method for operating a smart device, apparatus and storage medium
CN109710130A (en) * 2018-12-27 2019-05-03 维沃移动通信有限公司 A kind of display methods and terminal
CN110007835A (en) * 2019-03-27 2019-07-12 维沃移动通信有限公司 A kind of method for managing object and mobile terminal
CN110134310A (en) * 2019-05-23 2019-08-16 网易(杭州)网络有限公司 Content share method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112416230A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN112486444B (en) Screen projection method, device, equipment and readable storage medium
CN112399006B (en) File sending method and device and electronic equipment
CN111913616A (en) Application program management method and device and electronic equipment
CN113179205B (en) Image sharing method and device and electronic equipment
CN111857460A (en) Split screen processing method, split screen processing device, electronic equipment and readable storage medium
CN112698762B (en) Icon display method and device and electronic equipment
CN112399010B (en) Page display method and device and electronic equipment
CN113849092A (en) Content sharing method and device and electronic equipment
CN113703634A (en) Interface display method and device
CN113590008A (en) Chat message display method and device and electronic equipment
CN113253883A (en) Application interface display method and device and electronic equipment
CN112416199A (en) Control method and device and electronic equipment
CN112416230B (en) Object processing method and device
CN113872849B (en) Message interaction method and device and electronic equipment
CN112291412B (en) Application program control method and device and electronic equipment
CN115291778A (en) Display control method and device, electronic equipment and readable storage medium
CN114489414A (en) File processing method and device
CN112783998A (en) Navigation method and electronic equipment
CN112698771A (en) Display control method, display control device, electronic equipment and storage medium
CN112286611A (en) Icon display method and device and electronic equipment
CN112035032B (en) Expression adding method and device
CN113037618B (en) Image sharing method and device
CN113873081B (en) Method and device for sending associated image and electronic equipment
CN114237979A (en) Data switching method and device and electronic equipment
CN117082056A (en) File sharing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant