CN115657911A - Object processing method and device - Google Patents

Object processing method and device

Info

Publication number
CN115657911A
Authority
CN
China
Prior art keywords
electronic device
input
electronic
processing
present application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211330380.6A
Other languages
Chinese (zh)
Inventor
Chen Chen (陈琛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211330380.6A
Publication of CN115657911A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an object processing method and device, and belongs to the field of computers. The object processing method is applied to a first electronic device and includes: receiving a first input of a user under the condition that the first electronic device and a second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected; determining a master device from the first electronic device and the second electronic device according to the first input; under the condition that the first electronic device is determined to be the master device, responding to the first input, and performing target processing on the first object and the second object to obtain a third object; and displaying the third object.

Description

Object processing method and device
Technical Field
The application belongs to the field of computers, and particularly relates to an object processing method and device.
Background
With the rapid development of electronic devices, people use them in work and daily life to record text, take pictures, and the like, and the recorded text and the taken pictures may differ from one electronic device to another.
At present, when a user needs to splice or merge resources located on different electronic devices, the user must first send a resource from one electronic device (for example, electronic device A) to another electronic device (for example, electronic device B), and then splice or merge the resource sent by electronic device A with the resource on electronic device B. The operation is cumbersome, and the user experience is poor.
Disclosure of Invention
The embodiments of the present application aim to provide an object processing method and device, which solve the prior-art problem that operating on resources located on different electronic devices is complex.
In a first aspect, an embodiment of the present application provides an object processing method, where the method is applied to a first electronic device, and the method includes:
receiving a first input of a user under the condition that the first electronic device and a second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected;
determining a master device from the first electronic device and the second electronic device according to the first input;
under the condition that the first electronic device is determined to be the master device, responding to the first input, and performing target processing on the first object and the second object to obtain a third object;
and displaying the third object.
In a second aspect, an embodiment of the present application provides an object processing apparatus, where the apparatus is applied to a first electronic device, and the apparatus includes:
the receiving module is used for receiving a first input of a user under the condition that the first electronic device and the second electronic device establish communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected;
a determining module, configured to determine, according to the first input, a master device from the first electronic device and the second electronic device;
the processing module is used for responding to the first input and performing target processing on the first object and the second object to obtain a third object under the condition that the first electronic device is determined to be the master device;
and the display module is used for displaying the third object.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
The object processing method provided by the embodiments of the present application may be applied to a first electronic device. Under the condition that the first electronic device and a second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected, a master device is determined from the first electronic device and the second electronic device according to a first input of a user. In a case that the master device is the first electronic device, target processing may be performed directly on the first object and the second object in response to the first input to obtain a third object, and the third object is displayed. In this way, resources on different electronic devices can be subjected to target processing through only the first input, which reduces the steps required when a user processes resources on different electronic devices; the operation is simple, and the user experience is improved.
Drawings
FIG. 1 is a flow diagram illustrating an object processing method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of a communication connection between a first electronic device and a second electronic device provided by an exemplary embodiment;
FIG. 3 is a schematic illustration of a first gesture input provided by an exemplary embodiment;
FIG. 4 is a schematic diagram of a grab function provided by an exemplary embodiment;
FIG. 5 is a first schematic diagram of selecting a first object and a second object provided by an exemplary embodiment;
FIG. 6 is a second schematic diagram of selecting a first object and a second object provided by an exemplary embodiment;
FIG. 7 is a third schematic diagram of selecting a first object and a second object provided by an exemplary embodiment;
FIG. 8 is a first schematic diagram of a third object provided by an exemplary embodiment;
FIG. 9 is a second schematic diagram of a third object provided by an exemplary embodiment;
FIG. 10 is a third schematic diagram of a third object provided by an exemplary embodiment;
FIG. 11 is a schematic structural diagram of an object processing apparatus according to an exemplary embodiment;
FIG. 12 is a schematic diagram of an electronic device according to an exemplary embodiment;
FIG. 13 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way may be interchanged under appropriate circumstances, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of such objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
As described in the background section, to solve the above problem, embodiments of the present application provide an object processing method, an apparatus, an electronic device, and a storage medium. The method may be applied to a first electronic device. Under the condition that the first electronic device and a second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected, a master device is determined from the first electronic device and the second electronic device according to a first input of a user. When the master device is the first electronic device, the first electronic device may, in response to the first input, directly perform target processing on the first object and the second object to obtain a third object, and display the third object. In this way, resources on different electronic devices can be subjected to target processing through only the first input, which reduces the steps required when the user processes resources on different electronic devices; the operation is simple, and the user experience is improved.
The object processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings by using specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of an object processing method provided in an embodiment of the present application, where an execution subject of the object processing method may be a first electronic device, and it should be noted that the execution subject does not constitute a limitation to the embodiment of the present application.
In the embodiment of the present application, the first electronic device may be, but is not limited to, a Personal Computer (PC), a smart phone, a tablet Computer, a Personal Digital Assistant (PDA), or the like.
As shown in fig. 1, the object processing method provided in the embodiment of the present application may include steps 110 to 140.
Step 110, receiving a first input of a user when the first electronic device and the second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected.
The first electronic device and the second electronic device may be two electronic devices which need to operate the resource therein. For example, the first electronic device may be cell phone a and the second electronic device may be cell phone B.
In an example, referring to fig. 2, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, the communication connection between the first electronic device and the second electronic device may be established by mobile phone A and mobile phone B being in the same network environment.
The first object may be an object in the first electronic device, and may be, for example, text information saved in the first electronic device, a picture, a document file, or the like.
The second object may be an object in the second electronic device, and may be, for example, text information saved in the second electronic device, a picture, a document file, or the like.
In some embodiments of the present application, the first object and the second object may be objects of the same type, for example, if the first object is textual information, the second object may also be textual information. If the first object is a picture, the second object may also be a picture, so that the subsequent target processing on the first object and the second object is facilitated.
The first input may be used to determine the master device from the first electronic device and the second electronic device, and to trigger target processing of the first object and the second object to obtain a third object.
In some embodiments of the present application, the first input may specifically be a gesture input, the first input may include at least two fingers, and the first input may be an input on the first electronic device and the second electronic device.
In some embodiments of the present application, the first input may be a grabbing action formed simultaneously on the first electronic device and the second electronic device by at least two fingers of one hand of the user. As shown in fig. 3, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, the user's thumb rests on the screen of one phone while the remaining four fingers rest on the screen of the other, and the first input may be the grabbing action formed by the thumb and the remaining four fingers closing together.
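As a minimal illustrative sketch of how such a grab might be recognized (the disclosure fixes no data model or thresholds, so the `Contact` and `TouchReport` types and the 300 ms pairing window below are assumptions), each device can check that its contacts move toward their common centroid, and the gesture counts as a cross-device grab when both devices report inward motion at roughly the same time:

```kotlin
import kotlin.math.abs
import kotlin.math.hypot

// Hypothetical touch records; one device reports the thumb, the other the remaining fingers.
data class Contact(val startX: Float, val startY: Float, val endX: Float, val endY: Float)
data class TouchReport(val deviceId: String, val contacts: List<Contact>, val timestampMs: Long)

// Device-local check: did the contacts move toward their common centroid?
// A single contact (the thumb) passes trivially; a full recognizer would also
// track its drag direction.
fun movedInward(contacts: List<Contact>): Boolean {
    if (contacts.isEmpty()) return false
    val cx = contacts.map { it.startX }.average().toFloat()
    val cy = contacts.map { it.startY }.average().toFloat()
    fun d(x: Float, y: Float) = hypot(x - cx, y - cy)
    return contacts.all { d(it.endX, it.endY) <= d(it.startX, it.startY) }
}

// The grab is recognized when both devices report inward motion within the window.
fun isCrossDeviceGrab(a: TouchReport, b: TouchReport, windowMs: Long = 300): Boolean =
    a.contacts.size + b.contacts.size >= 2 &&
        movedInward(a.contacts) && movedInward(b.contacts) &&
        abs(a.timestampMs - b.timestampMs) <= windowMs
```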
In some embodiments of the present application, before the first object in the first electronic device and the second object in the second electronic device are both selected, the above object processing method may further include:
receiving a second input of the user to the preset control;
in response to the second input, it is determined that the first electronic device and the second electronic device enter an information capture state.
The preset control may be a pre-configured control that causes the first electronic device and the second electronic device to enter an information capture state.
The second input may be used to determine that the first electronic device and the second electronic device enter the information capture state. The second input may be, but is not limited to, a click input, a double-click input, a gesture input, a voice input, or the like on the preset control, and may also be a combination of any two or more of the above inputs, which is not limited herein.
The information capture state may be a state in which information in the first electronic device and the second electronic device is captured.
In an example, referring to fig. 4, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, mobile phone A and mobile phone B each have a "capture function" control 41 (i.e., the preset control); when the user clicks the "capture function" control 41 on both mobile phone A and mobile phone B, the two phones enter the information capture state.
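A minimal sketch of this state change could look as follows; the class and method names are illustrative assumptions, since the disclosure only describes the control and the resulting state:

```kotlin
// Per-device capture state toggled by the preset "capture function" control (fig. 4).
class CaptureState {
    var capturing = false
        private set

    // Second input: the user clicks the preset control, and the device
    // enters the information capture state.
    fun onPresetControlClicked() {
        capturing = true
    }
}
```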
In some embodiments of the application, after determining that the first electronic device and the second electronic device enter the information capture state, the object processing method may further include:
receiving a third input of the user to the first object and the second object;
in response to the third input, determining that the first object and the second object are selected.
The third input is used for selecting the first object and the second object, and the third input may be, but is not limited to, a click input, a double-click input, a long-press input, a gesture input, a voice input, or the like for the first object and the second object, and may also be a combination of at least two of the foregoing inputs, which is not limited herein.
The following describes how the first object and the second object are selected when they are both text information, both pictures, and both document files, respectively:
(1) The first object and the second object are both text information
In an example, referring to fig. 5, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, the first object is the text information "AABB" and the second object in mobile phone B is the text information "CCDD". Taking the operation on mobile phone A as an example: the user long-presses the text information "AABB" in mobile phone A to display the copy control 51, and then clicks the copy control 51; the text information "AABB" is thereby copied, that is, selected.
The text information "CCDD" in mobile phone B is copied, and thus selected, in the same manner, which is not described herein again.
(2) The first object and the second object are both pictures
In an example, referring to fig. 6, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, the first object is a picture 61 and the second object is a picture 62; the user long-presses picture 61 and picture 62 to display a "√" mark 63, which indicates that both picture 61 and picture 62 are selected.
(3) The first object and the second object are both document files
In one example, referring to fig. 7, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, the first object is a document file 71 and the second object is a document file 72; the user long-presses document file 71 and document file 72 to display a "√" mark 73, which indicates that both document file 71 and document file 72 are selected.
In the embodiments of the present application, the first electronic device and the second electronic device are determined to enter the information capture state in response to the second input to the preset control, and in the information capture state, the first object and the second object can be determined to be selected in response to the third input of the user to the first object and the second object, which avoids misoperation by the user and improves the accuracy of information capture.
In some embodiments of the present application, when determining that the first object and the second object are selected, the objects selected at substantially the same time in the first electronic device and the second electronic device may be taken as the first object and the second object, respectively.
Step 120, determining a master device from the first electronic device and the second electronic device according to the first input.
The master device may be the device that performs the target processing.
In some embodiments of the present application, in order to enhance the user experience, step 120 may specifically include:
determining the position of a target finger according to the gesture input;
and determining the electronic device corresponding to the position of the target finger as the master device.
Wherein the target finger may be a user-specified finger for determining the master device. For example, the target finger may be the thumb.
In one example, with continued reference to fig. 3, the target finger is the thumb; when the user performs the first input, the thumb is located on mobile phone A, so mobile phone A is determined to be the master device.
In the embodiments of the present application, the electronic device corresponding to the position of the target finger is determined as the master device, so that the master device can be determined quickly and directly from the gesture operation of the user's first input without any other setting, which improves the interaction experience between the user and the electronic device.
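Under the assumption that the target finger is the thumb, step 120 can be sketched as follows; `FingerTouch`, `FingerType`, and the device identifiers are hypothetical names:

```kotlin
enum class FingerType { THUMB, INDEX, MIDDLE, RING, LITTLE }
data class FingerTouch(val finger: FingerType, val deviceId: String)

// The device whose screen the target finger touches becomes the master device.
fun masterDeviceOf(
    touches: List<FingerTouch>,
    targetFinger: FingerType = FingerType.THUMB
): String? = touches.firstOrNull { it.finger == targetFinger }?.deviceId
```

In the fig. 3 example, the thumb rests on mobile phone A, so this sketch would return mobile phone A's identifier.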
Step 130, under the condition that the first electronic device is determined to be the master device, responding to the first input, and performing target processing on the first object and the second object to obtain a third object.
The third object may be an object obtained by performing target processing on the first object and the second object.
In some embodiments of the present application, once the master device is determined, the target processing may be performed on the master device on the first object and the second object to obtain the third object.
The following describes how target processing is performed on the first object and the second object when they are both text information, both pictures, and both document files, respectively.
(1) The first object and the second object are both text information
In a case that the first object and the second object are both text information, performing target processing on the first object and the second object to obtain a third object may specifically include:
after pasting the second object to the first object, a third object is obtained.
In an example, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, as shown in fig. 5, the first object is the text information "AABB" and the second object is the text information "CCDD". If the user performs the first input shown in fig. 3 and mobile phone A is determined to be the master device, the text information "CCDD" is pasted after the text information "AABB" on mobile phone A, in the order of "AABB" first and "CCDD" second, to obtain the text information "AABBCCDD" (shown in fig. 8); the third object here is "AABBCCDD" in fig. 8.
In the embodiment of the application, under the condition that the first object and the second object are both text information, the second object can be quickly pasted to the first object according to one gesture input to obtain the third object, and the third object is displayed on one of the electronic devices, so that the text information on the two electronic devices can be conveniently combined, and the user experience is improved.
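As a sketch, the text case reduces to concatenation in the stated order (first object first, second object second); `pasteText` is an illustrative name:

```kotlin
// Text case: paste the second object after the first,
// e.g. "AABB" + "CCDD" yields "AABBCCDD" (the third object in fig. 8).
fun pasteText(first: String, second: String): String = first + second
```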
(2) The first object and the second object are both pictures
In a case that the first object and the second object are both pictures, performing target processing on the first object and the second object to obtain a third object, which may specifically include:
the second object is merged with the first object to obtain a third object.
In some embodiments of the present application, the merging of the second object with the first object as described above may be fusing the second object with the first object.
In an example, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, as shown in fig. 6, the first object is a picture 61 containing graph 1, and the second object is a picture 62 containing graph 2. If the user performs the first input shown in fig. 3, mobile phone A may be determined to be the master device. If the target processing operation is to fuse the second object with the first object, graph 2 may be placed into picture 61 to obtain a brand-new picture 91 (fig. 9) that keeps the original background of picture 61 and contains both graph 1 and graph 2; the brand-new picture 91 is displayed on mobile phone A, and the third object here is the picture in fig. 9.
In some embodiments of the present application, the merging of the second object with the first object as described above may also be stitching the second object with the first object.
In an example, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, as shown in fig. 6, the first object is picture 61 and the second object is picture 62. If the user performs the first input shown in fig. 3, mobile phone A may be determined to be the master device. If the target processing operation is to stitch the second object with the first object, picture 62 may be stitched to picture 61 to obtain a brand-new long picture (that is, the third object).
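The stitching branch can be sketched with standard `android.graphics` calls, assuming both pictures are available as bitmaps on the master device (the fusion branch would additionally require segmenting graph 2 out of picture 62, which is beyond a short sketch):

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Stitch the second picture below the first into one long picture (the third object).
fun stitchVertically(first: Bitmap, second: Bitmap): Bitmap {
    val out = Bitmap.createBitmap(
        maxOf(first.width, second.width),
        first.height + second.height,
        Bitmap.Config.ARGB_8888
    )
    val canvas = Canvas(out)
    canvas.drawBitmap(first, 0f, 0f, null)
    canvas.drawBitmap(second, 0f, first.height.toFloat(), null)
    return out
}
```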
In some embodiments of the present application, whether the target processing is stitching or fusion may be related to an input parameter of the first input. For example, in the case of a gesture input, when the pressing pressure is greater than or equal to a threshold, the second object and the first object may be fused; when the pressing pressure is less than the threshold, the second object and the first object may be stitched.
In some embodiments of the present application, whether the target processing is stitching or fusion may also be determined automatically from the contents of the two pictures. For example, if the objects included in the two pictures are the same object (for example, both pictures are single photos of the same person), the stitching operation may be performed on the two pictures automatically; if the objects included in the two pictures are not the same object (for example, both pictures include a person, but not the same person, i.e., single photos of two different people), the two pictures may be fused to generate a group photo.
Which method is used to decide between stitching and fusion can be set according to user requirements, and is not limited here.
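Both selection strategies can be sketched as follows; the pressure threshold and the subject identifiers are assumptions, since the disclosure specifies neither a concrete threshold nor a recognizer:

```kotlin
enum class MergeMode { STITCH, FUSE }

// Pressure-based choice: pressure at or above the threshold selects fusion,
// below it stitching.
fun modeFromPressure(pressure: Float, threshold: Float): MergeMode =
    if (pressure >= threshold) MergeMode.FUSE else MergeMode.STITCH

// Content-based choice: two single photos of the same subject are stitched;
// photos of different subjects are fused into a group photo. The subject ids
// would come from a hypothetical recognizer.
fun modeFromContent(subjectA: String?, subjectB: String?): MergeMode =
    if (subjectA != null && subjectA == subjectB) MergeMode.STITCH else MergeMode.FUSE
```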
In the embodiment of the application, under the condition that the first object and the second object are both pictures, the second object and the first object can be quickly combined according to one gesture input to obtain the third object, so that the pictures on the two electronic devices are conveniently combined, and the user experience is improved.
(3) The first object and the second object are both document files
In a case that the first object and the second object are both document files, performing target processing on the first object and the second object to obtain a third object, which may specifically include:
comparing the first document file with the second document file to obtain difference information;
the difference information is marked on the first object to obtain a third object.
Wherein the first document file may be a document file on the first electronic device.
The second document file may be a document file on a second electronic device.
The difference information may be different information in the first document file and the second document file.
In an example, taking the first electronic device as mobile phone A and the second electronic device as mobile phone B as an example, as shown in fig. 7, the first object is a document file 71 containing the information "ABCDEFG", and the second object is a document file 72 containing the information "abcdefy". If the user performs the first gesture input shown in fig. 3, mobile phone A may be determined to be the master device. The information "ABCDEFG" in document file 71 is compared with the information "abcdefy" in document file 72 to find the difference information "EFG", and the difference information "EFG" is then marked on document file 71, for example by displaying it in an emphasized manner (as shown in fig. 10); the marked document is displayed on mobile phone A, and the third object here is the document file in fig. 10 marked with the difference information "EFG".
In the embodiment of the application, under the condition that the first object and the second object are both document files, the first document file and the second document file can be compared to obtain the difference information, and then the difference information is marked on the first object to obtain the third object, so that the document files on the two electronic devices can be simply and conveniently compared, and the user experience is improved.
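A character-level sketch of the comparison and marking, assuming plain-text documents (a real implementation would more likely use a proper diff algorithm such as longest common subsequence):

```kotlin
data class MarkedDocument(val text: String, val differingRanges: List<IntRange>)

// Compare the two documents position by position and record the spans of the
// first document that differ; those spans are then displayed with emphasis.
fun markDifferences(first: String, second: String): MarkedDocument {
    val ranges = mutableListOf<IntRange>()
    var start = -1
    for (i in first.indices) {
        val same = i < second.length && first[i] == second[i]
        if (!same && start < 0) start = i
        if (same && start >= 0) {
            ranges.add(start until i)
            start = -1
        }
    }
    if (start >= 0) ranges.add(start until first.length)
    return MarkedDocument(first, ranges)
}
```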
Step 140, displaying the third object.
In some embodiments of the present application, after obtaining the third object, the third object may be displayed, as shown in fig. 8-10.
In the object processing method provided by the embodiments of the present application, the execution subject may be an object processing apparatus. In the embodiments of the present application, an object processing apparatus executing the object processing method is taken as an example to describe the object processing apparatus provided in the embodiments of the present application.
Fig. 11 is a schematic diagram illustrating a structure of an object processing apparatus according to an exemplary embodiment.
As shown in fig. 11, the object processing apparatus 1100 may be applied to a first electronic device, and the object processing apparatus 1100 may include:
a receiving module 1110, configured to receive a first input of a user when the first electronic device and the second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected;
a determining module 1120, configured to determine a master device from the first electronic device and the second electronic device according to the first input;
a processing module 1130, configured to, in a case that the first electronic device is determined to be the master device, perform target processing on the first object and the second object in response to the first input to obtain a third object;
a display module 1140 for displaying the third object.
In the embodiments of the present application, the receiving module receives a first input of a user under the condition that the first electronic device and the second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected; the determining module then determines a master device from the first electronic device and the second electronic device according to the first input; in a case that the master device is the first electronic device, the processing module performs, in response to the first input, target processing directly on the first object and the second object to obtain a third object; and the display module displays the third object. In this way, resources on different electronic devices can be processed through only the first input, which simplifies the operation and improves user experience.
In some embodiments of the present application, to further enhance the user experience, both the first object and the second object are text information;
the processing module 1130 may be specifically configured to:
after the second object is pasted to the first object, a third object is obtained.
In some embodiments of the present application, to further enhance the user experience, the first object and the second object are both pictures;
the processing module 1130 may be specifically configured to:
the second object is merged with the first object to obtain a third object.
In some embodiments of the present application, to further enhance the user experience, the first object and the second object are both document files;
the processing module 1130 may be specifically configured to:
comparing the first document file with the second document file to obtain difference information;
and marking the difference information on the first object to obtain a third object.
In some embodiments of the present application, the first input is a gesture input, the first input includes at least two fingers, and the first input is an input on the first electronic device and the second electronic device; to enhance the user experience, the determining module 1120 may be specifically configured to:
determining the position of a target finger according to the gesture input;
and determine the electronic device corresponding to the position of the target finger as the master device.
The object processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The object processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The object processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 12, an electronic device 1200 is further provided in an embodiment of the present application, and includes a processor 1201 and a memory 1202, where the memory 1202 stores a program or an instruction that can be executed on the processor 1201, and when the program or the instruction is executed by the processor 1201, the steps of the embodiment of the object processing method are implemented, and the same technical effects can be achieved, and are not described again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, and the like.
Those skilled in the art will appreciate that the electronic device 1300 may further include a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 1310 via a power management system, so that charging, discharging, and power consumption management functions are managed via the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described herein again.
A user input unit 1307, configured to receive a first input of a user when the first electronic device and the second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected;
a processor 1310, configured to determine a master device from the first electronic device and the second electronic device according to the first input; and, in a case that the first electronic device is determined to be the master device, perform target processing on the first object and the second object in response to the first input to obtain a third object;
a display unit 1306, configured to display the third object.
In this way, under the condition that the first electronic device and the second electronic device establish a communication connection and the first object in the first electronic device and the second object in the second electronic device are both selected, the master device is determined from the first electronic device and the second electronic device according to the first input of the user; in a case that the master device is the first electronic device, target processing can be performed directly on the first object and the second object in response to the first input to obtain the third object, and the third object is displayed.
Optionally, the first object and the second object are both text information;
the processor 1310 is further configured to paste the second object to the first object to obtain a third object.
Therefore, under the condition that the first object and the second object are both text information, the second object can be quickly pasted to the first object according to one gesture input to obtain the third object, and the third object is displayed on one electronic device, so that the text information on the two electronic devices can be conveniently combined, and the user experience is improved.
Optionally, the first object and the second object are both pictures;
the processor 1310 is further configured to merge the second object with the first object to obtain a third object.
Therefore, under the condition that the first object and the second object are both pictures, the second object and the first object can be rapidly combined according to one gesture input so as to obtain the third object, so that the pictures on the two electronic devices are conveniently combined, and the user experience is improved.
Optionally, the first object and the second object are both document files;
a processor 1310, further configured to compare the first document file and the second document file to obtain difference information;
and marking the difference information on the first object to obtain a third object.
Therefore, under the condition that the first object and the second object are both document files, the first document file and the second document file can be compared to obtain the difference information, and then the difference information is marked on the first object to obtain the third object.
Optionally, the first input is a gesture input, the first input includes at least two fingers, and the first input is an input on the first electronic device and the second electronic device.
The processor 1310 is further configured to determine a position of a target finger according to the gesture input, and determine the electronic device corresponding to the position as the master device.
Therefore, the electronic device corresponding to the position of the target finger is determined as the master device, so that the master device can be determined quickly from the gesture operation of the user's first input without any other setting, which improves the interaction experience between the user and the electronic device.
It should be understood that in the embodiment of the present application, the input Unit 1304 may include a Graphics Processing Unit (GPU) 13041 and a microphone 13042, and the Graphics processor 13041 processes image data of still pictures or videos obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1307 includes a touch panel 13071 and at least one of other input devices 13072. A touch panel 13071, also referred to as a touch screen. The touch panel 13071 may include two parts of a touch detection device and a touch controller. Other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1309 may be used to store software programs as well as various data. The memory 1309 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Furthermore, the memory 1309 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1309 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1310 may include one or more processing units; optionally, the processor 1310 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, etc., and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1310.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above object processing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above object processing method embodiment, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing object processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (10)

1. An object processing method, applied to a first electronic device, the method comprising:
receiving a first input of a user under the condition that the first electronic device and a second electronic device establish a communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected;
determining a master device from the first electronic device and the second electronic device according to the first input;
under the condition that the first electronic device is determined to be the master device, responding to the first input, and performing target processing on the first object and the second object to obtain a third object;
and displaying the third object.
2. The method of claim 1, wherein the first object and the second object are both textual information;
the performing target processing on the first object and the second object to obtain a third object includes:
after the second object is pasted to the first object, a third object is obtained.
3. The method of claim 1, wherein the first object and the second object are both pictures;
the performing target processing on the first object and the second object to obtain a third object includes:
merging the second object with the first object to obtain a third object.
4. The method of claim 1, wherein the first object and the second object are both document files;
the performing target processing on the first object and the second object to obtain a third object includes:
comparing the first document file with the second document file to obtain difference information;
and marking the difference information on the first object to obtain a third object.
5. The method of any of claims 1-4, wherein the first input is a gesture input, the first input includes at least two fingers, and the first input is an input on the first electronic device and the second electronic device;
the determining a master device from the first electronic device and the second electronic device according to the first input includes:
determining the position of a target finger according to the gesture input;
and determining the electronic device corresponding to the position of the target finger as the master device.
6. An object processing apparatus, wherein the apparatus is applied to a first electronic device, the apparatus comprising:
the receiving module is used for receiving a first input of a user under the condition that the first electronic device and the second electronic device establish communication connection and a first object in the first electronic device and a second object in the second electronic device are both selected;
a determining module, configured to determine, according to the first input, a master device from the first electronic device and the second electronic device;
the processing module is used for responding to the first input and performing target processing on the first object and the second object to obtain a third object under the condition that the first electronic device is determined to be the master device;
and the display module is used for displaying the third object.
7. The apparatus of claim 6, wherein the first object and the second object are both textual information;
the processing module is specifically configured to:
after the second object is pasted to the first object, a third object is obtained.
8. The apparatus of claim 6, wherein the first object and the second object are both pictures;
the processing module is specifically configured to:
merging the second object with the first object to obtain a third object.
9. The apparatus of claim 6, wherein the first object and the second object are both document files;
the processing module is specifically configured to:
comparing the first document file with the second document file to obtain difference information;
and marking the difference information on the first object to obtain a third object.
10. The apparatus of any of claims 6-9, wherein the first input is a gesture input, the first input comprises at least two fingers, and the first input is an input on the first electronic device and the second electronic device;
the determining module is specifically configured to:
determining the position of a target finger according to the gesture input;
and determining the electronic device corresponding to the position of the target finger as the master device.
CN202211330380.6A | Priority date 2022-10-27 | Filing date 2022-10-27 | Object processing method and device | Pending | Published as CN115657911A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211330380.6A (published as CN115657911A) | 2022-10-27 | 2022-10-27 | Object processing method and device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211330380.6A (published as CN115657911A) | 2022-10-27 | 2022-10-27 | Object processing method and device

Publications (1)

Publication Number | Publication Date
CN115657911A | 2023-01-31

Family

ID=84993922

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211330380.6A (Pending; published as CN115657911A) | Object processing method and device | 2022-10-27 | 2022-10-27

Country Status (1)

Country Link
CN (1): CN115657911A

Similar Documents

Publication Publication Date Title
TWI515641B (en) Method and system for altering icon in desktop
WO2023061280A1 (en) Application program display method and apparatus, and electronic device
CN113467660A (en) Information sharing method and electronic equipment
CN112948844B (en) Control method and device and electronic equipment
CN113849092A (en) Content sharing method and device and electronic equipment
WO2024160133A1 (en) Image generation method and apparatus, electronic device, and storage medium
CN111638839A (en) Screen capturing method and device and electronic equipment
CN114518822A (en) Application icon management method and device and electronic equipment
CN112399010B (en) Page display method and device and electronic equipment
CN112181252B (en) Screen capturing method and device and electronic equipment
CN112306320A (en) Page display method, device, equipment and medium
CN115729412A (en) Interface display method and device
CN115421631A (en) Interface display method and device
CN114679546A (en) Display method and device, electronic equipment and readable storage medium
CN111796733B (en) Image display method, image display device and electronic equipment
CN115657911A (en) Object processing method and device
CN114564921A (en) Document editing method and device
CN113726953A (en) Display content acquisition method and device
CN104866303A (en) Information processing method and mobile terminal
CN117082056A (en) File sharing method and electronic equipment
CN118733183A (en) Processing method and device and electronic equipment
CN114519859A (en) Text recognition method, text recognition device, electronic equipment and medium
CN114860122A (en) Application program control method and device
CN118689577A (en) Display method and device of screen locking wallpaper
CN115904095A (en) Information input method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination