CN112988009A - File processing method and device - Google Patents

File processing method and device

Info

Publication number
CN112988009A
CN112988009A (application CN202110272343.3A)
Authority
CN
China
Prior art keywords
input
target
target object
receiving
identifications
Prior art date
Legal status
Pending
Application number
CN202110272343.3A
Other languages
Chinese (zh)
Inventor
郭美圆
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110272343.3A
Publication of CN112988009A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0414Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a file processing method and an electronic device in the field of electronic technology, aiming to solve the problem that sharing a file with multiple friends requires many repeated user operations, which makes the operation cumbersome and file sharing inefficient. The file processing method includes: receiving a first input to a target file and a target application; displaying at least two object identifiers in the target application in response to the first input; receiving a second input on at least two target object identifiers; and, in response to the second input, sending the target file, through the target application, to the objects indicated by the at least two target object identifiers respectively; wherein the second input acts on each of the at least two target object identifiers.

Description

File processing method and device
Technical Field
This application belongs to the field of electronic technology, and in particular relates to a file processing method and device.
Background
Currently, social software on electronic devices has become people's main social tool, and users often share local files with friends through it. For example, as the number of new-media users keeps growing, short videos are developing rapidly thanks to their "short, flat, fast" characteristics and the advantage of diverse content. When a user finds an interesting short video, the user first downloads it locally and then shares it with friends in social software through that software.
In one scenario, a user wants to share a file with multiple friends in social software. One implementation is to repeat, for each friend, the operation of sharing the file with a single friend; another implementation is to select a "multi-select" option and then tap each friend in turn.
Therefore, in the prior art, when a user shares a file with multiple friends, the many repeated steps make the operation cumbersome and file sharing inefficient.
Disclosure of Invention
Embodiments of the present application aim to provide a file processing method that can solve the problem that sharing a file with multiple friends requires many repeated user operations, which makes the operation cumbersome and file sharing inefficient.
In a first aspect, an embodiment of the present application provides a file processing method, the method including: receiving a first input to a target file and a target application; displaying at least two object identifiers in the target application in response to the first input; receiving a second input on at least two target object identifiers; and, in response to the second input, sending the target file, through the target application, to the objects indicated by the at least two target object identifiers respectively; wherein the second input acts on each of the at least two target object identifiers.
In a second aspect, an embodiment of the present application provides a file processing apparatus, including: a first input receiving module, configured to receive a first input to a target file and a target application; a first input response module, configured to display at least two object identifiers in the target application in response to the first input; a second input receiving module, configured to receive a second input on at least two target object identifiers; and a second input response module, configured to, in response to the second input, send the target file, through the target application, to the objects indicated by the at least two target object identifiers respectively; wherein the second input acts on each of the at least two target object identifiers.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
Thus, in the embodiments of the present application, once the user has selected the target file and the target application through the first input, at least two object identifiers in the target application are displayed, each identifier indicating a unique object. The user can then select the corresponding at least two target objects by performing a second input on the at least two target object identifiers. The second input acts on each of the at least two target object identifiers; that is, a single input covers all of them. For example, the finger stays pressed on the screen until the user has finished selecting the at least two target object identifiers. In response to the second input, the target file is sent to every object indicated by the target object identifiers the user selected. The scheme therefore applies equally to single-recipient and multi-recipient sharing, and the user can switch between the two scenarios at any time without choosing one in advance; in particular, in the multi-recipient scenario the user neither repeats the single-recipient sharing operation nor repeatedly selects recipients after choosing a multi-select option. In short, when the user shares a file with multiple friends, the user's operations are simplified and file-sharing efficiency is improved.
Drawings
FIG. 1 is a flowchart of a file processing method according to an embodiment of the present application;
FIG. 2 is a first schematic view of a display interface of an electronic device according to an embodiment of the present application;
FIG. 3 is a second schematic view of a display interface of an electronic device according to an embodiment of the present application;
FIG. 4 is a first schematic diagram illustrating an operation of the file processing method on a display interface according to an embodiment of the present application;
FIG. 5 is a second schematic diagram illustrating an operation of the file processing method on a display interface according to an embodiment of the present application;
FIG. 6 is a block diagram of a file processing apparatus according to an embodiment of the present application;
FIG. 7 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, objects distinguished by "first" and "second" are generally of one class, and the number of objects is not limited; for example, a first object may be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The file processing method provided by the embodiments of the present application is described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
FIG. 1 shows a flowchart of a file processing method according to an embodiment of the present application; the method includes:
step S1: a first input is received for a target file and a target application.
The first input includes a touch input performed by the user on the screen, including but not limited to tapping, sliding, and dragging; the first input may also be a first operation, where the first operation includes a contactless (air) operation by the user, such as a gesture action or a facial action, and also includes an operation on a physical key of the device, such as a press. Furthermore, the first input may consist of one or more inputs, and the multiple inputs may be continuous or intermittent.
Optionally, the target file is a local file.
Optionally, the target file is of a type such as video, music, or document.
Optionally, the target application is chat software or the like.
Referring to FIG. 2, an application scenario is as follows: the user opens the directory containing local videos, selects any local video file to be shared (for example, 1.MP4 in the figure) as the target file, and taps the share control in the interface where the video file is located, so that the share panel shown in the lower part of the figure pops up; the user then taps the control of any chat software displayed on the share panel. For example, tapping the "chat software 1" control determines "chat software 1" as the target application in this embodiment.
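As a non-authoritative illustration of step S1, the following Kotlin sketch models the first input as two user selections collected from a share panel. All names here (TargetFile, TargetApp, FirstInputCollector, and the callbacks) are assumptions introduced for this example and are not taken from the application.

// Minimal sketch (assumed names): collecting the "first input" as two selections,
// a target file and a target application, from a share panel.
data class TargetFile(val path: String)          // e.g. "/Videos/1.MP4"
data class TargetApp(val packageName: String)    // e.g. "chat.software.one" (hypothetical)

class FirstInputCollector(
    private val onFirstInputComplete: (TargetFile, TargetApp) -> Unit
) {
    private var pendingFile: TargetFile? = null

    // Called when the user taps the share control in the interface of the file.
    fun onFileShareTapped(file: TargetFile) {
        pendingFile = file
        // Here the share panel listing installed chat applications would be shown.
    }

    // Called when the user taps an application control on the share panel.
    fun onAppChosen(app: TargetApp) {
        val file = pendingFile ?: return   // ignore taps made before a file was chosen
        onFirstInputComplete(file, app)    // the first input is now complete
    }
}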
Step S2: at least two object identifications in the target application are displayed in response to the first input.
The purpose of this embodiment is to provide a new interaction mode so that a user can share a local file with several chat friends at the same time. Therefore, in this step, at least two object identifiers in the application selected by the first input are displayed.
An identifier in this application is used to indicate information such as text, a symbol, an image, an interface, or a time; a control or another container may serve as the carrier for displaying the information. The identifier includes, but is not limited to, a text identifier, a symbol identifier, and an image identifier.
In this embodiment, the object identifier is used to indicate an object in the target application, that is, a social friend, and the object identifier may take multiple forms, such as an avatar, an icon, or a display bar.
Optionally, the at least two object identifiers may be displayed in order of most recent chat time.
Optionally, the user may enter a search keyword in the search bar displayed on the interface, so that the at least two object identifiers are displayed in descending order of relevance to the keyword.
Optionally, the at least two object identifiers may be displayed in alphabetical order of the first letter of the social name; a brief sketch of these three orderings follows.
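The three optional orderings above can be sketched as simple sorting rules. The Kotlin fragment below is purely illustrative: the Contact fields and the relevance() score are assumptions of this example, not details from the application.

data class Contact(val name: String, val lastChatMillis: Long)

// Most recent chat first.
fun byRecency(contacts: List<Contact>): List<Contact> =
    contacts.sortedByDescending { it.lastChatMillis }

// Highest relevance to the search keyword first.
fun byKeywordRelevance(contacts: List<Contact>, keyword: String): List<Contact> =
    contacts.sortedByDescending { relevance(it.name, keyword) }

// Alphabetical by the first letter of the social name.
fun byInitial(contacts: List<Contact>): List<Contact> =
    contacts.sortedBy { it.name.firstOrNull()?.lowercaseChar() }

// Toy relevance score (an assumption): length of the common prefix with the keyword;
// a real ranking would likely use fuzzy or pinyin matching.
fun relevance(name: String, keyword: String): Int =
    name.lowercase().commonPrefixWith(keyword.lowercase()).length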
Step S3: a second input on at least two target object identifiers is received.
Wherein the second input acts on each of the at least two target object identifiers.
The at least two object identifiers displayed in step S2 include the at least two target object identifiers of this step.
The second input in this embodiment includes a touch input performed by the user on the screen, including but not limited to tapping, sliding, and dragging; the second input may also be a second operation, where the second operation includes a contactless operation by the user, such as a gesture action or a facial action, and also includes an operation on a physical key of the device, such as a press. Moreover, the second input is a single input.
The second input in this embodiment must act on each of the at least two target object identifiers; that is, the user performs one input that covers all of the at least two target object identifiers.
Further, the implementation of the second input in this step must achieve two effects: on the one hand, locking the target object identifiers, and on the other hand, connecting the locked target object identifiers together through the input.
In one application scenario, for example, the user takes friend 1, friend 2, and friend 3 as the target objects of this embodiment. The user long-presses the identifier of friend 1 on the screen and, without lifting the finger, moves along a route to the identifier of friend 2, long-presses it, then moves along a route to the identifier of friend 3, long-presses it, and finally lifts the finger, completing the selection of the target objects.
In the input of this example, the long press locks each target object identifier, and moving while keeping the press connects the locked identifiers together within the same input.
In another example, the user takes friend 1, friend 2, and friend 3 as the target objects of this embodiment. Using a contactless gesture, the user selects the identifier of friend 1, dwells there for a certain time, then, while holding the gesture, moves along a route to the identifier of friend 2, dwells again, moves along a route to the identifier of friend 3, dwells once more, and ends the gesture, completing the selection of the target objects.
In the input of this example, the specific gesture locks each target object identifier, and moving the gesture along the routes connects the locked identifiers together within the same input.
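One hedged way to realize such a single continuous second input is a tracker that locks an identifier once the finger (or gesture) has dwelled on it and keeps accumulating locked identifiers until the input ends. The Kotlin sketch below assumes a hit-test callback and a 500 ms dwell threshold; both, and all names such as ObjectId and SecondInputTracker, are illustrative choices rather than details from the application.

data class ObjectId(val id: String)

class SecondInputTracker(
    private val dwellMillis: Long = 500L,                    // assumed long-press/dwell threshold
    private val hitTest: (x: Float, y: Float) -> ObjectId?,  // which identifier is under the finger
    private val onSelectionDone: (List<ObjectId>) -> Unit
) {
    private val locked = linkedSetOf<ObjectId>()   // keeps selection order, no duplicates
    private var currentId: ObjectId? = null
    private var enteredAt = 0L

    fun onDown(x: Float, y: Float, timeMillis: Long) = onMove(x, y, timeMillis)

    fun onMove(x: Float, y: Float, timeMillis: Long) {
        val id = hitTest(x, y)
        if (id != currentId) {                     // finger entered a new identifier or empty space
            currentId = id
            enteredAt = timeMillis
        } else if (id != null && timeMillis - enteredAt >= dwellMillis) {
            locked += id                           // dwelled long enough: lock this identifier
        }
    }

    fun onUp() {                                   // finger lifted: the single input ends
        if (locked.size >= 2) onSelectionDone(locked.toList())
        locked.clear()
        currentId = null
    }
}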
Step S4: in response to the second input, the target file is sent, through the target application, to each of the objects indicated by the at least two target object identifiers.
In this step, in response to the second input of the previous step, the at least two target object identifiers selected by that input are recognized, and the target file is sent to the objects they indicate.
Specifically, target identification information (such as an account or a name) corresponding to each target object identifier may be obtained, and the target file selected by the first input may then be sent, according to that identification information, to the chat windows of the at least two target objects.
Optionally, in this step, in response to the second input, the at least two target object identifiers selected by the second input are obtained and may be displayed in the search bar of the interface. Referring to FIG. 3: first, after confirming the selected target objects, the user can tap the "send" control displayed on the interface to complete the sharing; second, the user can continue to select further target objects; third, the user can cancel any selected target object identifier displayed in the search bar, which helps the user send the target file precisely to the intended friends.
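Step S4 then amounts to resolving each selected identifier to its account information and performing one send per target. The Kotlin sketch below shows that dispatch loop, reusing the TargetFile and ObjectId types from the earlier sketches; ChatApi and resolveAccount are hypothetical placeholders, not a real messaging SDK.

interface ChatApi {
    fun sendFile(account: String, filePath: String)
}

fun sendToTargets(
    file: TargetFile,
    targets: List<ObjectId>,
    resolveAccount: (ObjectId) -> String?,   // identifier -> account or name; may fail
    chat: ChatApi
) {
    targets.mapNotNull(resolveAccount)       // keep only identifiers that resolve
        .forEach { account ->
            chat.sendFile(account, file.path)   // one send per selected target object
        }
}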
Thus, in the embodiments of the present application, once the user has selected the target file and the target application through the first input, at least two object identifiers in the target application are displayed, each identifier indicating a unique object. The user can then select the corresponding at least two target objects by performing a second input on the at least two target object identifiers. The second input acts on each of the at least two target object identifiers; that is, a single input covers all of them. For example, the finger stays pressed on the screen until the user has finished selecting the at least two target object identifiers. In response to the second input, the target file is sent to every object indicated by the target object identifiers the user selected. The scheme therefore applies equally to single-recipient and multi-recipient sharing, and the user can switch between the two scenarios at any time without choosing one in advance; in particular, in the multi-recipient scenario the user neither repeats the single-recipient sharing operation nor repeatedly selects recipients after choosing a multi-select option. In short, when the user shares a file with multiple friends, the user's operations are simplified and file-sharing efficiency is improved.
In the flow of the file processing method according to another embodiment of the present application, step S3 includes:
Substep A1: in the case where the at least two target object identifiers are displayed adjacently along a first preset direction, receiving a touch input that slides over the at least two target object identifiers along the first preset direction.
Wherein the sliding touch input acts on each of the at least two target object identifiers.
The sliding touch input in this embodiment must act on each of the at least two target object identifiers; that is, the user performs one sliding touch input that covers all of them.
In one application scenario of this embodiment, the chat friends to be shared with are displayed on the current chat interface and are located next to one another; in another scenario, they are not displayed on the current chat interface but appear next to one another after a keyword search.
Referring to FIG. 4, based on the above scenario, the user takes friend 1, friend 2, and friend 3 as the target objects of this embodiment.
Referring to the arrow and finger shown in FIG. 4, the sliding touch input of this embodiment may proceed as follows: the user long-presses the avatar frame of friend 1 to select friend 1, slides down to the avatar frame of friend 2 and selects friend 2, slides down to the avatar frame of friend 3 and selects friend 3, and then lifts the finger.
After the user long-presses a friend's avatar frame, the avatar frame may be highlighted in a preset manner to indicate that the friend has been selected.
Referring to FIG. 4, optionally, the first preset direction is from top to bottom.
Illustratively, the target object identifiers in this embodiment are displayed in top-to-bottom order, so the user's input also travels from top to bottom; the input is therefore concise and convenient, which simplifies the user's operation.
Optionally, the sliding touch input in this embodiment may be a straight-line sliding touch input.
It should be noted that the straight-line sliding touch input here describes the overall shape of the input, in contrast to the curved sliding touch input, and need not be a strictly straight line.
This embodiment illustrates an applicable scenario: among the objects to be selected, at least two target object identifiers (ideally all of them) are displayed adjacently, so the user can select at least two, or even all, target object identifiers in a single sliding touch input, which simplifies the operation. The user can slide along an approximately straight line in the first preset direction to ensure that the target objects are selected in one pass.
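Under the assumption that the adjacent identifiers are laid out as vertical rows, substep A1 could be approximated by selecting every row that the downward slide passes through, as in the illustrative Kotlin sketch below; the Row geometry and the top-to-bottom assumption are inventions of this example, and it deliberately ignores the per-row long press described above.

// Reuses the assumed ObjectId type from the earlier sketch.
data class Row(val id: ObjectId, val top: Float, val bottom: Float)

// Selects every row whose vertical span intersects the slide from startY to endY
// (assumes the slide runs top to bottom, i.e. startY <= endY).
fun selectByStraightSlide(rows: List<Row>, pathYs: List<Float>): List<ObjectId> {
    if (pathYs.isEmpty()) return emptyList()
    val startY = pathYs.first()
    val endY = pathYs.last()
    return rows.filter { it.bottom >= startY && it.top <= endY }
        .map { it.id }
}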
In the flow of the file processing method according to another embodiment of the present application, step S3 includes:
Substep B1: in the case where the at least two target object identifiers are displayed along a second preset direction with a first object identifier spaced between them, receiving a touch input that slides over the at least two target object identifiers along the second preset direction following a curve.
The curve of the curved sliding touch input includes at least one arc, and the arc is used to bypass a first object identifier located between two non-adjacent target object identifiers;
the curved sliding touch input acts on each of the at least two target object identifiers.
The curved sliding touch input in this embodiment must act on each of the at least two target object identifiers; that is, the user performs one curved sliding touch input that covers all of them.
In one application scenario of this embodiment, the chat friends to be shared with are displayed on the current chat interface with gaps between their positions; in another scenario, they are not displayed on the current chat interface but appear with gaps between them after a keyword search.
Referring to FIG. 5, based on this scenario, the user takes friend 1, friend 3, and friend 5, which are spaced apart from one another, as the target objects of this embodiment.
Referring to the arrow and finger shown in FIG. 5, the curved sliding touch input of this embodiment may proceed as follows: the user long-presses the avatar frame of friend 1 to select friend 1, drags around the avatar frame of friend 2 to the avatar frame of friend 3 and long-presses it to select friend 3, drags around the avatar frame of friend 4 to the avatar frame of friend 5 and long-presses it to select friend 5, and then lifts the finger.
After the user long-presses a friend's avatar frame, the avatar frame may be highlighted in a preset manner to indicate that the friend has been selected.
Referring to FIG. 5, optionally, the second preset direction is from top to bottom.
Illustratively, since the target object identifiers are displayed in top-to-bottom order, the overall direction of the user's curve is also top to bottom; although the curve contains arcs, it extends downward as a whole, which avoids redundant input and keeps the input concise and convenient, simplifying the user's operation.
Whenever a first object identifier lies between two target object identifiers, an arc must be drawn between those two target object identifiers to bypass the intervening first object identifier(s); the number of first object identifiers an arc may bypass is not limited.
This embodiment illustrates an applicable scenario: among the objects to be selected, at least two target object identifiers are displayed with other identifiers spaced between them, so the user can select all of them in a single curved sliding touch input, which simplifies the operation. The user can plan the path of the curve along the general trend of the second preset direction to ensure that the target objects are selected in one pass.
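For substep B1, a sketch has to distinguish identifiers on which the finger dwells (selected) from identifiers that the arc merely passes over or around (skipped). The Kotlin fragment below reuses the assumed dwell-threshold idea and ObjectId type from the earlier tracker sketch; the Sample format and the 500 ms threshold are illustrative assumptions.

data class Sample(val x: Float, val y: Float, val timeMillis: Long)

fun selectByCurvedSlide(
    samples: List<Sample>,                    // the touch points of one curved slide
    hitTest: (Float, Float) -> ObjectId?,     // which identifier is under a point, if any
    dwellMillis: Long = 500L                  // assumed threshold for "locking" a target
): List<ObjectId> {
    val selected = linkedSetOf<ObjectId>()
    var current: ObjectId? = null
    var enteredAt = 0L
    for (s in samples) {
        val id = hitTest(s.x, s.y)
        if (id != current) {                  // entered a new identifier (or left one via an arc)
            current = id
            enteredAt = s.timeMillis
        } else if (id != null && s.timeMillis - enteredAt >= dwellMillis) {
            selected += id                    // dwelled: this is a target object identifier
        }
    }
    return selected.toList()                  // bypassed identifiers never accumulate enough dwell time
}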
In the flow of a file processing method according to another embodiment of the present application, the object identifier includes an object avatar; step S3 includes:
Substep C1: receiving a second input on at least two target object avatars.
In the embodiments of the present application, the object identifier includes, but is not limited to, an avatar, a display bar, and the like.
This embodiment therefore provides an input manner directed at the avatars of the target objects.
For example, at different points in time during the second input, the user can lock one target object avatar at a time, thereby completing the selection of multiple target objects.
The locking manner includes, but is not limited to, a long press.
Further, the user associates the locked target object avatars together with a specified gesture to complete the second input of this step.
In this embodiment, the target object avatar serves as the operation area, which the user can identify conveniently and quickly; the user's input on the avatar activates the target-object selection function, so multiple target objects are selected quickly and sharing efficiency is improved.
In the flow of the file processing method according to another embodiment of the present application, step S3 includes:
Substep D1: receiving at least one of a drag input and a slide input on the at least two target object identifiers.
In the embodiments of the present application, the specific input modes included in the second input include, but are not limited to, dragging and sliding.
Illustratively, if the target object identifiers are displayed with gaps between them, the user can, without lifting the finger, drag to each target object identifier in turn to complete the selection of the target objects.
Illustratively, if the target object identifiers are displayed adjacently, the user can, without lifting the finger, slide to each target object identifier in turn to complete the selection of the target objects.
Illustratively, if some target object identifiers are adjacent and others are spaced apart, the user can, according to their distribution and without lifting the finger, slide to some of them and drag to the rest to complete the selection of the target objects.
In this embodiment, different input modes can be adopted according to the distribution of the target object identifiers to be selected, and the selection is completed through the second input. In all cases the second input includes at least one of a drag input and a slide input, so multiple target objects are selected quickly; the interaction is simple to operate, matches natural gesture habits, and does not occupy extra screen space.
In summary, the application provides the user with a new interaction method: the identifiers of the chat friends to be shared with serve as the carrier, and a specific interaction gesture performs the final sharing selection. This spares the user from repeating the sharing operation many times, saves the user's effort and time, and improves sharing efficiency. The file processing method thus allows a user to share a file with multiple chat friends at the same time, providing a new and more convenient way of sharing simultaneously.
It should be noted that the execution subject of the file processing method provided in the embodiments of the present application may be a file processing apparatus, or a control module in the file processing apparatus for executing the file processing method. In the embodiments of the present application, a file processing apparatus executing the file processing method is taken as an example to describe the file processing apparatus provided herein.
FIG. 6 shows a block diagram of a file processing apparatus according to another embodiment of the present application, including:
a first input receiving module 10, configured to receive a first input to a target file and a target application;
a first input response module 20, configured to display at least two object identifiers in the target application in response to the first input;
a second input receiving module 30, configured to receive a second input on at least two target object identifiers;
a second input response module 40, configured to, in response to the second input, send the target file, through the target application, to the objects indicated by the at least two target object identifiers respectively;
wherein the second input acts on each of the at least two target object identifiers.
Thus, in the embodiments of the present application, once the user has selected the target file and the target application through the first input, at least two object identifiers in the target application are displayed, each identifier indicating a unique object. The user can then select the corresponding at least two target objects by performing a second input on the at least two target object identifiers. The second input acts on each of the at least two target object identifiers; that is, a single input covers all of them. For example, the finger stays pressed on the screen until the user has finished selecting the at least two target object identifiers. In response to the second input, the target file is sent to every object indicated by the target object identifiers the user selected. The scheme therefore applies equally to single-recipient and multi-recipient sharing, and the user can switch between the two scenarios at any time without choosing one in advance; in particular, in the multi-recipient scenario the user neither repeats the single-recipient sharing operation nor repeatedly selects recipients after choosing a multi-select option. In short, when the user shares a file with multiple friends, the user's operations are simplified and file-sharing efficiency is improved.
Optionally, the second input receiving module 30 includes:
a straight-line input receiving unit, configured to receive, in the case where the at least two target object identifiers are displayed adjacently along a first preset direction, a touch input that slides over the at least two target object identifiers along the first preset direction.
Optionally, the second input receiving module 30 includes:
a curve input receiving unit, configured to receive, in the case where the at least two target object identifiers are displayed along a second preset direction with a first object identifier spaced between them, a touch input that slides over the at least two target object identifiers along the second preset direction following a curve;
wherein the curve of the curved sliding touch input includes at least one arc, and the arc is used to bypass the first object identifier located between two non-adjacent target object identifiers.
Optionally, the object identifier includes an object avatar; the second input receiving module 30 includes:
an avatar receiving unit, configured to receive a second input on at least two target object avatars.
Optionally, the second input receiving module 30 includes:
a specific input receiving unit, configured to receive at least one of a drag input and a slide input on the at least two target object identifiers.
The file processing apparatus in the embodiments of the present application may be an apparatus, or a component, integrated circuit, or chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this regard.
The file processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The file processing apparatus provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 7, an electronic device 100 is further provided in this embodiment of the present application, and includes a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and executable on the processor 101, where the program or the instruction is executed by the processor 101 to implement each process of any one of the above embodiments of the file processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and the like.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 1007 is configured to receive a first input to a target file and a target application, and to receive a second input on at least two target object identifiers; the processor 1010 is configured to display at least two object identifiers in the target application in response to the first input, and, in response to the second input, to send the target file, through the target application, to the objects indicated by the at least two target object identifiers respectively; wherein the second input acts on each of the at least two target object identifiers.
Thus, in the embodiments of the present application, once the user has selected the target file and the target application through the first input, at least two object identifiers in the target application are displayed, each identifier indicating a unique object. The user can then select the corresponding at least two target objects by performing a second input on the at least two target object identifiers. The second input acts on each of the at least two target object identifiers; that is, a single input covers all of them. For example, the finger stays pressed on the screen until the user has finished selecting the at least two target object identifiers. In response to the second input, the target file is sent to every object indicated by the target object identifiers the user selected. The scheme therefore applies equally to single-recipient and multi-recipient sharing, and the user can switch between the two scenarios at any time without choosing one in advance; in particular, in the multi-recipient scenario the user neither repeats the single-recipient sharing operation nor repeatedly selects recipients after choosing a multi-select option. In short, when the user shares a file with multiple friends, the user's operations are simplified and file-sharing efficiency is improved.
Optionally, the user input unit 1007 is further configured to receive, in the case where the at least two target object identifiers are displayed adjacently along a first preset direction, a touch input that slides over the at least two target object identifiers along the first preset direction.
Optionally, the user input unit 1007 is further configured to receive, in the case where the at least two target object identifiers are displayed along a second preset direction with a first object identifier spaced between them, a touch input that slides over the at least two target object identifiers along the second preset direction following a curve; wherein the curve of the curved sliding touch input includes at least one arc, and the arc is used to bypass the first object identifier located between two non-adjacent target object identifiers.
Optionally, the object identifier includes an object avatar; the user input unit 1007 is further configured to receive a second input on at least two target object avatars.
Optionally, the user input unit 1007 is further configured to receive at least one of a drag input and a slide input on the at least two target object identifiers.
In summary, the application provides the user with a new interaction method: the identifiers of the chat friends to be shared with serve as the carrier, and a specific interaction gesture performs the final sharing selection. This spares the user from repeating the sharing operation many times, saves the user's effort and time, and improves sharing efficiency. The file processing method thus allows a user to share a file with multiple chat friends at the same time, providing a new and more convenient way of sharing simultaneously.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of any one of the above embodiments of the file processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of any one of the above embodiments of the file processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A file processing method, the method comprising:
receiving a first input to a target file and a target application;
displaying at least two object identifiers in the target application in response to the first input;
receiving a second input on at least two target object identifiers;
in response to the second input, sending the target file, through the target application, to the objects indicated by the at least two target object identifiers respectively;
wherein the second input acts on each of the at least two target object identifiers.
2. The method of claim 1, wherein the receiving a second input on at least two target object identifiers comprises:
in the case where the at least two target object identifiers are displayed adjacently along a first preset direction, receiving a touch input that slides over the at least two target object identifiers along the first preset direction.
3. The method of claim 1, wherein the receiving a second input on at least two target object identifiers comprises:
in the case where the at least two target object identifiers are displayed along a second preset direction with a first object identifier spaced between them, receiving a touch input that slides over the at least two target object identifiers along the second preset direction following a curve;
wherein the curve of the curved sliding touch input comprises at least one arc, and the arc is used to bypass the first object identifier located between two non-adjacent target object identifiers.
4. The method of claim 1, wherein the object identifier comprises an object avatar, and the receiving a second input on at least two target object identifiers comprises:
receiving a second input on at least two target object avatars.
5. The method of claim 1, wherein the receiving a second input on at least two target object identifiers comprises:
receiving at least one of a drag input and a slide input on the at least two target object identifiers.
6. A file processing apparatus, characterized in that the apparatus comprises:
a first input receiving module, configured to receive a first input to a target file and a target application;
a first input response module, configured to display at least two object identifiers in the target application in response to the first input;
a second input receiving module, configured to receive a second input on at least two target object identifiers;
a second input response module, configured to, in response to the second input, send the target file, through the target application, to the objects indicated by the at least two target object identifiers respectively;
wherein the second input acts on each of the at least two target object identifiers.
7. The apparatus of claim 6, wherein the second input receiving module comprises:
a straight-line input receiving unit, configured to receive, in the case where the at least two target object identifiers are displayed adjacently along a first preset direction, a touch input that slides over the at least two target object identifiers along the first preset direction.
8. The apparatus of claim 6, wherein the second input receiving module comprises:
a curve input receiving unit, configured to receive, in the case where the at least two target object identifiers are displayed along a second preset direction with a first object identifier spaced between them, a touch input that slides over the at least two target object identifiers along the second preset direction following a curve;
wherein the curve of the curved sliding touch input comprises at least one arc, and the arc is used to bypass the first object identifier located between two non-adjacent target object identifiers.
9. The apparatus of claim 6, wherein the object identifier comprises an object avatar, and the second input receiving module comprises:
an avatar receiving unit, configured to receive a second input on at least two target object avatars.
10. The apparatus of claim 6, wherein the second input receiving module comprises:
a specific input receiving unit, configured to receive at least one of a drag input and a slide input on the at least two target object identifiers.
CN202110272343.3A 2021-03-12 2021-03-12 File processing method and device Pending CN112988009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110272343.3A CN112988009A (en) 2021-03-12 2021-03-12 File processing method and device

Publications (1)

Publication Number Publication Date
CN112988009A true CN112988009A (en) 2021-06-18

Family

ID=76335415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110272343.3A Pending CN112988009A (en) 2021-03-12 2021-03-12 File processing method and device

Country Status (1)

Country Link
CN (1) CN112988009A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140018661A (en) * 2012-08-03 2014-02-13 엘지전자 주식회사 Mobile terminal and method for controlling thereof
CN109687981A (en) * 2017-10-19 2019-04-26 阿里巴巴集团控股有限公司 A kind of group's method for building up and device
CN111061574A (en) * 2019-11-27 2020-04-24 维沃移动通信有限公司 Object sharing method and electronic equipment
CN112346629A (en) * 2020-10-13 2021-02-09 北京小米移动软件有限公司 Object selection method, object selection device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210618)