CN111857467B - File processing method and electronic equipment - Google Patents

File processing method and electronic equipment

Info

Publication number
CN111857467B
CN111857467B
Authority
CN
China
Prior art keywords
target
information
social
file
behavior
Prior art date
Legal status
Active
Application number
CN202010617768.9A
Other languages
Chinese (zh)
Other versions
CN111857467A (en)
Inventor
吴香礼 (Wu Xiangli)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010617768.9A priority Critical patent/CN111857467B/en
Publication of CN111857467A publication Critical patent/CN111857467A/en
Application granted granted Critical
Publication of CN111857467B publication Critical patent/CN111857467B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 — using icons
    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 — Selection of displayed objects or displayed text elements
    • G06F 3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a file processing method and an electronic device, belonging to the field of electronic technology, and aims to solve the problem that a user's operations are cumbersome when viewing the social information of a file because that information is dispersed. The method includes the following steps: when the social object of a target social behavior is a target file, acquiring target social information of the target social behavior; and storing the target social information in association with a target identifier, where the target identifier is a preset identifier associated with the target file. The target social information includes at least one of: target chat information, target social-circle social information, target voice conversation information, and environment information of the place where the target social behavior occurs. The file processing method is applied to the electronic device.

Description

File processing method and electronic equipment
Technical Field
The application belongs to the technical field of electronics, and particularly relates to a file processing method and electronic equipment.
Background
Currently, electronic devices have become an important tool for socializing. People communicate on social software through electronic devices, and exchange pictures, videos, text, and the like on their devices in face-to-face conversations.
For any file, such as a multimedia file like a picture or a video, users often send and receive it in chats, show it to friends, or discuss it face to face, sharing their mood and impressions of the file. As a result, social information such as comments and discussions accumulates around the file. Because this social information is dispersed, a user who wants to view it must consult chat records, browse friend circles, rely on memory, and so on.
Therefore, in the process of implementing the present application, the inventor found at least the following problem in the prior art: when a user views the social information of a file, the dispersion of that information makes the operation cumbersome.
Disclosure of Invention
The embodiment of the application aims to provide a file processing method, which can solve the problem that the dispersion of a file's social information makes viewing it cumbersome for the user.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a file processing method, including: when the social object of a target social behavior is a target file, acquiring target social information of the target social behavior; and storing the target social information in association with a target identifier, where the target identifier is a preset identifier associated with the target file. The target social information includes at least one of: target chat information, target social-circle social information, target voice conversation information, and environment information of the place where the target social behavior occurs.
In a second aspect, an embodiment of the present application provides a file processing apparatus, including: an acquisition module, configured to acquire target social information of a target social behavior when the social object of the target social behavior is a target file; and a storage module, configured to store the target social information in association with a target identifier, where the target identifier is a preset identifier associated with the target file. The target social information includes at least one of: target chat information, target social-circle social information, target voice conversation information, and environment information of the place where the target social behavior occurs.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, after the target file is obtained, a target identifier is generated for it to serve as its unique identifier. Target social information associated with the target file, such as the user's chat content about the file or comments on it during social interactions, is then acquired and stored keyed by the target identifier, so that the corresponding target file can be determined from the identifier. Thus, when the user views the target social information of the target file, all of its content can be retrieved in one place according to the target identifier, which solves the problem that dispersed social information makes viewing cumbersome.
Drawings
FIG. 1 is a flowchart of a file processing method according to an embodiment of the present application;
FIG. 2 is a second flowchart of a file processing method according to an embodiment of the present application;
FIG. 3 is a third flowchart of a file processing method according to an embodiment of the present application;
FIG. 4 is a fourth flowchart of a file processing method according to an embodiment of the present application;
FIG. 5 is a fifth flowchart of a file processing method according to an embodiment of the present application;
FIG. 6 is a sixth flowchart of a file processing method according to an embodiment of the present application;
FIG. 7 is a schematic output diagram of a file processing method according to an embodiment of the present application;
FIG. 8 is a seventh flowchart of a file processing method according to an embodiment of the present application;
FIG. 9 is an eighth flowchart of a file processing method according to an embodiment of the present application;
FIG. 10 is a ninth flowchart of a file processing method according to an embodiment of the present application;
FIG. 11 is a tenth flowchart of a file processing method according to an embodiment of the present application;
FIG. 12 is a block diagram of a file processing apparatus according to an embodiment of the present application;
FIG. 13 is a hardware configuration diagram of an electronic device according to an embodiment of the present application;
FIG. 14 is a second hardware configuration diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The following describes the document processing method provided by the embodiment of the present application in detail through a specific embodiment and an application scenario thereof with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a file processing method according to an embodiment of the present application, including:
step 110: and under the condition that the social object of the target social behavior is the target file, acquiring the target social information of the target social behavior.
In this embodiment, a target social behavior is any behavior initiated for social purposes on the device. For example, the target social behavior may be: sending a file in a chat interface; sharing a file in a social-circle interface; or discussing a file face to face based on its display on a target display screen.
A social object is the object a social behavior is directed at, e.g., the file sent in a chat interface, the file shared in a social-circle interface, or the file displayed on the target display screen during face-to-face communication.
The type of the target file includes at least one of video, picture, photo, animation, document, link, program, and the like.
Optionally, before step 110, the following steps are further included:
and under the condition that the target social behavior is detected, acquiring a social object of the target social behavior.
Correspondingly, in case the social object is a target file, step 110 is performed.
The number of detected target social behaviors may be one or more, and the embodiment is not limited. Step 110 is performed whenever the social object of the target social activity is the target file.
The target social information of the target social behavior refers to associated information generated based on the target social behavior, and the target social information includes at least one of the following: target chat information, target social circle social information, target voice conversation information, environment information of a target social behavior place, and a target social principal in the target social behavior.
For example, in a chat, based on social behavior of a sent file, an acquired chat record associated with the file is used as target chat information; for another example, in a social circle, based on the social behavior of the shared file, the obtained comment content, praise list and the like associated with the file are used as the social information of the target social circle; for another example, based on social behavior of the face-to-face communication file, the obtained voice conversation associated with the file is used as target voice conversation information; for another example, based on the social behavior of the face-to-face communication file, the obtained environment of the current place is used as the environment information of the target social behavior place; as another example, based on the chat objects in the above examples, friends commenting on social circles, friends praise in social circles, and both parties of face-to-face communication, the target social subjects are selected.
Optionally, the target file, i.e., the social object of the target social behavior in this step, may be generated by any tool, such as a camera, or by any program, such as a social program.
For example, a photo taken by the user using a camera can be used as the target file in the step; as another example, a video sent by the user in the social software may be the target file in this step.
Optionally, the target social behavior of the target file is monitored in real time to obtain target social information of the target social behavior.
Step 120: and storing the target social information and the target identification in an associated manner.
The target identification is preset target identification associated with the target file.
Optionally, the target identifier is a unique code (ID) generated based on the target file; it may be a combination of numbers and letters, and it is used to identify the target file.
Optionally, the target identifier may be written to an extension area of the target file, so that it is stored in association with the target file.
The target identification of the target file is preset, and then the target social information of the target social behavior is obtained, so that the obtained target social information and the target identification are stored in an associated mode.
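The patent specifies no implementation for generating the identifier or writing it to the file's extension area. Purely as an illustrative sketch, one hypothetical scheme combines a content hash with a random suffix to get the numbers-and-letters code described above, and appends the ID to the file as a tagged trailer standing in for the "extension area"; the `X-TARGET-ID` tag and the ID format are assumptions of this example, not part of the patent.

```python
import hashlib
import uuid

def generate_target_id(file_path: str) -> str:
    """Generate a unique target identifier for a file (hypothetical scheme):
    a 12-hex-digit content hash plus an 8-hex-digit random suffix."""
    with open(file_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()[:12]
    return f"{digest}-{uuid.uuid4().hex[:8]}"

def write_id_to_extension_area(file_path: str, target_id: str) -> None:
    """Append the identifier to the file as a tagged trailer block,
    a stand-in for the 'extension area' mentioned in the text."""
    with open(file_path, "ab") as f:
        f.write(b"\nX-TARGET-ID: " + target_id.encode("utf-8"))
```

A real implementation would more likely use format-specific metadata (e.g. EXIF for photos) rather than a raw trailer, but the associated-storage idea is the same.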
Optionally, the target social information is first associated with the target identifier (for example, by binding the two, though the association method is not limited to binding) and then stored, so that it can be looked up centrally through the target identifier and the user can view it in one place.
Optionally, the target social information is stored in the cloud server based on the target identifier, and the user can obtain the target social information through the cloud server in different devices, so that the purpose of centralized viewing is achieved.
Optionally, the target social information is stored in the local server based on the target identifier, and the user can quickly obtain the target social information through the local server in the local device, so that the purpose of centralized viewing is achieved.
Optionally, based on the target identifier, the target social information may also be stored in other storage spaces to achieve the same purpose of centralized viewing.
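The storage options above all share one idea: every piece of social information is keyed by the target identifier so it can be retrieved in a single lookup. A minimal in-memory sketch (the class name and record shape are this example's assumptions; a real system would use a cloud or local database as the text describes):

```python
from collections import defaultdict

class SocialInfoStore:
    """Minimal stand-in for the cloud/local store: target social
    information is kept keyed by target identifier, so everything
    about one file comes back from one centralized lookup."""

    def __init__(self):
        self._records = defaultdict(list)

    def associate(self, target_id: str, info_type: str, payload: dict) -> None:
        # info_type might be "chat", "social_circle", "voice", or "environment"
        self._records[target_id].append({"type": info_type, **payload})

    def lookup(self, target_id: str) -> list:
        # Centralized retrieval: all social information for one file at once.
        return list(self._records[target_id])
```

With this shape, "centralized viewing" reduces to a single `lookup(target_id)` call regardless of which social behavior produced each record.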
In the embodiment of the application, when the social object of a monitored target social behavior is a target file, the target social information generated by that behavior is acquired: at least one of target chat information generated by chat behaviors, target social-circle social information generated by social-circle behaviors, target voice conversation information generated by face-to-face communication, environment information of the place where the face-to-face communication occurs, and the like. With the target file's unique identifier, i.e., the target identifier, preset, the acquired target social information is stored in association with that identifier. All social information related to the target file can therefore be retrieved through the target identifier, solving the problem that dispersed social information makes viewing cumbersome.
On the basis of fig. 1, fig. 2 shows a flowchart of a file processing method according to another embodiment of the present application, where the target social behavior includes file sending and chat, and correspondingly, before "obtaining target social information of the target social behavior" in step 110, the method further includes:
step 130: a first input to a target file by a user is received.
The first input is the user's target social behavior of sending the target file. Its implementation form may include, but is not limited to, touch actions and air gestures; gesture actions and facial actions; and a single action or multiple actions. When the first input includes multiple actions, the actions may be continuous or intermittent.
Optionally, the first input occurs in a first social program. Further, for the case where the target social behavior includes file sending and chatting, the first social program is preferably a chat program.
In addition, in further embodiments, the target social behavior includes file reception, file collection, and the like, together with chat; correspondingly, the first input is used for the user's target social behavior of receiving or collecting the target file.
Step 140: and responding to the first input, and sending the target file to the target social principal through the first social program.
Optionally, the target social agent is a chat object in the first social program, such as a friend of the user, a group friend of the user, and the like.
Correspondingly, step 110 comprises:
step 1101: and acquiring target chat information associated with the target file in the program interface of the first social program.
The target social information comprises target chat information and a target social agent.
An example application scene: the program interface of the first social program is a chat window, and the target social subject is friend A. The user sends a picture to friend A in the chat window together with the chat message "the person in the picture is really funny", and friend A replies "really funny indeed" after receiving it. Here, sending the picture corresponds to the first input in this embodiment, so the picture is sent to friend A in response to the first input. Chat messages associated with the picture, such as the two above, are then collected from the chat window as the target chat information, and friend A is taken as the target social subject.
Specifically, when a file-sending social behavior of the user is monitored in the chat window, the social object of that sending is detected. If the social object is a target file and a corresponding target identifier is identified in it, a context recognition algorithm is started and the chat is monitored in real time in the chat window, so as to extract all chat information associated with the chat social behavior.
Optionally, through semantic understanding, the chat information associated with the target file is extracted, and the chat information not associated with the target file is eliminated, so that the extracted chat information is used as the target chat information and can be further used as a part of the target social information.
In this embodiment, for a scenario in which a user refers to a target file in a chat between a social program and another user, when the target file corresponds to a target identifier, chat information associated with the target file may be extracted from a chat interface as the target chat information, so that the target chat information is stored in association with the target identifier as a part of the target social information. In this way, the user can collectively view chat information about the target file.
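The patent leaves "semantic understanding" unspecified; in practice it would be a trained language model or similar. As a crude illustrative stand-in only, the filter can be sketched as a heuristic that keeps messages near the file-send event that mention file-related keywords (the window size and keyword mechanism are assumptions of this example):

```python
def extract_related_chat(messages, file_sent_index, keywords, window=5):
    """Crude stand-in for the semantic filter: keep messages within a
    window after the file was sent that mention any given keyword,
    plus the message immediately replying to the file."""
    related = []
    for i, msg in enumerate(messages):
        if i <= file_sent_index:
            continue  # only look at messages after the file was sent
        if i - file_sent_index > window:
            break     # outside the context window
        if any(k in msg["text"] for k in keywords) or i == file_sent_index + 1:
            related.append(msg)
    return related
```

Messages about unrelated topics (e.g. lunch plans mixed into the same chat) are eliminated, matching the text's description of discarding chat information not associated with the target file.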
On the basis of fig. 1, fig. 3 shows a flowchart of a file processing method according to another embodiment of the present application, where the target social behaviors include file sharing and social circles, and correspondingly, before "acquiring target social information of the target social behaviors" in step 110, the method further includes:
step 150: and receiving a second input of the target file from the user.
And the second input is used for the target social behavior of the user for carrying out file sharing on the target file. The implementation form of the second input is not limited to a touch action, an empty action, and the like; not limited to gesture motions, facial motions, etc.; not limited to one action, multiple actions. Also, when the second input includes multiple actions, the multiple actions may be continuous or intermittent.
Optionally, the second input occurs in a second social program. Further, for the case that the target social behavior includes file sharing and social-circle interactions, the second social program is preferably a social-circle program. File sharing is the user's behavior of sharing the file to the social circle; correspondingly, social-circle interactions include likes, comments, shares, and other behaviors on the shared file detected in the social circle.
Step 160: in response to a second input, the target file is shared to a social circle of a second social program.
An example application scene: the user selects a video to publish to the friend circle of a chat application. The video is the target file in this step, and the friend circle is the social circle of the second social program.
Correspondingly, step 110 comprises:
step 1102: and acquiring target social circle social information associated with the target file in the social circle.
Wherein the target social information comprises: the target social circle social information and the target social subjects corresponding to the target social circle social information.
The target social circle social information includes at least one of: like, share, comment.
An example application scene: the user publishes a video to the friend circle; friends in the circle like the video, comment under it, share it to their own friend circles, or forward it to other users. These behaviors are social-circle social behaviors, and the information they generate is social-circle social information.
Optionally, based on the like behavior of the friend, the like list can be used as the social information of the target social circle in the embodiment; based on the comment behavior of the friend, the comment list can be used as the social information of the target social circle in the embodiment; based on the sharing behavior of the friend, the sharing record can be used as the social information of the target social circle in the embodiment.
Further, the friends who have taken the praise action, the comment action, and the share action are taken as the target social subjects in this embodiment.
Optionally, social-circle social behaviors may also be described in text, with the description serving as the target social-circle social information in this embodiment. For example, the target social-circle social information includes entries such as "X likes this" and "X says xxx".
Specifically, when the social behavior of the file sharing of the user is monitored in the social circle, the social object of the file sharing is detected, and when the social object is a target file and a corresponding target identifier is identified in the target file, the social behavior of the target social circle is continuously monitored in the social circle, so that the target social information of the social behavior of the social circle based on the target social circle is further obtained.
Wherein the target social circle social information may be included as part of the target social information.
In this embodiment, for a scenario in which a user shares a target file in a social circle of a social program, under the condition that the target file corresponds to a target identifier, a user of the social program based on the user and friends in the social circle may be obtained, and a social behavior of the social circle of the target file includes at least one of likes, comments, shares, and the like, so that target social information of the social circle generated based on the social behavior of the social circle is used as a part of the target social information and is stored in association with the target identifier. In this way, the user may focus on viewing social information about the social circle of the target file.
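The aggregation described above (a like list, a comment list, a share record, and the friends who acted as target social subjects) can be sketched as follows; the event format is an assumption of this example, since the patent only names the categories of information:

```python
def build_social_circle_info(events):
    """Aggregate raw social-circle events into the per-file record the
    text describes: like list, comment list, share record, and the set
    of target social subjects (the friends who acted)."""
    info = {"likes": [], "comments": [], "shares": [], "subjects": set()}
    for ev in events:
        info["subjects"].add(ev["user"])
        if ev["kind"] == "like":
            info["likes"].append(ev["user"])
        elif ev["kind"] == "comment":
            info["comments"].append({"user": ev["user"], "text": ev["text"]})
        elif ev["kind"] == "share":
            info["shares"].append({"user": ev["user"], "to": ev["to"]})
    return info
```

The resulting record would then be stored in association with the target identifier as one part of the target social information.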
On the basis of FIG. 1, FIG. 4 shows a flowchart of a file processing method according to another embodiment of the present application, where the target social behavior includes a face-to-face conversation about the file. Correspondingly, before "acquiring target social information of the target social behavior" in step 110, the method further includes:
step 170: and under the condition that the target file is displayed on the target display screen, starting a microphone to collect voice information.
The embodiment is suitable for face-to-face social scenes.
Alternatively, in a face-to-face social scenario, the microphone may be turned on by manual input with the target file displayed on the target display screen.
The target display screen is used to display the target file. For example, it may show the display interface of an album program, to present pictures, videos, and the like in that program; or the display interface of a chat program, to present pictures, videos, and the like involved in the chat.
Correspondingly, step 110 comprises:
step 1103: and extracting target voice dialogue information related to the target file from the voice information collected by the microphone.
Step 1104: and acquiring environmental information of the target social behavior place.
The target social information includes the target voice conversation information, the target social subject corresponding to that conversation, and the environment information of the place where the target social behavior occurs; the environment information includes at least one of: weather, geographic location, and time.
In this step, after the microphone is turned on, it collects voice information in the scene in real time, so that the target voice dialogue information associated with the target file can be obtained from the collected voice information; meanwhile, the environment information of the current scene is recorded, including at least one of weather, time, geographic location, and the like.
In an application scenario, for example, a user takes a picture as a target file during a trip and previews it on the target display screen. Meanwhile, the user shares the target file with friends traveling together so that they can enjoy and discuss it. At this time, the microphone can be turned on manually to collect the voice information exchanged among the users, so that the target voice dialogue information associated with the target file can be obtained through voice recognition as a part of the target social information of the target file. Meanwhile, the current environment information of the place, such as geographic location, time, and weather, is obtained through a positioning tool, a time tool, a weather tool, and the like, respectively, as a part of the target social information of the target file. Further, the users participating in the exchange are taken as the target social subjects, that is, also as a part of the target social information of the target file.
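The environment information collection described above can be sketched as follows. Here `locator` and `weather_service` are hypothetical stand-ins for the positioning and weather tools, since this application does not prescribe concrete interfaces:

```python
import datetime

def get_environment_info(locator=None, weather_service=None):
    """Assemble environment information for the place where the social
    behavior occurs: at least one of time, geographic location, weather."""
    info = {"time": datetime.datetime.now().isoformat(timespec="seconds")}
    if locator is not None:
        info["geographic_location"] = locator()
    if weather_service is not None:
        info["weather"] = weather_service()
    return info

# Stubs standing in for the device's positioning and weather tools.
env = get_environment_info(locator=lambda: "college library",
                           weather_service=lambda: "sunny")
```

The resulting record can be attached to the target social information alongside the voice dialogue information.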
Specifically, when a social behavior of a user in a face-to-face conversation is monitored, the social object displayed on the target display screen is detected. If the social object is a target file and a corresponding target identifier is recognized in the target file, the microphone is turned on to collect voice information, and the target voice dialogue information associated with the target file is recognized in the collected voice information. Meanwhile, the environment information of the place where the face-to-face conversation occurs is obtained as the environment information of the place where the target social behavior occurs, so that the target voice dialogue information and this environment information can be taken as a part of the target social information.
In this embodiment, for a scenario in which a user talks face to face about a target file displayed on a target display screen, when the target file corresponds to a target identifier, the voice information exchanged between the user and surrounding users may be collected to extract the target voice dialogue information associated with the target file. Further, in combination with the environment information of the place where the target social behavior occurs, the target voice dialogue information and that environment information are stored in association with the target identifier as a part of the target social information. In this way, the user can view the voice dialogue information about the target file, together with the environment information of the place where the dialogue occurred, in a centralized manner. Combining the voice dialogue information with the environment information can restore the scene at that moment and help the user recall the emotions of the time, making the social information more vivid and humanized.
On the basis of fig. 4, fig. 5 shows a flowchart of a file processing method according to another embodiment of the present application, and the target social information further includes target text dialogue information corresponding to the target voice dialogue information. Correspondingly, after the step 1103, the method further includes:
step 1105: the target voice dialog information is converted into target text dialog information.
In this step, the collected target voice dialog information may be converted into a corresponding text form using a voice recognition technique.
Further, the text obtained from the target voice dialogue information may be converted into a corresponding dialogue form, that is, the target text dialogue information, through semantic understanding.
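The conversion into dialogue form can be sketched minimally. Here voice recognition and speaker separation are assumed to have already produced labeled segments, and the formatting is purely illustrative:

```python
def to_text_dialog(recognized_segments):
    """Convert recognized speech segments into text dialogue form.
    Each segment is a (speaker, text) pair produced upstream by voice
    recognition; this step only arranges them as a readable dialogue."""
    return [f"{speaker}: {text}" for speaker, text in recognized_segments]

dialog = to_text_dialog([("user", "This was taken at the summit."),
                         ("friend", "The view is amazing!")])
```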
Optionally, the target voice conversation information, the environment information of the target social behavior venue, and the target text conversation information in this embodiment are all included in the target social information.
In this embodiment, the voice content in a face-to-face conversation scenario is displayed in the form of a text dialogue, so that the user can quickly browse the dialogue content at a glance when viewing it, improving the user's reading experience.
Optionally, all the obtained social information associated with the target file may be integrated to form the target social information.
On the basis of fig. 1, fig. 6 shows a flowchart of a document processing method according to another embodiment of the present application, and after step 120, the method further includes:
step 180: and receiving a third input of the target file from the user.
Step 190: in response to the third input, outputting the target social information.
The third input is an input made by the user on the target file, such as browsing, opening, playing, or viewing. The implementation form of the third input is not limited to a touch action, a mid-air gesture, and the like; nor to gesture motions, facial motions, and the like; nor to a single action or multiple actions. When the third input includes a plurality of actions, the actions may be continuous or intermittent.
In this step, when the user performs the third input on the target file, the target identifier may be obtained, so that the target social information associated with the target file is output according to the target identifier.
Alternatively, the target social information may be output after simple processing of the acquired target social information.
Alternatively, the target social information may be output in the form of a bullet screen, and so on.
Illustratively, in the interface shown in fig. 7, after the user makes a third input to the target file 1, the target file 1 is displayed in a full screen, and at the same time, the target social information is displayed in a floating manner on the target file 1.
The target social information can be displayed respectively according to the information types contained in the target social information.
For example, the target file is displayed in a full screen mode, the target chat information is displayed in a first area of the target file in a floating mode, the target social circle social information is displayed in a second area of the target file in a floating mode, the target text conversation information and the environment information of the target social behavior occurrence place are displayed in a floating mode in a third area of the target file, and the like.
For another example, according to the information types contained in the target social information, the information of each type is sequentially output in the form of a bullet screen.
For another example, the target social information is displayed in a partitioned or time-sharing manner according to different obtaining ways of the target social information.
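The type-based partitioned output described in the examples above can be sketched as a simple grouping step; the type labels used here are hypothetical:

```python
def group_by_type(target_social_info):
    """Partition target social information by information type so that each
    type can be rendered in its own display area, or emitted type by type
    as a bullet-screen stream."""
    groups = {}
    for entry in target_social_info:
        groups.setdefault(entry["type"], []).append(entry)
    return groups

info = [{"type": "chat", "content": "nice!"},
        {"type": "social_circle", "behavior": "like"},
        {"type": "text_dialog", "content": "user: hello"}]
groups = group_by_type(info)
```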
In this embodiment, when the user makes an input on the target file, such as viewing, browsing, opening, or playing it, all the social information associated with the target file may be output in a centralized manner through the target identifier of the target file, so that the user can view it conveniently. Meanwhile, the social information can be output in a manner suited to the type of the target file, improving the user experience.
On the basis of fig. 6, fig. 8 shows a flowchart of a file processing method according to another embodiment of the present application, where a target file includes a target voice dialog information identifier, and correspondingly, step 180 includes:
step 1801: a third input is received from the user identifying the target voice dialog information.
In this step, a target voice dialogue information identifier, such as a sound playing icon, may be added when the target social information is set, and the user can play the current target voice dialogue information directly by touching the identifier.
Correspondingly, step 190 comprises:
step 1901: and playing the target voice dialogue information indicated by the target voice dialogue information identification.
Referring to fig. 7, in an application scenario, if the target social information is recognized as target voice dialogue information, a sound playing icon 2 is displayed with the information; the user clicks the sound playing icon 2 through the third input, so that the target voice dialogue information can be played directly in voice form.
In this embodiment, a method for outputting target voice dialog information is provided, and a user can intuitively listen to the target voice dialog information through inputting a target voice dialog information identifier to restore a social scene.
On the basis of fig. 6, fig. 9 shows a flowchart of a file processing method according to another embodiment of the present application, where an additional information control is included in a target file, and correspondingly, step 180 includes:
step 1802: a third input to the additional information control by the user is received.
In this step, an additional information control may be added when the target social information is set, and the additional information control is used for playing additional information about the occurrence of the social behavior, such as the environment information of the place where the social behavior occurred, when the user touches it directly.
Correspondingly, step 190 comprises:
step 1902: and playing the environmental information associated with the additional information control.
In an application scenario, if the target social information is recognized as additional information, an additional information control, such as an environment information button, is displayed with the information; by touching it directly, the user can play environment information such as the weather condition, geographic location, and time of the social contact.
The environment information associated with the additional information control is preferably the environment information of the target social behavior place so as to restore the social scene.
Different from other social information that is obtained directly and can keep its original format, the environment information exists in diversified forms; presenting it to the user in a unified playing form allows the user to obtain the environment information associated with the social behavior in a centralized manner.
In this embodiment, a method for outputting additional information in social contact is provided, where the additional information is, for example, environmental information, and a user can intuitively listen to the environmental information associated with the additional information control through inputting the additional information control, so as to help the user restore a social scene, so that the social information is richer.
On the basis of fig. 1, fig. 10 shows a flowchart of a document processing method according to another embodiment of the present application, and after step 120, the method further includes:
step 200: and acquiring a search keyword input by a user.
Alternatively, the user may enter a search keyword in the search interface.
Alternatively, the search keyword may be a social principal, chat key content, social circle key content, communication key content, time, place, weather condition, and the like.
Step 210: and in the case that the search keyword is matched with at least one item of information in the target social information, outputting the target file and the target social information.
For example, using the geographic location "college library" as a search keyword can retrieve all files whose social information contains "college library".
In an application scenario of this embodiment, for example, the search keyword is "college library", and if the target social information includes "college library", the target file may be determined, and further, all contents of the target social information and the target file may be output according to the target identifier of the target file.
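The keyword matching against the target social information can be sketched as a substring scan over the stored entries; the store layout below is hypothetical:

```python
def search_files(keyword, store):
    """Return the target identifiers of all files whose associated social
    information contains the keyword in any field (substring matching)."""
    hits = []
    for target_id, entries in store.items():
        for entry in entries:
            if any(keyword in str(value) for value in entry.values()):
                hits.append(target_id)
                break  # one matching entry is enough for this file
    return hits

store = {"A1B2C3": [{"type": "environment", "geographic_location": "college library"}],
         "D4E5F6": [{"type": "chat", "content": "see you tomorrow"}]}
matches = search_files("college library", store)
```

A real implementation might index by social subject, time, place, and weather rather than scanning linearly, but the matching condition is the same.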
In this embodiment, a function for searching the target file and the target social information is provided; the target file may be filtered according to the social subject, and the time, place, weather, and the like of the social behavior, so as to quickly retrieve the target file and the target social information. Therefore, the user can quickly retrieve the target file and the target social information with only a simple recollection of the social behavior related to the target file, and view the target social information in a centralized manner. This quick retrieval method can simplify user operations in various scenarios.
On the basis of fig. 1, fig. 11 shows a flowchart of a document processing method according to another embodiment of the present application, and before step 120, the method further includes:
step 220: and generating a target identifier based on the target file.
Optionally, the target identification is generated while the target file is being produced.
Optionally, the target identifier is generated when the social behavior of the target file is detected.
In an application scenario, for example, after the user takes a picture, the picture is output and, at the same time, an identifier corresponding to the picture is generated.
In another application scenario, for example, when the user sends a picture in a dialog window, the application generates an identifier corresponding to the picture.
The step 220 may precede the step 110 or follow the step 110, and the embodiment is not limited thereto.
Step 230: and writing the target identification into an expansion area of the target file.
In this embodiment, a unique target identifier may be generated for the target file, and the target identifier is then written into the extension area of the target file to serve as the unique identifier of the target file, which may be used to identify the target file.
Specifically, the extension area is a data block added after the content of the target file, so that the generated target identifier can be written into the data block as the unique identifier of the target file. The target identifier may be a combination of numbers and letters.
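Under the assumption that the extension area is a trailing data block appended after the file content (one possible reading of this embodiment; real container formats also offer their own metadata mechanisms, such as JPEG application segments), the generation and writing of the target identifier can be sketched as:

```python
import secrets
import string

MARKER = b"TARGET_ID:"  # hypothetical tag marking the appended data block

def generate_target_id(length=12):
    """Generate a unique identifier combining numbers and letters."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def write_to_extension_area(file_bytes, target_id):
    """Append a data block carrying the target identifier after the
    existing file content, leaving the content itself untouched."""
    return file_bytes + MARKER + target_id.encode("ascii")

def read_from_extension_area(file_bytes):
    """Recover the target identifier from the appended data block,
    or return None if the file has no such block."""
    pos = file_bytes.rfind(MARKER)
    return None if pos < 0 else file_bytes[pos + len(MARKER):].decode("ascii")

tagged = write_to_extension_area(b"\xff\xd8...image data...", "A1B2C3D4E5F6")
```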
In summary, the present application aims to provide a method for displaying a file and its social information in an associated manner. Specifically, the file can be associated with the social information through the file's unique identifier, so that the social information can be displayed to the user in a centralized manner. On the one hand, software social information generated based on the file can be centrally displayed; on the other hand, face-to-face communication social information generated based on the file can also be centrally displayed. In the latter case, the voice in the social interaction occurring at that moment can be converted into text, and the text can be converted into dialogue content through voice analysis technology to form social information. Thus, when the user or a friend views the file again, all the social information is retrieved through the file's unique identifier and output, for example in a bullet-screen-like manner, for display to the user. In this way, the user can view, while viewing the file, all the social information that occurred because of the file. The method can combine file elements with social information and enrich the forms of file browsing.
It should be noted that, in the file processing method provided in the embodiment of the present application, the execution main body may be a file processing apparatus, or a control module for executing the file processing method in the file processing apparatus. In the embodiment of the present application, a file processing apparatus executes a file processing method as an example, and an apparatus of the file processing method provided in the embodiment of the present application is described.
Fig. 12 is a block diagram showing a document processing apparatus according to another embodiment of the present application, including:
the obtaining module 10 is configured to obtain target social information of a target social behavior when a social object of the target social behavior is a target file;
the storage module 20 is used for storing the target social information and the target identification in an associated manner;
the target identification is preset target identification associated with the target file; the target social information includes at least one of: target chat information, target social circle social information, target voice conversation information and environment information of a target social behavior place.
In the embodiment of the application, when the social object targeted by a monitored target social behavior is a target file, target social information generated based on the target social behavior is obtained. The target social information includes at least one of target chat information generated based on chat social behavior, target social circle social information generated based on the social behavior of a social circle, target voice dialogue information generated based on face-to-face communication, environment information of the place where the target social behavior occurs, and the like. Further, when the unique identifier of the target file, that is, the target identifier, is preset, the obtained target social information is stored in association with the target identifier. Therefore, all the social information associated with the target file, that is, all the social information generated by social behaviors on the target file, can be obtained according to the target identifier, which solves the problem of complex operation caused by scattered social information when a user views the social information of a file.
Optionally, the target social behavior comprises file sending and chatting;
the document processing apparatus further includes:
the first input receiving module is used for receiving first input of a user to the target file;
the first input response module is used for responding to the first input and sending the target file to the target social main body through the first social program;
acquisition module 10, comprising:
the chat information acquisition unit is used for acquiring target chat information associated with the target file in a program interface of the first social program;
wherein the target social information comprises: target chat information, target social agents.
Optionally, the target social behavior includes file sharing and social circle socialization;
the document processing apparatus further includes:
the second input receiving module is used for receiving second input of the user to the target file;
the second input response module is used for responding to second input and sharing the target file to a social circle of a second social program;
acquisition module 10, comprising:
the social circle social information acquisition unit is used for acquiring target social circle social information which is associated with the target file in the social circle;
wherein the target social information comprises: the target social circle social information and a target social agent corresponding to the target social circle social information; the target social circle social information includes at least one of: like, share, comment.
Optionally, the target social behavior comprises a face-to-face conversation related to the document;
the document processing apparatus further includes:
the voice information acquisition module is used for starting a microphone to acquire voice information under the condition that the target file is displayed on the target display screen;
acquisition module 10, comprising:
the dialogue information extraction unit is used for extracting target voice dialogue information related to the target file from the voice information collected by the microphone;
the environment information obtaining unit is used for obtaining environment information of the place where the target social behavior occurs;
wherein the target social information comprises: the target voice conversation information, a target social contact main body corresponding to the target voice conversation information and environment information of a target social contact action place; the environmental information includes at least one of: weather, geographic location, time.
Optionally, the target social information further includes target text conversation information corresponding to the target voice conversation information;
the acquisition module 10 further includes:
and the dialogue information conversion unit is used for converting the target voice dialogue information into target text dialogue information.
Optionally, the document processing apparatus further includes:
the third input receiving module is used for receiving third input of the target file by the user;
and the third input response module is used for responding to the third input and outputting the target social information.
Optionally, a target voice conversation information identifier is included on the target file;
a third input receiving module comprising:
the identification input unit is used for receiving a third input of the target voice conversation information identification by the user;
a third input response module comprising:
and the first playing unit is used for playing the target voice dialogue information indicated by the target voice dialogue information identifier.
Optionally, an additional information control is included on the target file;
a third input receiving module comprising:
the control input unit is used for receiving third input of the additional information control by a user;
a third input response module comprising:
and the second playing unit is used for playing the environment information associated with the additional information control.
Optionally, the document processing apparatus further includes:
the keyword acquisition module is used for acquiring search keywords input by a user;
and the output module is used for outputting the target file and the target social information under the condition that the search keyword is matched with at least one item of information in the target social information.
Optionally, the document processing apparatus further includes:
the identification generation module is used for generating a target identification based on the target file;
and the identification writing module is used for writing the target identification into the expansion area of the target file.
The document processing apparatus in the embodiment of the present application may be an apparatus, and may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The file processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The file processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 11, and is not described here again to avoid repetition.
Optionally, as shown in fig. 13, an electronic device 100 is further provided in this embodiment of the present application, and includes a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and executable on the processor 101, where the program or the instruction is executed by the processor 101 to implement each process of the foregoing file processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 1010 is configured to, in a case that a social object of a target social behavior is a target file, obtain target social information of the target social behavior;
a memory 1009 for storing the target social information and the target identification in association.
In the embodiment of the application, when the social object targeted by a monitored target social behavior is a target file, target social information generated based on the target social behavior is obtained. The target social information includes at least one of target chat information generated based on chat social behavior, target social circle social information generated based on the social behavior of a social circle, target voice dialogue information generated based on face-to-face communication, environment information of the place where the target social behavior occurs, and the like. Further, when the unique identifier of the target file, that is, the target identifier, is preset, the obtained target social information is stored in association with the target identifier. Therefore, all the social information associated with the target file, that is, all the social information generated by social behaviors on the target file, can be obtained according to the target identifier, which solves the problem of complex operation caused by scattered social information when a user views the social information of a file.
Optionally, the target social behavior comprises file sending and chatting;
before the target social information of the target social behavior is obtained, a user input unit 1007 is used for receiving a first input of the target file by a user;
a processor 1010, further configured to send, in response to the first input, the target file to a target social principal through a first social program; and acquiring target chat information associated with the target file in the program interface of the first social program. Wherein the target social information comprises: the target chat information, the target social agent.
Optionally, the target social behavior includes file sharing and social circle socialization;
before the target social information of the target social behavior is obtained, the user input unit 1007 is further configured to receive a second input of the target file by the user;
a processor 1010, further configured to share the target file to a social circle of a second social program in response to the second input; and acquiring target social circle social information associated with the target file in the social circle. Wherein the target social information comprises: the target social circle social information and a target social principal corresponding to the target social circle social information; the target social circle social information comprises at least one of: like, share, comment.
Optionally, the target social behavior comprises a face-to-face conversation related to a document;
before the target social information of the target social behavior is obtained, an input unit 1004 is used for starting a microphone to collect voice information under the condition that the target file is displayed on a target display screen;
the processor 1010 is further configured to extract target voice dialog information associated with the target file from the voice information collected by the microphone; and acquiring the environmental information of the target social behavior place. Wherein the target social information comprises: the target voice conversation information, a target social main body corresponding to the target voice conversation information and environment information of a target social behavior occurrence place; the environmental information includes at least one of: weather, geographic location, time.
Optionally, the target social information further includes target text conversation information corresponding to the target voice conversation information;
after extracting the target voice dialog information associated with the target file from the voice information collected by the microphone, the processor 1010 is further configured to convert the target voice dialog information into target text dialog information.
Optionally, after the target social information and the target identifier are stored in association, the user input unit 1007 is further configured to receive a third input of the target file from the user;
a processor 1010 further configured to output the target social information in response to the third input.
Optionally, a target voice conversation information identifier is included on the target file;
a user input unit 1007, configured to receive a third input of the target voice dialog information identifier by the user;
and the audio output unit 1003 is configured to play the target voice dialog information indicated by the target voice dialog information identifier.
Optionally, an additional information control is included on the target file;
a user input unit 1007, configured to receive a third input to the additional information control by the user;
the audio output unit 1003 is further configured to play the environment information associated with the additional information control.
Optionally, after the target social information and the target identifier are stored in association, the processor 1010 is further configured to obtain a search keyword input by a user; and outputting the target file and the target social information under the condition that the search keyword is matched with at least one item of information in the target social information.
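The search behavior above — a hit when the keyword matches at least one item of the stored social information — can be sketched as follows. The store layout and all identifiers here are illustrative assumptions, not the patent's implementation.

```python
def matches(keyword, social_info):
    # A match against any one item of the social information suffices.
    return any(keyword in str(value) for value in social_info.values())

def search(keyword, store):
    """store maps target_id -> {'file': path, 'social': {...}} (hypothetical layout).

    Returns the (target_id, record) pairs whose social information matches,
    so both the target file and its social information can be output.
    """
    return [(tid, rec) for tid, rec in store.items()
            if matches(keyword, rec["social"])]

store = {
    "id-1": {"file": "trip.jpg",
             "social": {"chat": "great sunset", "location": "Hangzhou"}},
    "id-2": {"file": "notes.txt",
             "social": {"chat": "meeting agenda", "location": "Beijing"}},
}
hits = search("Hangzhou", store)
```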
Optionally, before the associating and storing the target social information and the target identifier, the processor 1010 is further configured to generate a target identifier based on the target file; and writing the target identification into an expansion area of the target file.
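One plausible realization of generating a target identifier from the file and writing it into an expansion area is sketched below: hash the file content for a unique identifier, then append a tagged trailer after the file body (many container formats, e.g. JPEG after the EOI marker, tolerate trailing bytes). The patent does not mandate a hash or this trailer format; `MAGIC` and the function names are hypothetical, and production code would use a format-specific metadata segment instead.

```python
import hashlib

def generate_target_id(path):
    # Hash the file's content so the identifier is unique to this file.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

MAGIC = b"\x00TARGET_ID:"  # hypothetical marker delimiting the "expansion area"

def write_target_id(path, target_id):
    # Append the identifier after the file body as a tagged trailer.
    with open(path, "ab") as f:
        f.write(MAGIC + target_id.encode("ascii"))

def read_target_id(path):
    with open(path, "rb") as f:
        data = f.read()
    i = data.rfind(MAGIC)
    return data[i + len(MAGIC):].decode("ascii") if i != -1 else None
```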
The purpose of the application is to provide a method for displaying a file and its social information in association. Specifically, the file can be associated with social information through the file's unique identifier, so that the information can be displayed to the user in a centralized manner. On one hand, social information generated in software based on the file can be displayed centrally; on the other hand, social information generated in face-to-face communication about the file can also be displayed centrally. In the latter case, the speech occurring during the social interaction can be converted into text through speech analysis, and the resulting conversation content forms the social information. In this way, when the user or a friend views the file again, all of the social information is retrieved through the file's unique identifier and output, for example in a bullet-screen-like manner. The user can thus view, while viewing the file, all of the social information that has occurred around it. The method combines file elements with social information and enriches the forms of file browsing.
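The association-and-playback idea described above can be sketched in a few lines: a store keyed by the file's unique identifier accumulates social-information entries, which are then replayed one by one as a bullet-screen-style feed. The store and function names are illustrative, not from the patent.

```python
from collections import defaultdict

# target_id -> list of social-information entries (hypothetical in-memory store)
social_store = defaultdict(list)

def record_social_info(target_id, entry):
    """Associate one piece of social information with the file's unique identifier."""
    social_store[target_id].append(entry)

def bullet_feed(target_id):
    """Yield stored entries one at a time, as a bullet-screen-style display would."""
    for entry in social_store[target_id]:
        yield f"{entry['who']}: {entry['text']}"

record_social_info("id-1", {"who": "Alice", "text": "love this shot"})
record_social_info("id-1", {"who": "Bob", "text": "where was this?"})
feed = list(bullet_feed("id-1"))
```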
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042, and the graphics processing unit 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above file processing method embodiment and achieve the same technical effects; details are not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above file processing method embodiment and achieve the same technical effects; details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that includes the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A file processing method, comprising:
under the condition that a social object of a target social behavior is a target file, acquiring target social information of the target social behavior;
storing the target social information and the target identification in an associated manner;
the target identification is preset target identification associated with the target file; the target social information includes at least one of: target chat information, target social circle social information, target voice conversation information and environment information of a target social behavior place;
the target social behavior comprises a face-to-face conversation related to a file;
before the obtaining of the target social information of the target social behavior, the method further includes:
under the condition that the target file is displayed on a target display screen, starting a microphone to collect voice information;
the obtaining of the target social information of the target social behavior includes:
extracting target voice dialogue information related to the target file from the voice information collected by the microphone;
acquiring environmental information of the target social behavior place;
wherein the target social information comprises: the target voice conversation information, a target social main body corresponding to the target voice conversation information and environment information of a target social behavior occurrence place; the environmental information includes at least one of: weather, geographic location, time.
2. The method of claim 1, wherein the target social behavior comprises file sending and chat;
before the obtaining of the target social information of the target social behavior, the method further includes:
receiving a first input of the target file by a user;
in response to the first input, sending the target file to a target social principal through a first social program;
the obtaining of the target social information of the target social behavior includes:
acquiring target chat information associated with the target file in a program interface of the first social program;
wherein the target social information comprises: the target chat information, the target social agent.
3. The method of claim 1, wherein the target social behavior comprises file sharing and social circle socialization;
before the obtaining of the target social information of the target social behavior, the method further includes:
receiving a second input of the target file by the user;
in response to the second input, sharing the target file to a social circle of a second social program;
the obtaining of the target social information of the target social behavior includes:
obtaining target social circle social information associated with the target file in the social circle;
wherein the target social information comprises: the target social circle social information and a target social principal corresponding to the target social circle social information; the target social circle social information comprises at least one of: like, share, comment.
4. The method of claim 1, wherein the target social information further comprises target text dialog information corresponding to the target voice dialog information;
after the target voice dialogue information related to the target file in the voice information collected by the microphone is extracted, the method further comprises the following steps:
and converting the target voice dialogue information into target text dialogue information.
5. The method of claim 1, wherein after the target social information and the target identifier are stored in association, the method further comprises:
receiving a third input of the target file by the user;
outputting the target social information in response to the third input.
6. The method of claim 5, wherein the target file includes thereon a target voice dialog information identifier;
the receiving of the third input of the target file by the user comprises:
receiving a third input of the target voice conversation information identification by the user;
the outputting the target social information comprises:
and playing the target voice dialogue information indicated by the target voice dialogue information identification.
7. The method of claim 5, wherein the target file includes an additional information control thereon;
the receiving of the third input of the target file by the user comprises:
receiving a third input of the additional information control by the user;
the outputting the target social information comprises:
and playing the environment information associated with the additional information control.
8. The method of claim 1, wherein after the target social information and the target identifier are stored in association, the method further comprises:
acquiring a search keyword input by a user;
and outputting the target file and the target social information under the condition that the search keyword is matched with at least one item of information in the target social information.
9. The method of claim 1, wherein before the target social information and the target identifier are stored in association, the method further comprises:
generating a target identifier based on the target file;
and writing the target identification into an expansion area of the target file.
10. A document processing apparatus, characterized by comprising:
the acquisition module is used for acquiring target social information of the target social behavior under the condition that a social object of the target social behavior is a target file; the target social behavior comprises a face-to-face conversation related to a file;
the storage module is used for storing the target social information and the target identification in an associated manner;
the target identification is preset target identification associated with the target file; the target social information includes at least one of: target chat information, target social circle social information, target voice conversation information and environment information of a target social behavior place;
the input unit is used for starting a microphone to collect voice information under the condition that the target file is displayed on the target display screen;
the processor is used for extracting target voice dialogue information related to the target file from the voice information collected by the microphone; acquiring environmental information of the target social behavior place; wherein the target social information comprises: the target voice conversation information, a target social main body corresponding to the target voice conversation information and environment information of a target social behavior occurrence place; the environmental information includes at least one of: weather, geographic location, time.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the file processing method according to any one of claims 1 to 9.
CN202010617768.9A 2020-06-30 2020-06-30 File processing method and electronic equipment Active CN111857467B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010617768.9A CN111857467B (en) 2020-06-30 2020-06-30 File processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010617768.9A CN111857467B (en) 2020-06-30 2020-06-30 File processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111857467A CN111857467A (en) 2020-10-30
CN111857467B true CN111857467B (en) 2022-02-15

Family

ID=72989257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010617768.9A Active CN111857467B (en) 2020-06-30 2020-06-30 File processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111857467B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8738394B2 (en) * 2007-11-08 2014-05-27 Eric E. Kuo Clinical data file
CN103812909A (en) * 2012-11-14 2014-05-21 财团法人资讯工业策进会 Method and system for providing file-associated community interaction under cloud storage service
CN104348848B (en) * 2013-07-25 2018-11-13 北京三星通信技术研究有限公司 Manage method, terminal device and the server of picture
CN104125483A (en) * 2014-07-07 2014-10-29 乐视网信息技术(北京)股份有限公司 Audio comment information generating method and device and audio comment playing method and device
CN105608100A (en) * 2015-08-31 2016-05-25 南京酷派软件技术有限公司 Information extraction method and information extraction device
CN105373587A (en) * 2015-10-14 2016-03-02 深圳市金立通信设备有限公司 Picture display method and terminal
CN107911740A (en) * 2017-09-30 2018-04-13 广东南都全媒体网络科技有限公司 A kind of method and device of the sound collecting based on video playing

Also Published As

Publication number Publication date
CN111857467A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111884908B (en) Contact person identification display method and device and electronic equipment
CN112231463A (en) Session display method and device, computer equipment and storage medium
CN111865760B (en) Message display method and device
CN112947807A (en) Display method and device and electronic equipment
CN112533072A (en) Image sending method and device and electronic equipment
CN111523053A (en) Information flow processing method and device, computer equipment and storage medium
CN114827068A (en) Message sending method and device, electronic equipment and readable storage medium
CN111930281B (en) Reminding message creating method and device and electronic equipment
CN111666498B (en) Friend recommendation method based on interaction information, related device and storage medium
CN113253903A (en) Operation method and operation device
CN111897474A (en) File processing method and electronic equipment
CN112181351A (en) Voice input method and device and electronic equipment
WO2023011300A1 (en) Method and apparatus for recording facial expression of video viewer
CN111857467B (en) File processing method and electronic equipment
CN114374663B (en) Message processing method and message processing device
CN113312662B (en) Message processing method and device and electronic equipment
CN115695355A (en) Data sharing method and device, electronic equipment and medium
CN112383666B (en) Content sending method and device and electronic equipment
CN114221923A (en) Message processing method and device and electronic equipment
CN113688260A (en) Video recommendation method and device
CN113268961A (en) Travel note generation method and device
CN114629869B (en) Information generation method, device, electronic equipment and storage medium
CN112860147B (en) Electronic equipment operation method and device and electronic equipment
CN115361588B (en) Object display method and device, electronic equipment and storage medium
CN115883712A (en) Information display method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant