CN113313789B - File generation method and device and electronic equipment - Google Patents

File generation method and device and electronic equipment

Info

Publication number
CN113313789B
CN113313789B (application CN202110594036.7A)
Authority
CN
China
Prior art keywords
target
content
input
file
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110594036.7A
Other languages
Chinese (zh)
Other versions
CN113313789A (en)
Inventor
高菊
王运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110594036.7A
Publication of CN113313789A
Application granted
Publication of CN113313789B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/10: Text processing
    • G06F40/166: Editing, e.g. inserting or deleting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L51/10: Multimedia information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a file generation method, a file generation apparatus, and an electronic device, belonging to the field of electronic technology. The file generation method includes the following steps: receiving a first input; in response to the first input, determining a target social object and a target topic; acquiring entered target biometric information of the target social object; and acquiring, according to the target biometric information, target content matching the target biometric information from environmental content collected within a preset period, where the environmental content includes video content and picture content in the environment collected by a camera of the electronic device and audio content in the environment collected by a sound pickup of the electronic device; and generating, according to the target content, a target file associated with the target social object with the target topic as the central content, where the target file includes at least one of text content, audio content, video content, and picture content.

Description

File generation method and device and electronic equipment
Technical Field
The application belongs to the field of electronic technology, and in particular relates to a file generation method, a file generation apparatus, and an electronic device.
Background
At present, with the arrival of a fast-paced lifestyle, occasions for face-to-face communication are gradually decreasing; instead, electronic devices have become people's main communication tools. When people communicate through electronic devices, they usually edit some text to express emotion, thereby improving the relationship between people.
For example, after a friend helps with a task, a user may wish to write a few words of thanks rather than a simple "thank you"; for another example, after being warmly hosted by a friend, a user may wish to write down some feelings rather than a simple remark; as another example, after cooperating with a partner and finding the partner excellent, a user may wish to write some words of praise rather than a simple compliment.
Therefore, in the prior art, a user who wants to express emotion well needs a certain expressive ability and also needs to spend time editing text, so the user's communication efficiency is low.
Disclosure of Invention
The embodiments of the present application aim to provide a file generation method that can solve the problem in the prior art that a user who wants to express emotion well needs a certain expressive ability and must spend time editing text, resulting in low communication efficiency.
In a first aspect, an embodiment of the present application provides a file generation method, where the method includes: receiving a first input; in response to the first input, determining a target social object and a target topic; acquiring entered target biometric information of the target social object; acquiring, according to the target biometric information, target content matching the target biometric information from environmental content collected within a preset period, where the environmental content includes video content and picture content in the environment collected by a camera of the electronic device and audio content in the environment collected by a sound pickup of the electronic device; and generating, according to the target content, a target file associated with the target social object with the target topic as the central content, where the target file includes at least one of text content, audio content, video content, and picture content.
In a second aspect, an embodiment of the present application provides a file generation apparatus, including: a first input receiving module, configured to receive a first input; a first input response module, configured to determine a target social object and a target topic in response to the first input; a first acquisition module, configured to acquire entered target biometric information of the target social object; a matching module, configured to acquire, according to the target biometric information, target content matching the target biometric information from environmental content collected within a preset period, where the environmental content includes video content and picture content in the environment collected by a camera of the electronic device and audio content in the environment collected by a sound pickup of the electronic device; and a generation module, configured to generate, according to the target content, a target file associated with the target social object with the target topic as the central content, where the target file includes at least one of text content, audio content, video content, and picture content.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
Thus, in the embodiments of the present application, the user can determine the target social object and the target topic through the first input, so that the electronic device acquires the entered target biometric information of the target social object and then matches that information against the environmental content collected by the electronic device within the preset period. The environmental content collected by the electronic device includes video content and picture content in the environment collected by the device's camera and audio content in the environment collected by the device's sound pickup. Thus, based on the voice features and facial features in the target biometric information, the electronic device can match, as the target content, the speech of the target social object, images of things related to the target social object, images of places the target social object has been, and the like, from the collected environmental content. Further, with the target content as material, the target topic as the central idea, and the target social object as the core character, a target file is generated whose content includes at least one of text, voice, dynamic images, photos, and the like. In this way, the embodiments of the present application can intelligently edit the target file according to the user's activities, the emotion the user wants to express, and the object of that emotion, helping the user express emotion better and improving the user's efficiency in communication scenarios.
Drawings
FIG. 1 is a flow chart of a file generation method of an embodiment of the present application;
FIG. 2 is a block diagram of a file generating apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. In addition, in the description and claims, "and/or" indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The file generating method provided by the embodiment of the application is described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to FIG. 1, a flowchart of a file generation method according to an embodiment of the present application is shown. The method is applied to an electronic device and includes:
step S1: a first input is received.
Step S2: in response to the first input, a target social object and a target topic are determined.
The first input includes a touch input performed by the user on the screen, including but not limited to a click, slide, or drag input. The first input may also be a first operation, where the first operation includes an air gesture operation of the user, including but not limited to gesture and facial operations, and also includes the user's operation on a physical key of the device, including but not limited to pressing. Moreover, the first input may include one or more inputs, where multiple inputs may be continuous or intermittent.
The file generation method of the present embodiment may be implemented in the first application program.
Correspondingly, the first input in this step is for the user to determine a target social object and a target topic in the first application.
For example, the user may select a target social object identification and a target topic identification through a first input.
In this application, an identifier is text, a symbol, an image, an interface, a time, or the like used to indicate information; a control or another container may serve as a carrier for displaying the information. Identifiers include, but are not limited to, text identifiers, symbol identifiers, and image identifiers.
Specifically, the target social object identifier indicates the name, avatar, account, or the like of the target social object; the target topic identifier indicates the name, icon, or the like of the target topic.
In this embodiment, the target social object is a user in the first application.
Alternatively, the user may log into the first application and add friends.
Alternatively, the number of target social objects may be one or more.
Optionally, the target social object includes the user itself, and may also include the user's friends.
Illustratively, the user may select the target social object identification in a buddy list of the first application.
In this embodiment, the target topic is a topic provided in the first application for generating a file.
Alternatively, the number of target topics may be one or more.
Alternatively, each topic may be represented by a set of keywords or a sentence.
Optionally, the first application program may provide a plurality of automatically generated topic identifiers, and may also provide a custom channel through which the user adds topic identifiers.
Alternatively, the content of a topic may be related to behavior. For example: thank you, miss you, sorry, etc.
Alternatively, the content of a topic may be related to mood. For example: happy, satisfied, etc.
Optionally, the content of a topic may also be related to a specific event. For example: a trip to XX city, being a guest at XX's home, etc.
Optionally, the content of a topic may also combine a specific event with the user's emotion, behavior, and the like. For example: thanks for the warm hospitality at XX's home.
Alternatively, a topic may also relate to other kinds of information, which is not limited here.
In addition, this embodiment is not limited to the user determining the target social object and the target topic through a first input on identifiers; the user may also determine them through other feasible input modes.
For example, the user may edit specific content through the first input, so that the edited content is determined as the target topic in this embodiment.
For example, the user may edit a specific account through the first input, so that the social object corresponding to the edited account is determined as the target social object in this embodiment.
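As a sketch, resolving the first input into a target social object and a target topic might look as follows. The structure and field names (`selected_object_ids`, `custom_topic_text`, and so on) are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstInput:
    # Identifiers the user tapped in the friend list of the first application.
    selected_object_ids: list
    # A predefined topic identifier, if the user picked one.
    selected_topic: Optional[str] = None
    # Free-form topic content, if the user edited a custom topic.
    custom_topic_text: Optional[str] = None

def resolve_first_input(first_input: FirstInput, friends: dict):
    """Map the first input onto (target social objects, target topic)."""
    objects = [friends[i] for i in first_input.selected_object_ids
               if i in friends]
    # A custom-edited topic takes precedence over a predefined identifier.
    topic = first_input.custom_topic_text or first_input.selected_topic
    return objects, topic
```

Either a tap on a predefined topic identifier or a custom edit yields the same `(objects, topic)` pair for the later steps.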
Step S3: target biometric information of the entered target social object is obtained.
In this step, based on the user determination by the first input: and obtaining target biological characteristic information by the target social object which needs to express emotion and emotion which needs to be expressed and is related to the target theme.
For example, in one scenario, when the target social object includes the user and the user's friends, the file may be generated with the user as the first person and the user's friends as the second person. Correspondingly, the target biometric information includes first biometric information of the user's friends and second biometric information of the user.
For example, in yet another scenario, when the target social object includes the user and the user's friends, the file may be generated with both the user and the user's friends in the first person. For example, the user and the user's friends are collectively referred to as "we". Correspondingly, the target biometric information includes first biometric information of the user's friends and second biometric information of the user.
In another scenario, when the target social object does not include the user himself or herself, the file is generated with the user's friends in the third person. For example, a user's friend is referred to as "he/she" or "they". Correspondingly, the target biometric information includes first biometric information of the user's friends.
This embodiment is not limited to these scenarios. When generating the file, which scenario applies may depend on the selected target topic content.
For example, when the target social object includes the user and the user's friend, and the topic content includes "thank you", a thank-you article may be generated in the tone of the user thanking the friend.
Optionally, the biometric information includes facial feature information and voice feature information.
Alternatively, the user may enter in advance in the first application the first biometric information of the user's friends and the second biometric information of the user themselves.
For the first biometric information, the user may upload photos and voice messages of the user's friends; for the second biometric information, the user may grant the first application access to the facial recognition information and voice command information entered by the user.
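A minimal sketch of this entry step might keep, per social object, the sources from which facial and voice features are later extracted. The dictionary layout and function name here are assumptions for illustration only.

```python
def enter_biometrics(store: dict, object_id: str,
                     face_photos=None, voice_samples=None) -> dict:
    """Record facial and voice feature sources for one social object.

    `store` maps a social-object identifier to its entered material;
    repeated calls accumulate additional photos or voice samples.
    """
    record = store.setdefault(object_id, {"face": [], "voice": []})
    record["face"].extend(face_photos or [])
    record["voice"].extend(voice_samples or [])
    return record
```

Uploaded friend photos and the user's own granted face/voice data would both flow through the same entry path.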
Step S4: and acquiring target content matched with the target biological characteristic information from the environment content acquired in the preset time period according to the target biological characteristic information.
The environment content comprises video content and picture content in the environment collected by the camera of the electronic equipment and audio content in the environment collected by the pick-up of the electronic equipment.
Before this step, the electronic device may collect the environmental content in real time.
The collected environmental content can reflect the user's daily activities.
In this embodiment, the electronic device may collect the environmental content in real time and store it for the file generation of this embodiment. To avoid storing too much content on the electronic device, only the environmental content within a specified period may be stored.
For example, the specified period is the most recent month. Further, within that month, it is also possible to specify on which days to collect and during which time periods to collect each day.
In this way, storing too much content on the electronic device is avoided, the environmental content can be collected according to the user's needs, and collection during unnecessary time periods, which could capture the user's private behavior, is avoided.
Specifically, the electronic device may collect what the user says, what the people around say, the external environment in which the electronic device is located, and so on.
Optionally, the preset period in this step is a specified period.
Optionally, the user may manually select a certain period within the specified period as the preset period, according to the materials required for the target file to be generated.
For example, the user traveled accompanied by a friend from 2021.03.01 to 2021.03.08 and, to thank the friend for the company, may select 2021.03.01 to 2021.03.08 as the preset period.
Therefore, in this step, the target content can be matched according to the target biometric information from the environmental content stored in the background for the preset period.
For example, based on the voice feature information, the speech of the target social object is matched; based on the facial feature information, photos and videos of the places the target social object has visited and of the things the target social object has done are matched.
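This matching step can be sketched as a filter over collected items. A real implementation would run face and voiceprint models; here each item is assumed to already carry tags for the persons recognized in it, purely to keep the example self-contained.

```python
def match_target_content(environment, target_ids, start, end):
    """Select environmental items inside [start, end] in which any target
    social object was recognized (by face for video/picture items, by
    voiceprint for audio items; both are modeled as 'persons' tags here).
    """
    targets = set(target_ids)
    return [item for item in environment
            if start <= item["time"] <= end
            and targets & set(item.get("persons", []))]
```

The time bounds correspond to the user-selected preset period, so items outside the period never become target content.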
Step S5: and generating a target file associated with the target social object by taking the target theme as the center content according to the target content.
The target file comprises at least one of text content, audio content, video content and picture content.
Optionally, before the target file is generated, the user may select a type for the target file through a manual operation, such as at least one of an article type, an album type, and an audio type. Further, the user may also select, through a manual operation, the content types included in the target file, such as at least one of a text type, an audio type, a video type, and a picture type. Further, when the target file includes text-type content, the user can also customize a specific word count through a manual operation.
For reference, the target file may be an article of 1500-2000 words that also includes some illustrations, dynamic images, voices, and the like.
The central idea of the entire target file is to express emotion related to the target topic with the target social object as the core character.
The target file may be generated with the target content as material and with templates, writing styles, and the like related to the target topic in big data as the overall framework.
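Assembly could then be sketched as below; the simple template string stands in for the template and writing-style framework the description mentions, and every name is illustrative.

```python
def generate_target_file(topic, core_characters, materials, max_words=2000):
    """Compose a target file around the topic, with the target social
    objects as core characters and the matched content as material."""
    lines = [f"To {', '.join(core_characters)} - {topic}"]
    for m in materials:
        if m["type"] == "text":
            lines.append(m["content"])
        else:
            # Non-text materials are embedded as media references.
            lines.append(f"[{m['type']}: {m['content']}]")
    # Roughly respect a user-chosen word budget.
    words = "\n".join(lines).split()
    return " ".join(words[:max_words])
```

The `max_words` parameter corresponds to the user-customized word count mentioned above; the mixed text/media output corresponds to a file containing at least one of text, audio, video, and picture content.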
Thus, in the embodiments of the present application, the user can determine the target social object and the target topic through the first input, so that the electronic device acquires the entered target biometric information of the target social object and then matches that information against the environmental content collected by the electronic device within the preset period. The environmental content collected by the electronic device includes video content and picture content in the environment collected by the device's camera and audio content in the environment collected by the device's sound pickup. Thus, based on the voice features and facial features in the target biometric information, the electronic device can match, as the target content, the speech of the target social object, images of things related to the target social object, images of places the target social object has been, and the like, from the collected environmental content. Further, with the target content as material, the target topic as the central idea, and the target social object as the core character, a target file is generated whose content includes at least one of text, voice, dynamic images, photos, and the like. In this way, the embodiments of the present application can intelligently edit the target file according to the user's activities, the emotion the user wants to express, and the object of that emotion, helping the user express emotion better and improving the user's efficiency in communication scenarios.
In the flow of the file generating method according to another embodiment of the present application, before step S4, the method further includes:
step A1: the method comprises the steps of collecting audio content and video content in the environment within a preset period, and collecting picture content in the environment at a preset moment within the preset period.
Before step A1, the user may grant the first application program access rights to functions such as recording, video, photographing, etc., so that the electronic device may automatically collect the environmental content.
In the first aspect, the electronic device can record sounds in the environment in real time through a microphone and other structures; sounds in the environment include the user himself, what others say.
In the second aspect, the electronic device can acquire scenes in the environment in real time through structures such as a camera and the like so as to form videos; scenes in an environment include where, what the user is going through.
In a third aspect, the electronic device may perform timing acquisition of a scene in the environment to form a picture through a camera or the like.
Optionally, a photo is taken at intervals of a preset duration according to a user setting or a device setting, for example every 5 minutes.
Optionally, based on parameters such as the sharpness of the images collected by the camera in real time, a photo is taken automatically when those parameters reach a preset standard, so as to ensure photo quality.
The moments of photographing correspond to the preset moments in this embodiment.
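The two capture triggers described above, a fixed interval and a sharpness threshold, can be sketched together as follows; the parameter names and threshold values are illustrative assumptions.

```python
def should_capture(now_s, last_capture_s, interval_s,
                   frame_sharpness, sharpness_threshold):
    """Take a photo when the preset interval has elapsed, or early when
    the current frame's sharpness already meets the preset standard."""
    interval_elapsed = (now_s - last_capture_s) >= interval_s
    sharp_enough = frame_sharpness >= sharpness_threshold
    return interval_elapsed or sharp_enough
```

With a 300-second interval, the timer alone fires every 5 minutes, while a sufficiently sharp frame causes an earlier capture.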
In this embodiment, by automatically collecting audio, video, pictures, and the like, files related to the social object selected by the user can be generated automatically according to the topic selected by the user, helping the user express emotion quickly and conveniently.
On the basis of this embodiment, the user may grant the first application access to the positioning function, so that the electronic device can also automatically collect positioning information. In this way, the specific locations of the user's activities can be collected automatically along with the user's daily activities, so that the locations of the target social object are reflected in the target file, further enriching its content and making the expressed emotion more genuine.
In the flow of the file generating method according to another embodiment of the present application, before step S5, at least any one of the following is further included:
step B1: and acquiring target social content matched with the target biological characteristic information in the social application program according to the target biological characteristic information.
Step B2: a target image matching the target biometric information is acquired in an image management program.
In this step, on the one hand, target social content about the target social object in the social application may be obtained.
The target social content comprises related pictures, videos, words, voices and the like.
For example, a picture published in the user's social feed that includes the target social object may be used as target social content; further, the text accompanying the picture may also be used as target social content.
On the other hand, a target image of the target social object may be obtained in the image management program.
The target image includes dynamic images, such as videos, and static images, such as photos.
For example, in the image management program, an image including the target social object is taken as the target image.
Optionally, the image management program comprises an album.
Illustratively, in one scenario, the target social object includes the user and the user's friend; photos of the two may be obtained from the social application, and the local album may also contain photos and videos shared by the two, such as childhood photos and graduation photos.
Correspondingly, step S5 includes:
Substep B3: and generating a target file associated with the target social object by combining at least one of the target social content and the target image and taking the target subject as the center content according to the target content.
In the step, based on daily life behaviors collected by the electronic equipment at ordinary times as material resources, related materials obtained from other application programs of the electronic equipment are further combined to expand the material resources, so that materials related to a target theme can be sorted out from the material resources to generate a target file.
For example, after the user tours with friends of the user, what the user sees during the current tour is used as a material resource, then the current photo of the two people is acquired from the electronic equipment, the material resource is enlarged, and finally the user can learn friends together from recall of the past childhood of the two people, and the user can gather the tour until the user tours, and can also include a thinking about the beautiful life in the future, so that a thank you letter article in the range of 1500-2000 words is generated.
In this embodiment, while the environmental content, that is, the daily life behavior of the user, is used as the material of the target file, the content in other application programs in the electronic device may be used as supplementary material of the target file, so that the generated target file is more real and better fits the user's emotion.
In a flow of a file generating method according to another embodiment of the present application, the first input includes at least:
A sub-input of an identification of the first topic; and
Sub-input of an identification of a target topic in a directory to which the identification of the first topic corresponds;
The catalog corresponding to the identification of the first theme at least comprises the identification of the target theme.
In this embodiment, in order to facilitate finer selection by the user, a plurality of primary topics, such as "thank you", "praise", and "blessing", are first divided, while custom input of primary topics is also supported.
The first theme in this embodiment is included in the plurality of primary themes.
For example, through one sub-input in the first input, the user selects the "thank you" topic identification from among the plurality of displayed primary topic identifications, that is, selects the "thank you" topic as the first topic.
Further, the catalog corresponding to the "thank you" theme identifier includes a plurality of secondary themes, for example, "thank you for enthusiasm during travel" and "thank you for work help", while custom input of secondary topics is also supported.
The plurality of secondary topics include the target topic in this embodiment.
For example, through one sub-input in the first input, the user selects the theme identification of "thank you for enthusiasm during travel" from among the displayed plurality of secondary theme identifications, that is, selects the theme "thank you for enthusiasm during travel" as the target theme.
Further, more levels of themes may be included, such as third-level themes, fourth-level themes, and so on.
Alternatively, the user may select an Nth-level theme as the target theme. Correspondingly, the user needs to make N sub-inputs, each for selecting a topic identification at one of the levels.
In this embodiment, a plurality of topics may be divided, each topic may be further divided into a plurality of sub-topics, and so on, so that the topic content is continuously refined and the user may freely select any topic to generate the target file. The finer the themes are divided, the more real the generated file content and the finer its emotion, and the better it can meet the user's requirements, effectively helping the user improve communication efficiency.
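The multi-level theme directory described above can be sketched as a nested lookup. This is a minimal illustration only; the theme names, the `select_target_theme` helper, and the treatment of an unknown identifier as a custom (user-defined) theme are assumptions for demonstration, not part of the patented method.

```python
# Minimal sketch of the multi-level theme directory; names are illustrative.
THEME_DIRECTORY = {
    "thank you": {
        "thank you for enthusiasm during travel": {},
        "thank you for work help": {},
    },
    "praise": {},
    "blessing": {},
}

def select_target_theme(directory, sub_inputs):
    """Walk one directory level per sub-input; the last selection is the target theme.

    An identifier not present at its level is treated as a custom theme.
    """
    node = directory
    theme = None
    for identifier in sub_inputs:
        theme = identifier
        node = node.get(identifier, {})
    return theme

# Two sub-inputs: a primary theme, then a secondary (target) theme.
target = select_target_theme(
    THEME_DIRECTORY, ["thank you", "thank you for enthusiasm during travel"]
)
```

With N sub-inputs the same walk selects an Nth-level theme, matching the N-sub-input flow described above.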
In the file generation method of another embodiment of the present application, multiple files may be generated with the same theme and the same material resource, so that the user may select a preferred one among the multiple files, thereby being provided with more choices.
In the flow of the file generating method according to another embodiment of the present application, after step S5, the method further includes:
Step C1: a second input is received of the first content of the target file.
The second input includes a touch input made by the user on the screen, including but not limited to a click, a slide, or a drag; the second input may also be a second operation, where the second operation includes a contactless operation by the user, including but not limited to a gesture operation or a face operation, as well as an operation by the user on a physical key of the device, including but not limited to a press. Moreover, the second input includes one or more inputs, where the plurality of inputs may be continuous or intermittent.
The second input is used to manually modify the content in the automatically generated object file.
For example, after the target file is generated, the user may click the "edit" control so that the target file enters an editing state, and the user may then perform inputs such as deleting, writing, cutting, and copying on the first content in the target file.
Wherein the first content includes at least one of text, a picture, a video, a voice, punctuation, a space, a blank line, and the like.
Step C2: in response to the second input, the first content is replaced with second content associated with the second input.
In this step, the second content is content written by the user through the second input.
Wherein the second content includes at least one of text, a picture, a video, a voice, punctuation, a space, a blank line, and the like.
In this embodiment, based on the automatically generated target file, the user may also perform manual modification to reach the user's requirement, so as to more truly express the emotion of the user. Therefore, on the basis of intelligently generating the target file, the user can achieve the expected effect only by slightly modifying the target file, so that the communication effect of the user can be improved, and the personalized requirements of the user can be met.
In the flow of the file generating method according to another embodiment of the present application, before step S5, the method further includes:
step D1: a third input is received.
Step D2: in response to the third input, a target capacity associated with the target file is set.
Wherein the target capacity comprises at least any one of: the capacity of text content in the target file, the capacity of audio content, the capacity of video content, the capacity of picture content, and the capacity of the target file as a whole.
The third input includes a touch input made by the user on the screen, including but not limited to a click, a slide, or a drag; the third input may also be a third operation, where the third operation includes a contactless operation by the user, including but not limited to a gesture operation or a face operation, as well as an operation by the user on a physical key of the device, including but not limited to a press. Moreover, the third input includes one or more inputs, where the plurality of inputs may be continuous or intermittent.
And the third input is used for limiting the relevant capacity of the target file to be generated.
For example, the user can set the text content, audio content, video content, picture content, and the like to be included in the target file; the overall capacity of the target file is then automatically calculated based on the capacity of each content item set by the user and presented for the user's reference, and the user can adjust the capacity of each content item as needed to meet the requirement on file capacity.
As another example, the user may also set the capacity of a certain item or items of content individually.
For another example, the user can also set the capacity of the target file alone, so that in the process of generating the file, the capacity of each content item is reasonably allocated according to the set file capacity, to meet the user's requirement on the overall capacity of the file.
Optionally, the capacity of the text content is set by the user as a number of words of text; the capacity of the audio content is set by the user as an audio playing duration; the capacity of the video content is set by the user as a video playing duration; the capacity of the picture content is set by the user as a number of pictures; and the capacity of the target file is set by the user in specific capacity units.
In this embodiment, the user may not only customize the content type included in the file, but also customize the capacity size of any content and the overall capacity size of the file, so as to generate a file with a reasonable size according to the requirement of the user, so that the user may flexibly use the file generated by the present application. For example, a message of a few hundred words may be sent to the other party; as another example, a long video may be sent to the other party.
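The automatic calculation of the overall capacity from the per-content settings can be sketched as a weighted sum. The per-unit byte rates below are purely assumed figures for illustration; the patent specifies no conversion rates, and `estimate_total_capacity` is a hypothetical helper.

```python
# Assumed bytes-per-unit rates for illustration only (not from the patent).
RATES = {
    "text_words": 2,           # ~2 bytes per word of text
    "audio_seconds": 16_000,   # ~16 kB per second of audio
    "video_seconds": 250_000,  # ~250 kB per second of video
    "pictures": 300_000,       # ~300 kB per picture
}

def estimate_total_capacity(settings):
    """Sum the per-content capacities set by the user into an overall file size in bytes."""
    return sum(RATES[kind] * amount for kind, amount in settings.items())

# A 1500-word text, a 30-second video, and 4 pictures.
settings = {"text_words": 1500, "audio_seconds": 0, "video_seconds": 30, "pictures": 4}
total_bytes = estimate_total_capacity(settings)
```

The reverse direction described in the embodiment (a user-set overall capacity allocated across content items) would invert this sum under some allocation policy.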
In the flow of the file generating method according to another embodiment of the present application, after step S5, the method further includes:
Step E1: a fourth input is received for the target file and the target mode.
The fourth input includes a touch input made by the user on the screen, including but not limited to a click, a slide, or a drag; the fourth input may also be a fourth operation, where the fourth operation includes a contactless operation by the user, including but not limited to a gesture operation or a face operation, as well as an operation by the user on a physical key of the device, including but not limited to a press. Moreover, the fourth input includes one or more inputs, where the plurality of inputs may be continuous or intermittent.
The fourth input is for sharing the target file in the application.
In one aspect, the user may share the target file to a public social area, for example in a broadcast form, or share the target file to a friend or a group, for example in a private chat or a group chat.
On the other hand, the user can share in real-name form or in anonymous form.
Wherein, the user can select the corresponding form through the fourth input as the target mode.
Step E2: in response to the fourth input, the target file is displayed at the target interface.
Alternatively, the target interface may be an interface of the current application, i.e. an interface of the first application; the target interface may also be an interface of other applications, such as a chat application.
Alternatively, the target interface may be a chat interface, and may also be a social public platform interface.
In this embodiment, the user can share the target file with more people, so that the user's emotion can be better expressed and the user can be helped to quickly establish emotional connections with others, thereby achieving effective communication.
In summary, the application aims to provide an application program for improving people's taste, so as to realize an automatic file generation method that makes emotional expression among people more convenient and more accurate and promotes relationships among people. Ideas can be conveyed, according to what actually happened, in the form of documents such as articles that are cordial and vivid, promoting emotional communication among people and thereby deepening their feelings for one another. Therefore, the application does not require much effort from users to edit and arrange, is not limited by differences in expression ability among people, and makes the content shared by users more accurate and more graceful.
It should be noted that, in the file generating method provided in the embodiment of the present application, the execution body may be a file generating device, or a control module in the file generating device for executing the file generating method. In the embodiment of the present application, a method for executing file generation by a file generation device is taken as an example, and the file generation device provided by the embodiment of the present application is described.
Fig. 2 shows a block diagram of a file generating apparatus according to another embodiment of the present application, the apparatus comprising:
A first input receiving module 10 for receiving a first input;
A first input response module 20 for determining a target social object and a target topic in response to a first input;
A first obtaining module 30, configured to obtain target biometric information of the entered target social object;
A matching module 40, configured to obtain, from the environmental content collected in the preset period, target content matched with the target biometric information according to the target biometric information; the environment content comprises video content and picture content in the environment collected by a camera of the electronic equipment and audio content in the environment collected by a pickup of the electronic equipment;
The generating module 50 is configured to generate, according to the target content, a target file associated with the target social object with the target subject as a center content; the target file includes at least one of text content, audio content, video content, and picture content.
Thus, in the embodiment of the application, the user can determine the target social object and the target theme through the first input, so that the electronic device acquires the target biometric information of the entered target social object and then matches that information against the environmental content collected by the electronic device in the preset period. The environmental content collected by the electronic device comprises video content and picture content in the environment collected by a camera of the electronic device and audio content in the environment collected by a pickup of the electronic device. Thus, based on the sound features and facial features in the target biometric information, the electronic device may match, as the target content, the speaking content of the target social object, images of things related to the target social object, images in which the target social object appears, and the like, from among the collected environmental content. Further, a target file is generated with the target content as material, the target theme as the central idea, and the target social object as the core character, where the file content comprises at least one of text, voice, moving images, photos, and the like. Therefore, the embodiment of the application can intelligently edit the target file according to the user's behavior and activities, the emotion the user wants to express, and the object of that emotional expression, so as to help the user better express emotion and improve the user's efficiency in communication scenarios.
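The matching performed by the matching module 40 can be sketched as a filter over environment items tagged with detected face and voice identifiers. The `EnvironmentItem` structure and exact-identifier matching are illustrative assumptions; a real implementation would compare similarity scores from face and speaker recognition models against the entered biometric information.

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentItem:
    kind: str                                    # "video", "picture", or "audio"
    face_ids: set = field(default_factory=set)   # facial features detected in the item
    voice_ids: set = field(default_factory=set)  # voice features detected in the item

def match_target_content(items, target_face_id, target_voice_id):
    """Keep the environment items in which the target social object appears,
    judged by its facial features (pictures/videos) or voice features (audio/videos)."""
    return [
        item for item in items
        if target_face_id in item.face_ids or target_voice_id in item.voice_ids
    ]
```

For example, given collected pictures, audio, and video, only the items whose detected faces or voices match the target social object's biometric identifiers would be retained as target content.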
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring audio content and video content in the environment within a preset period and acquiring picture content in the environment at a preset moment within the preset period.
Optionally, the apparatus further comprises at least any one of the following:
The second acquisition module is used for acquiring target social content matched with the target biological characteristic information in the social application program according to the target biological characteristic information;
a third acquisition module for acquiring a target image matched with the target biological feature information in the image management program;
The generating module 50 includes:
and the target file generation unit is used for generating a target file associated with the target social object by combining at least one of the target social content and the target image according to the target content and taking the target subject as the center content.
Optionally, the first input comprises at least:
A sub-input of an identification of the first topic; and
Sub-input of an identification of a target topic in a directory to which the identification of the first topic corresponds;
The catalog corresponding to the identification of the first theme at least comprises the identification of the target theme.
Optionally, the apparatus further comprises:
A second input receiving module for receiving a second input of the first content of the target file;
And a second input response module for replacing the first content with second content associated with the second input in response to the second input.
Optionally, the apparatus further comprises:
a third input receiving module for receiving a third input;
A third input response module for setting a target capacity associated with the target file in response to the third input;
wherein the target capacity comprises at least any one of: the capacity of text content in the target file, the capacity of audio content, the capacity of video content, the capacity of picture content, and the capacity of the target file as a whole.
The file generating device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The file generating device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the application.
The file generating device provided by the embodiment of the application can realize each process realized by the embodiment of the method, and in order to avoid repetition, the description is omitted.
Optionally, as shown in fig. 3, the embodiment of the present application further provides an electronic device 100, including a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and capable of running on the processor 101, where the program or the instruction implements each process of the above-mentioned file generating method embodiment when executed by the processor 101, and the process can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein the user input unit 1007 is configured to receive a first input; a processor 1010 for determining a target social object and a target topic in response to the first input; acquiring the input target biological characteristic information of the target social object; acquiring target content matched with the target biological characteristic information from environment content acquired in a preset period according to the target biological characteristic information; the environment content comprises video content and picture content in the environment collected by a camera of the electronic equipment and audio content in the environment collected by a pickup of the electronic equipment; generating a target file associated with the target social object by taking the target theme as a center content according to the target content; the target file includes at least one of text content, audio content, video content, and picture content.
Thus, in the embodiment of the application, the user can determine the target social object and the target theme through the first input, so that the electronic device acquires the target biometric information of the entered target social object and then matches that information against the environmental content collected by the electronic device in the preset period. The environmental content collected by the electronic device comprises video content and picture content in the environment collected by a camera of the electronic device and audio content in the environment collected by a pickup of the electronic device. Thus, based on the sound features and facial features in the target biometric information, the electronic device may match, as the target content, the speaking content of the target social object, images of things related to the target social object, images in which the target social object appears, and the like, from among the collected environmental content. Further, a target file is generated with the target content as material, the target theme as the central idea, and the target social object as the core character, where the file content comprises at least one of text, voice, moving images, photos, and the like. Therefore, the embodiment of the application can intelligently edit the target file according to the user's behavior and activities, the emotion the user wants to express, and the object of that emotional expression, so as to help the user better express emotion and improve the user's efficiency in communication scenarios.
Optionally, the processor 1010 is further configured to collect audio content and video content in the environment during the preset period, and collect picture content in the environment at a preset time during the preset period.
Optionally, the processor 1010 is further configured to obtain, in the social application program, target social content matching the target biometric information according to the target biometric information; acquiring a target image matched with the target biological characteristic information in an image management program; and generating a target file associated with the target social object by combining at least one of the target social content and the target image and taking the target subject as center content according to the target content.
Optionally, the first input includes at least: a sub-input of an identification of the first topic; and, sub-input of an identification of the target topic in a directory to which the identification of the first topic corresponds; and the catalog corresponding to the identification of the first theme at least comprises the identification of the target theme.
Optionally, the user input unit 1007 is further configured to receive a second input of the first content of the target file; the processor 1010 is further configured to replace the first content with second content associated with the second input in response to the second input.
Optionally, the user input unit 1007 is further configured to receive a third input; the processor 1010 is further configured to set a target capacity associated with the target file in response to the third input; wherein the target capacity comprises at least any one of: the capacity of text content in the target file, the capacity of audio content, the capacity of video content, the capacity of picture content, and the capacity of the target file as a whole.
In summary, the application aims to provide an application program for improving people's taste, so as to realize an automatic file generation method that makes emotional expression among people more convenient and more accurate and promotes relationships among people. Ideas can be conveyed, according to what actually happened, in the form of documents such as articles that are cordial and vivid, promoting emotional communication among people and thereby deepening their feelings for one another. Therefore, the application does not require much effort from users to edit and arrange, is not limited by differences in expression ability among people, and makes the content shared by users more accurate and more graceful.
It should be appreciated that in embodiments of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 1009 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 1010 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned file generation method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-only memory (ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the file generation method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (10)

1. A method of generating a file, the method comprising:
Receiving a first input;
Responsive to the first input, determining a target social object and a target topic;
acquiring the input target biological characteristic information of the target social object;
acquiring target content matched with the target biological characteristic information from environment content acquired in a preset period according to the target biological characteristic information; the environment content comprises video content and picture content in the environment collected by a camera of the electronic equipment and audio content in the environment collected by a pickup of the electronic equipment;
Generating a target file associated with the target social object by taking the target theme as a center content according to the target content; the target file includes at least one of text content, audio content, video content, and picture content.
2. The method according to claim 1, wherein before the acquiring, from the environmental content acquired within a preset period of time according to the target biometric information, the target content matching the target biometric information, the method further comprises:
And collecting the audio content and the video content in the environment within the preset time period, and collecting the picture content in the environment at the preset time within the preset time period.
3. The method of claim 1, wherein before the generating the target file associated with the target social object based on the target content and centered on the target subject matter, the method further comprises at least any one of:
acquiring target social content matched with the target biological characteristic information in a social application program according to the target biological characteristic information;
acquiring a target image matched with the target biological characteristic information in an image management program;
The generating, according to the target content, a target file associated with the target social object with the target subject as a center content includes:
And generating a target file associated with the target social object by combining at least one of the target social content and the target image and taking the target subject as center content according to the target content.
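Claim 3 folds two optional extra sources (social-app content and image-library matches) into the generation step. A minimal sketch of that merge, with hypothetical names:

```python
def combine_sources(target_content, target_social_content=None,
                    target_images=None):
    # Start from the environment matches, then append the optional
    # social-application and image-management-program matches before
    # the file is generated.
    combined = list(target_content)
    if target_social_content:
        combined += [("social", s) for s in target_social_content]
    if target_images:
        combined += [("image", i) for i in target_images]
    return combined

merged = combine_sources([("video", "clip1")],
                         target_social_content=["chat snippet"],
                         target_images=["album photo"])
```

Either optional source may be omitted, matching the "at least one of" wording of the claim.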
4. The method according to claim 1, wherein the first input comprises at least:
a sub-input on an identifier of a first topic; and
a sub-input on an identifier of the target topic in a directory corresponding to the identifier of the first topic;
wherein the directory corresponding to the identifier of the first topic comprises at least the identifier of the target topic.
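The two-step first input of claim 4 (pick a first topic, then pick the target topic from that topic's directory) can be sketched as below; the catalog contents and function name are illustrative assumptions, not from the claims.

```python
# Hypothetical catalog: each first-topic identifier maps to a directory
# of target-topic identifiers.
topic_catalog = {"travel": ["beach trip", "road trip"],
                 "family": ["birthday", "reunion"]}

def resolve_topic(first_topic_id: str, target_topic_id: str) -> str:
    # First sub-input selects the first topic; the second sub-input
    # selects the target topic inside that topic's directory.
    directory = topic_catalog[first_topic_id]
    if target_topic_id not in directory:
        raise ValueError("target topic not found in directory")
    return target_topic_id
```

Selecting "birthday" under "family" succeeds; selecting it under "travel" would raise, since the directory must contain the target topic's identifier.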
5. The method according to claim 1, further comprising, after the generating of the target file associated with the target social object with the target topic as the central content:
receiving a second input on first content of the target file; and
replacing, in response to the second input, the first content with second content associated with the second input.
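The post-generation edit of claim 5 is a content substitution inside the generated file. A minimal sketch (hypothetical structure for the target file):

```python
def edit_target_file(target_file: dict, first_content, second_content) -> dict:
    # Replace every occurrence of the selected first content with the
    # second content carried by the user's second input.
    contents = [second_content if c == first_content else c
                for c in target_file["contents"]]
    return {**target_file, "contents": contents}

edited = edit_target_file({"topic": "birthday",
                           "contents": ["draft text", "photo1"]},
                          "draft text", "final text")
```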
6. The method according to claim 1, further comprising, before the generating of the target file associated with the target social object:
receiving a third input; and
setting, in response to the third input, a target capacity associated with the target file;
wherein the target capacity comprises at least any one of: a capacity of the text content, a capacity of the audio content, a capacity of the video content, a capacity of the picture content, and a capacity of the target file.
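One illustrative reading of the "target capacity" in claim 6 is a per-content-type limit applied before generation. The sketch below assumes the capacity is an item count; the claim itself does not fix the unit.

```python
def apply_target_capacity(contents, capacity: dict):
    # Keep at most capacity[kind] items of each content type; types
    # without a configured capacity are kept in full.
    counts = {}
    kept = []
    for kind, data in contents:
        counts[kind] = counts.get(kind, 0) + 1
        limit = capacity.get(kind)
        if limit is None or counts[kind] <= limit:
            kept.append((kind, data))
    return kept

kept = apply_target_capacity([("picture", "a"), ("picture", "b"),
                              ("audio", "x")], {"picture": 1})
```

With a picture capacity of 1, the second picture is dropped while the uncapped audio content survives.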
7. A file generating apparatus, comprising:
a first input receiving module, configured to receive a first input;
a first input response module, configured to determine a target social object and a target topic in response to the first input;
a first acquisition module, configured to acquire input target biometric information of the target social object;
a matching module, configured to acquire, according to the target biometric information, target content matching the target biometric information from environmental content collected within a preset time period, wherein the environmental content comprises video content and picture content in the environment collected by a camera of an electronic device and audio content in the environment collected by a microphone of the electronic device; and
a generation module, configured to generate, according to the target content, a target file associated with the target social object with the target topic as the central content, wherein the target file comprises at least one of text content, audio content, video content, and picture content.
8. The apparatus according to claim 7, further comprising:
a collection module, configured to collect the audio content and the video content in the environment within the preset time period, and to collect the picture content in the environment at preset times within the preset time period.
9. The apparatus according to claim 7, further comprising at least any one of:
a second acquisition module, configured to acquire, according to the target biometric information, target social content matching the target biometric information in a social application;
a third acquisition module, configured to acquire a target image matching the target biometric information in an image management program;
wherein the generation module comprises:
a target file generation unit, configured to generate, according to the target content and in combination with at least one of the target social content and the target image, the target file associated with the target social object with the target topic as the central content.
10. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the file generation method according to any one of claims 1 to 6.
CN202110594036.7A 2021-05-28 2021-05-28 File generation method and device and electronic equipment Active CN113313789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110594036.7A CN113313789B (en) 2021-05-28 2021-05-28 File generation method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113313789A CN113313789A (en) 2021-08-27
CN113313789B true CN113313789B (en) 2024-04-26

Family

ID=77376384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110594036.7A Active CN113313789B (en) 2021-05-28 2021-05-28 File generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113313789B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108293054A * 2015-12-15 2018-07-17 Visa International Service Association System and method for biometric authentication using a social network
WO2019132062A1 * 2017-12-27 2019-07-04 Pubple Co., Ltd. Method and device for generating electronic book file
CN111597468A * 2020-05-08 2020-08-28 Tencent Technology (Shenzhen) Co., Ltd. Social content generation method, apparatus, device, and readable storage medium
CN112836136A * 2019-11-22 2021-05-25 Tencent Technology (Shenzhen) Co., Ltd. Chat interface display method, apparatus, and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489511B2 (en) * 2018-03-01 2019-11-26 Ink Content, Inc. Content editing using AI-based content modeling


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant