CN112597320A - Social information generation method, device and computer readable medium - Google Patents

Social information generation method, device and computer readable medium

Info

Publication number
CN112597320A
CN112597320A (application CN202011428099.7A)
Authority
CN
China
Prior art keywords
user
data
target
multimedia data
social
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011428099.7A
Other languages
Chinese (zh)
Inventor
胡晨鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202011428099.7A
Publication of CN112597320A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/41 Indexing; Data structures therefor; Storage structures
    • G06F16/43 Querying
    • G06F16/435 Filtering based on additional data, e.g. user or group profiles
    • G06F16/45 Clustering; Classification
    • G06F16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483 Retrieval using metadata automatically derived from the content
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking


Abstract

The invention discloses a social information generation method, device, and computer-readable medium. The social information generation method comprises the following steps: obtaining posting information submitted by a target first user in a social application of a first user device, and extracting multimedia data from the posting information; identifying the multimedia data with a neural network model, extracting corresponding style feature vectors, and grouping the multimedia data based on those vectors; obtaining style feature data corresponding to score data from a score database, and matching the style feature data against the style feature vectors of each group of multimedia data; and synthesizing each group of multimedia data with the score data whose style feature data matches it, forming multiple corresponding sets of self-introduction content. The invention can match suitable related data to different data according to the different characteristics of those data, generating different synthesized data.

Description

Social information generation method, device and computer readable medium
Technical Field
The invention belongs to the technical field of electronic communication and relates to an information generation method, and in particular to a social information generation method and system, a social-network information presentation method and system, and a method, system, device, and computer-readable medium for generating self-introduction content in a social network.
Background
Currently, with the progress of science and technology, many social applications, such as WeChat and QQ, have emerged to facilitate interaction between users. In some of these applications, such as the WeChat friend-circle (Moments) function, users often want to add a personal cover.

The personal cover represents the user's introduction of himself or herself to other users. At present, however, most friend-circle functions in social applications only support a static picture as the cover. In addition, a user can only present one fixed set of cover content to all social friends.
Disclosure of Invention
The invention provides a social information generation method and system, a social information presentation method and system, and a self-introduction content generation method and system in a social network.
In order to solve the technical problem, according to one aspect of the present invention, the following technical solutions are adopted:
a social information generation method is applied to a server and comprises the following steps:
the method comprises the steps of obtaining release information submitted by a target first user in social application of first user equipment, and extracting multimedia data from the release information;
identifying the multimedia data by using a neural network model, extracting corresponding style characteristic vectors, and grouping the multimedia data based on the style characteristic vectors;
obtaining style characteristic data corresponding to the score data from a score database, and matching the style characteristic data with style characteristic vectors of each group of grouped multimedia data;
and synthesizing each group of multimedia data and the music data corresponding to the style characteristic data matched with the multimedia data to form a plurality of groups of corresponding self-introduction contents.
As an embodiment of the present invention, after the posting information submitted by the target first user in the social network is acquired and the multimedia data is extracted from the posting information, the method further includes:
and screening out multimedia data meeting the user relevance from the multimedia data.
As an embodiment of the present invention, the multimedia data includes picture data and/or video data.
As an embodiment of the present invention, the multimedia data satisfying the user relevance includes at least any one of:
picture data containing portrait information of the target first user;
video data with at least one frame of picture containing portrait information of the target first user;
picture data and/or video data captured by a camera of a first user device of the target first user.
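The relevance criteria above can be sketched as a simple filter. This is a minimal illustration, not the patent's implementation: the field names (`faces`, `frame_faces`, `captured_by_own_camera`) are assumptions, since the patent does not specify a data model.

```python
def satisfies_user_relevance(item, target_user_id):
    """Return True if a multimedia item meets any of the relevance criteria
    listed above. Field names are illustrative assumptions."""
    if item.get("type") == "picture" and target_user_id in item.get("faces", []):
        return True  # picture contains the target first user's portrait
    if item.get("type") == "video" and any(
        target_user_id in faces for faces in item.get("frame_faces", [])
    ):
        return True  # at least one frame contains the target user's portrait
    if item.get("captured_by_own_camera"):
        return True  # shot with the target first user's own device camera
    return False

posts = [
    {"type": "picture", "faces": ["u1"]},                              # relevant
    {"type": "video", "frame_faces": [[], ["u2"]]},                    # irrelevant to u1
    {"type": "picture", "faces": [], "captured_by_own_camera": True},  # relevant
]
relevant = [p for p in posts if satisfies_user_relevance(p, "u1")]
```

In a real system the face lists would come from a face-recognition step over the posted pictures and video frames.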
As an embodiment of the present invention, the acquiring style characteristic data corresponding to the score data from the score database, and matching the style characteristic data with the style characteristic vectors of each group of grouped multimedia data includes: and determining style feature data matched with the style feature vectors of each group of grouped multimedia data according to the principle that the cosine distance is closest on the basis of the second style music library index of the score database.
As an embodiment of the present invention, the method further includes: selecting score data of different styles from an existing score database to form a first style score database index;
calculating the score data in the score database by using a convolutional neural network to obtain feature vectors of the score data; after this operation, the first style music library index evolves into a vector matrix;
and carrying out vector quantization on the obtained index matrix by using a product vector algorithm to obtain a second style music library index, and establishing mapping from style characteristic data to score data.
As an embodiment of the present invention, the method further includes:
in the process of synthesizing each group of multimedia data and the music data corresponding to the style characteristic data matched with the multimedia data, if the multimedia data comprises video data and the video data comprises audio information, the audio information in the video data is adjusted based on a preset rule or target first user selection.
As an embodiment of the present invention, the method further includes:
determining relationship information between a target first user and a second user, wherein the second user is a user in a social application;
selecting self-introduction contents matched with the second user from the plurality of groups of self-introduction contents based on the relationship information.
As an embodiment of the present invention, determining the relationship information between the target first user and the second user includes:
acquiring social interaction data of the target first user and the second user;
determining relationship information of the target first user and the second user based on the social interaction data.
According to another aspect of the invention, the following technical scheme is adopted: a social information generation method is applied to second user equipment and comprises the following steps:
obtaining an access request, submitted by a second user in a social application of second user equipment, for the social information of a target first user, wherein the access request includes a request to obtain the self-introduction content of the target first user;
sending the access request to a server corresponding to the social application;
and acquiring self-introduction contents of the target first user matched with the second user, which are returned by the server, wherein the self-introduction contents of the target first user matched with the second user are determined from a plurality of groups of self-introduction contents corresponding to the target first user.
As an embodiment of the present invention, the multiple sets of self-introductions corresponding to the target first user are obtained by the server by:
the method comprises the steps of obtaining release information submitted by a target first user in social application of first user equipment, and extracting multimedia data from the release information;
identifying the multimedia data by using a neural network model, extracting corresponding style characteristic vectors, and grouping the multimedia data based on the style characteristic vectors;
obtaining style characteristic data corresponding to the score data from a score database, and matching the style characteristic data with style characteristic vectors of each group of grouped multimedia data;
and synthesizing each group of multimedia data and the music data corresponding to the style characteristic data matched with the multimedia data to form a plurality of groups of corresponding self-introduction contents.
As an embodiment of the present invention, the self-introduction content of the target first user matched with the second user is:
a group that the server selects from the multiple groups of self-introduction contents based on the relationship information between the target first user and the second user.
In one embodiment of the invention, the self-introduction content of the target first user matched with the second user is presented in the social application of the second user device.
According to another aspect of the invention, the following technical scheme is adopted: a method of social information generation, the method comprising:
the method comprises the steps that a server obtains release information submitted by a target first user in social application of first user equipment, and extracts multimedia data from the release information;
the server identifies the multimedia data by using a neural network model, extracts a corresponding style characteristic vector, and groups the multimedia data based on the style characteristic vector;
the server acquires style characteristic data corresponding to the score data from a score database, and the style characteristic data are matched with the style characteristic vectors of each group of grouped multimedia data;
the server synthesizes each group of multimedia data with the score data corresponding to the style characteristic data matched with that multimedia data, to form a plurality of corresponding groups of self-introduction contents;
the second user equipment acquires an access request for the social information of the target first user, submitted by the corresponding second user in a social application, wherein the access request includes a request to obtain the self-introduction content of the target first user;
the second user equipment sends the access request to the server;
and the second user equipment acquires self-introduction contents of the target first user matched with the second user, which are returned by the server, wherein the self-introduction contents of the target first user matched with the second user are a group determined from the multiple groups of self-introduction contents corresponding to the target first user.
According to another aspect of the invention, the following technical scheme is adopted: an apparatus for a social information generating method, the apparatus comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the method described above.
According to another aspect of the invention, the following technical scheme is adopted: a computer readable medium having stored thereon computer program instructions executable by a processor to implement a method as described above.
The beneficial effects of the invention are as follows: the social information generation method, device, and computer-readable medium can match suitable related data to different data according to the different characteristics of those data, thereby generating different synthesized data.
In one use scenario of the invention, the social application analyzes post content posted by a user in the past, extracts picture or video content with high relevance to the user, analyzes the content, groups the content according to the style of the content, generates different introduction video paragraphs, and selects proper music according to the style of the introduction video. When the friends of the user access the friend circles of the user, the system selects the introduction videos most suitable for the friends to display.
Drawings
Fig. 1 is a flowchart of a social information generating method (applied to a server) according to an embodiment of the present invention.
Fig. 2 is a flowchart of a social information generating method (applied to a second user terminal) in an embodiment of the present invention.
Fig. 3 is a flowchart of a social information generating method according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
For a further understanding of the invention, reference will now be made to the preferred embodiments of the invention by way of example, and it is to be understood that the description is intended to further illustrate features and advantages of the invention, and not to limit the scope of the claims.
The description in this section is for several exemplary embodiments only, and the present invention is not limited only to the scope of the embodiments described. It is within the scope of the present disclosure and protection that the same or similar prior art means and some features of the embodiments may be interchanged.
The steps in the embodiments in the specification are only expressed for convenience of description, and the implementation manner of the present application is not limited by the order of implementation of the steps.
The invention discloses a social information generation method applied to a server. FIG. 1 is a flowchart of the social information generation method in an embodiment of the invention. Referring to FIG. 1, in an embodiment of the present invention, the social information generation method includes:
step S1, obtaining posting information submitted by the target first user in the social application of the first user device, and extracting multimedia data from the posting information.
In an embodiment of the present invention, in step S1, after the posting information submitted by the target first user in the social network is acquired and multimedia data is extracted from it, the method further includes: screening out multimedia data satisfying user relevance from the multimedia data. The conditions under which multimedia data satisfies user relevance may be set by the server or by the user. Through this screening, unnecessary data can be removed and data processing efficiency improved.
In one usage scenario of the present invention, the first user device includes, but is not limited to, a computer, a mobile phone, a tablet computer, and the like. The target first user may publish posting information through the input-output unit of the first user device. The server can extract multimedia data from the posting information published by the target first user to serve as material for subsequently generating the self-introduction. In an embodiment, the multimedia data comprises picture data and/or video data.
In an embodiment, the multimedia data satisfying user relevance includes at least any one of: picture data containing portrait information of the target first user; video data in which at least one frame contains portrait information of the target first user; and picture data and/or video data captured by a camera of the target first user's user equipment. In this way, multimedia data irrelevant to the target first user can be filtered out, improving the accuracy of data acquisition and keeping irrelevant data out of the subsequent classification of the multimedia data.
In an actual usage scenario of an embodiment of the present invention, the target first user uses the first user device to publish multimedia data in a social application (e.g., the WeChat friend-circle application): for example, picture data and video data, possibly accompanied by text descriptions. The picture data may be pictures related to the user's work scenes (here "the user" refers to the target first user; the same applies below), pictures related to the user's life scenes, or landscape pictures, social news pictures, and the like; similarly, the video data may be videos related to the user's work scenes or life scenes, or landscape videos, social news videos, and the like. Irrelevant multimedia data can be removed through filtering, improving system efficiency.
Some or all of the friends of the target first user (as second users) may view the corresponding multimedia data of the target first user. In addition, while controlling the presentation of the corresponding multimedia information on each second user device, the server can screen the multimedia data, filter out multimedia data irrelevant to the target first user (such as social news pictures and videos), acquire the multimedia data satisfying user relevance, and classify it according to the classification settings.
Step S2, the multimedia data is identified using a neural network model, corresponding style feature vectors are extracted, and the multimedia data are grouped based on the style feature vectors.
In some embodiments of the present invention, step S2 includes the following steps: performing feature extraction on the multimedia data in the content released by the user by using a convolutional neural network, obtaining style feature vectors [Fu1, Fu2, …, Fun]; and searching a second style music library index (such as the second style music library index Ip) with the style feature vector [Fu1, Fu2, …, Fun], obtaining the style feature data of the related multimedia data according to the closest-cosine-distance principle. The efficiency of feature extraction can be improved in this way.
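The closest-cosine-distance lookup described above can be illustrated with a minimal sketch. Plain Python stands in for a real vector index, and the style keywords and toy vectors are made up for illustration:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest_style(query_vec, index):
    """Return the style keyword whose feature vector is closest (by cosine
    distance) to the query. `index` is a list of (keyword, vector) pairs."""
    return min(index, key=lambda kv: cosine_distance(query_vec, kv[1]))[0]

# Toy stand-in for the second style music library index Ip
index = [
    ("cheerful", [0.9, 0.1, 0.0]),
    ("relaxing", [0.1, 0.9, 0.1]),
]
best = nearest_style([0.8, 0.2, 0.0], index)  # query: one group's style vector
```

At the scale of a real music library, an approximate-nearest-neighbor index (e.g. one built with product quantization, as described below in the patent) would replace the linear scan.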
After the style feature vectors of the multimedia data are obtained, the multimedia data are grouped according to those vectors: the style types of the multimedia data (including the corresponding scenes and the like) are obtained from the style feature vectors, and the data are grouped according to a set rule. This provides the multimedia material for subsequently generating personal introductions for different scenes.
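The grouping step could be realized in many ways; the patent does not fix an algorithm. One minimal sketch, assuming a greedy threshold scheme over cosine distance, is:

```python
import math

def cos_dist(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.sqrt(sum(x * x for x in a))
                        * math.sqrt(sum(y * y for y in b)))

def group_by_style(vectors, threshold=0.3):
    """Greedy grouping: an item joins the first existing group whose
    representative (first member) is within `threshold` cosine distance;
    otherwise it starts a new group. Returns lists of item indices."""
    groups = []
    for i, vec in enumerate(vectors):
        for group in groups:
            if cos_dist(vectors[group[0]], vec) < threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Two similar "work-style" vectors and one distinct "life-style" vector (toy data)
groups = group_by_style([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
```

A production system would more likely use a trained classifier or k-means over the CNN feature vectors, but the input/output shape is the same.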
In an actual usage scenario of an embodiment of the present invention, the style of each piece of multimedia data extracted from the information published by the user can be identified through step S2. For multimedia data containing picture data or video data, the style can be obtained by means of image recognition. If the server recognizes that the main subject of a picture is a mountain, a tree, or the sea (possibly also containing people), the picture is regarded as a landscape picture; if the server identifies that a picture corresponds to a business scene, it judges whether the picture corresponds to the target first user's work scene by matching against the target first user's profession information. Similarly, the style of video data may be obtained by recognizing the main subject of selected frames of the video. For multimedia data containing audio data, the style can be obtained by identifying the scene type of the audio. Identifying the multimedia data in these various ways improves recognition accuracy.
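The subject-to-style rules in this paragraph amount to a small decision procedure. In this sketch the subject labels and the profession check are illustrative assumptions, and the image recognition itself (e.g. a CNN classifier) is assumed to have already produced the subject set:

```python
LANDSCAPE_SUBJECTS = {"mountain", "tree", "sea"}

def classify_picture_style(detected_subjects, matches_user_profession=False):
    """Map recognized picture subjects to a style group, mirroring the rules
    described above. All names here are illustrative."""
    if detected_subjects & LANDSCAPE_SUBJECTS:
        return "landscape"  # main subject is mountain/tree/sea
    if "business_scene" in detected_subjects:
        # A business scene counts as the user's work scene only if it is
        # consistent with the target first user's profession information.
        return "work" if matches_user_profession else "other"
    return "life"
```

For video data the same function could be applied to the subjects detected in selected frames.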
Step S3, style feature data corresponding to the score data is obtained from the score database, and is matched with the style feature vectors of each group of grouped multimedia data.
In an embodiment of the present invention, the obtaining style characteristic data corresponding to the score data from the score database includes: and carrying out style feature extraction on the score data stored in the score database, and determining style feature data corresponding to the score data.
In an actual usage scenario of an embodiment of the present invention, the server is provided with a score database in which score data is stored. Style feature data is extracted from the score data; for example, the scenes of the score data can include cheerful, relaxing, youthful, and the like, and the scene of a score can serve as its style feature data. In one embodiment, the style feature data of the score data may be arranged in the score database in advance. Since the style feature data of each multimedia data group has already been acquired in step S2, the corresponding score data can be obtained from the style feature data of each group. This configuration reduces matching complexity and improves matching efficiency.
In an actual usage scenario, pictures and videos related to the user's work scenes form one multimedia group, and pictures and videos related to the user's life scenes form another. The server may preset that the style feature of the score data corresponding to the work-scene group is relaxing, and that of the life-scene group is cheerful. In this way, the work-scene multimedia group can be matched in a simple manner to score data of the corresponding style (score data set to the relaxing style), and the life-scene group to score data of the cheerful style.
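The preset group-to-score-style mapping described here reduces to a table lookup. The style names and file names below are illustrative, not from the patent:

```python
# Preset mapping from a multimedia group's scene label to a score style
GROUP_TO_SCORE_STYLE = {"work": "relaxing", "life": "cheerful"}

def pick_score(group_label, score_db):
    """Return one candidate score for a multimedia group. `score_db` maps a
    style keyword to a list of score tracks; names here are made up."""
    style = GROUP_TO_SCORE_STYLE.get(group_label)
    candidates = score_db.get(style, [])
    return candidates[0] if candidates else None

score_db = {"relaxing": ["calm_piano.mp3"], "cheerful": ["upbeat_pop.mp3"]}
work_score = pick_score("work", score_db)
```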
In some embodiments of the invention, the method may further comprise the following steps: selecting score data of different styles from an existing score database to form a first style music library index I: {K1: [M1, M2, M3, M4, …, Mn]}, where K is a style description keyword and Mn is the score data (for example, a piece of music) corresponding to that style description; calculating each piece of score data Mn in the score database with a convolutional neural network to obtain its feature vector [Fm1, Fm2, …, Fmn], so that after this operation the first style music library index I evolves into a vector matrix [[Fm1, Fm2, …, Fmn], [Fm1, Fm2, …, Fmn], …]; and performing vector quantization on the obtained index matrix with a product quantization algorithm to obtain a second style music library index Ip, establishing a mapping from style feature data to score data. Because the second style music library index is established, the style feature data (such as the style description keyword) corresponding to each piece of multimedia data (and its corresponding score data) can be obtained through this index, which facilitates similarity search over large-scale vectors.
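The product quantization step can be illustrated with a toy encoder. Real systems train the per-subspace codebooks with k-means over the score feature vectors [Fm1, …, Fmn]; the two-centroid codebooks below are hand-picked purely for illustration:

```python
def nearest_centroid(centroids, sub):
    """Index of the centroid with the smallest squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda i: sum((c - s) ** 2 for c, s in zip(centroids[i], sub)))

def pq_encode(vec, codebooks):
    """Product quantization: split `vec` into len(codebooks) sub-vectors and
    replace each sub-vector with the index of its nearest centroid in that
    sub-space, yielding a compact code."""
    m = len(codebooks)
    d = len(vec) // m
    return tuple(nearest_centroid(codebooks[j], vec[j * d:(j + 1) * d])
                 for j in range(m))

# Toy codebooks: 2 sub-spaces with 2 centroids each (trained with k-means
# in a real system).
codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],  # centroids for the first half of the vector
    [[0.0, 1.0], [1.0, 0.0]],  # centroids for the second half
]
code = pq_encode([0.9, 1.1, 0.1, 0.9], codebooks)  # compact code for one score
```

Searching the quantized index Ip then compares codes (or distance tables) instead of full vectors, which is what makes large-scale similarity search cheap.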
Step S4, synthesizing each group of multimedia data with the score data corresponding to its matched style feature data, to form multiple corresponding sets of self-introduction content. In one embodiment, the sets of self-introduction content may be presented to different second users, where a second user is another user in the same social application as the first user.
In an actual use scenario of an embodiment of the present invention, a multimedia data group related to a work scenario is synthesized with music data set in a relaxing style to form a self-introduction content of the work scenario, which is provided for friends (such as colleagues, clients, and other categories or grouped friends) of a user to access; the multimedia data group related to the life scene is synthesized with the set music data of the joyful style to form the self introduction content of the life scene for the friends of the set type (such as relatives, friends and other types or grouped friends) to access.
In an embodiment of the present invention, the method further includes: in the process of synthesizing each group of multimedia data with the score data corresponding to its matched style feature data, if the multimedia data includes video data and the video data contains audio information, adjusting the audio information of the video data based on a preset rule or the target first user's selection. The audio information may be adjusted by reducing its volume or even muting it, so that the matched score plays as the main audio. In a practical usage scenario of the present invention, if the synthesized multimedia data includes video data with audio, the sound in the video may be reduced (since the synthesized data already includes score data), or muted entirely (keeping only the score); alternatively, the original video sound level may be maintained, in which case the volume of the score may be reduced appropriately, or the score may even be muted. Through this processing, the audio in the original video data can be retained (and controlled to be louder or quieter than the score audio) or removed, satisfying the user's right of independent choice.
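The audio adjustment options in this paragraph reduce to choosing a pair of mixing gains, one for the video's own audio track and one for the matched score. The mode names and gain values below are illustrative assumptions, not from the patent:

```python
def mix_gains(mode):
    """Return (video_audio_gain, score_gain) for the adjustment options
    described above; a real implementation would apply these gains when
    mixing the two audio tracks."""
    modes = {
        "duck_video": (0.3, 1.0),  # lower the video's own sound; score dominates
        "mute_video": (0.0, 1.0),  # keep only the matched score
        "keep_video": (1.0, 0.3),  # keep original audio; soften the score
        "score_off":  (1.0, 0.0),  # original video audio only
    }
    return modes[mode]

video_gain, score_gain = mix_gains("duck_video")
```

The chosen mode would come from the preset rule or from the target first user's selection.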
In an embodiment of the present invention, the method further includes: determining relationship information between a target first user and a second user; and selecting, based on the relationship information, self-introduction content matched with the second user from the multiple groups of self-introduction contents. In a usage scenario of the invention, if the second user is a classmate of the target first user, then when the second user views the target first user's self-introduction information, the server delivers to the second user the self-introduction information corresponding to the classmate relationship, which the second user can then view. If the second user is a client of the target first user, the server likewise delivers the self-introduction information corresponding to the client relationship. Through such distribution, different impressions of the target first user can be presented to the corresponding second users; this protects the privacy of the target first user on the one hand, and on the other hand lets each second user obtain the information about the target first user that he or she most wants to see.
In an embodiment of the present invention, determining the relationship information between the target first user and the second user includes: acquiring social interaction data of the target first user and the second user; and determining the relationship information of the target first user and the second user based on the social interaction data. In an embodiment, the social interaction data characterizes the users' social activities and/or social attributes in the social application, including but not limited to activity in group sessions, single-chat sessions, and the friend circle of the social application; for example, the social interaction data of the target first user and the second user may include their chat records with each other, the groups they both participate in, messages posted in the friend circle, and friend interactions in the friend circle.
In an actual usage scenario of an embodiment of the present invention, the relationship between a target first user and a friend (a second user) may be analyzed from their interaction information, and the grouping category that the target first user has assigned to that friend may also be referenced. In this way, the accuracy and efficiency of determining the relationship between users can be improved.
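The relationship determination described above can be sketched as a small heuristic: the friend-list grouping label, when present, is taken as the strongest signal, and interaction keywords serve as a fallback. All names, keyword sets, and category labels below are illustrative assumptions, not part of the patent's disclosure:

```python
def infer_relationship(interaction_data, friend_group=None):
    """Guess the relationship category between two users.

    interaction_data: a dict of simple interaction signals, e.g.
      {"chat_keywords": ["homework", "exam"]}
    friend_group: the grouping label the first user assigned to the
      second user in the friend list, if any (e.g. "clients").
    """
    # The explicit grouping category, when available, overrides heuristics.
    if friend_group:
        return friend_group
    keywords = set(interaction_data.get("chat_keywords", []))
    if keywords & {"homework", "exam", "classmate"}:
        return "classmate"
    if keywords & {"invoice", "contract", "order"}:
        return "client"
    if keywords & {"meeting", "report", "deadline"}:
        return "colleague"
    return "friend"  # default category when nothing more specific matches
```

A production system would presumably replace the keyword sets with a trained classifier over the full social interaction data.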
The invention also discloses a social information generating method applied to a second user device. Fig. 2 is a flowchart of the social information generating method in an embodiment of the invention; referring to fig. 2, the generating method includes:
Step A1, obtaining an access request, submitted by a second user in a social application of a second user device, for the social information of the target first user, the access request including a request for the self-introduction content of the target first user.
In a usage scenario of the invention, the second user can browse the WeChat friend circle of a friend (the target first user) on a mobile phone and request to view the friend's self-introduction content.
Step A2, the access request is sent to a server corresponding to the social application.
In a usage scenario of the present invention, the second user device sends a request to the server to access the self-introduction content of the friend (the target first user).
Step a3, obtaining self-introduction contents of the target first user matching the second user returned by the server, wherein the self-introduction contents of the target first user matching the second user are a group determined from multiple groups of self-introduction contents corresponding to the target first user.
In an embodiment of the present invention, the multiple sets of self-introductions corresponding to the target first user are obtained by the server in the following manner:
step S1, obtaining the publishing information submitted by the target first user in the social application of the first user equipment, and extracting the multimedia data from the publishing information;
step S2, identifying the multimedia data by using a neural network model, extracting corresponding style characteristic vectors, and grouping the multimedia data based on the style characteristic vectors;
step S3, obtaining style characteristic data corresponding to the score data from the score database, and matching the style characteristic data with the style characteristic vectors of each group of grouped multimedia data;
and step S4, synthesizing each group of multimedia data and the score data corresponding to the style characteristic data matched with the multimedia data to form a plurality of corresponding groups of self-introduction contents.
The specific acquisition mode can refer to the description of the above embodiments.
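Steps S1–S4 can be sketched end-to-end as follows. The similarity threshold, the use of the first member's vector as a group representative, and all function names are illustrative assumptions; the patent itself does not specify how the neural network groups the vectors or how the match is scored beyond using a style-feature comparison:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def build_self_introductions(media_items, score_index, threshold=0.9):
    """Sketch of steps S1-S4.

    media_items: list of (media_id, style_vector) pairs, where the style
      vector is assumed to come from a neural network model (step S2).
    score_index: dict mapping track_id -> style feature vector (step S3).
    Returns one (member_ids, track_id) pair per style group (step S4 would
    then synthesize each group with its matched soundtrack).
    """
    # S2: group media whose style vectors are close to each other.
    # Each group is represented by its first member's vector.
    groups = []
    for media_id, vec in media_items:
        for group in groups:
            if cosine(vec, group["rep"]) > threshold:
                group["members"].append(media_id)
                break
        else:
            groups.append({"rep": vec, "members": [media_id]})
    # S3: pick the soundtrack whose style vector is closest to each group.
    result = []
    for group in groups:
        best = max(score_index,
                   key=lambda t: cosine(score_index[t], group["rep"]))
        result.append((group["members"], best))
    return result
```

A real implementation would use the product-quantized music library index described later rather than an exhaustive scan over all tracks.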
In an embodiment of the present invention, the group determined from the multiple groups of self-introduction contents corresponding to the target first user is: a group selected by the server from the multiple groups of self-introduction contents based on the relationship information (such as colleague, relative, friend, and the like) between the target first user and the second user.
In an embodiment, the generating method further comprises: presenting self-introduction content of the target first user matched with the second user in a social application of the second user device.
The invention further discloses a social information generating method, and fig. 3 is a flowchart of the social information generating method according to an embodiment of the invention; referring to fig. 3, the method includes:
step 1, a server obtains published information submitted by a target first user in social application of first user equipment, and extracts multimedia data from the published information.
And 2, the server identifies the multimedia data by using a neural network model, extracts a corresponding style characteristic vector, and groups the multimedia data based on the style characteristic vector.
And 3, the server acquires style characteristic data corresponding to the score data from the score database, and matches the style characteristic data with the style characteristic vectors of each group of grouped multimedia data.
And 4, synthesizing each group of multimedia data and the music matching data corresponding to the style characteristic data matched with the multimedia data by the server to form a plurality of groups of corresponding self-introduction contents.
And 5, the second user equipment acquires an access request, submitted by a corresponding second user in the social application, for the social information of the target first user, wherein the access request includes a request for the self-introduction content of the target first user.
And 6, the second user equipment sends the access request to the server.
And 7, the second user equipment acquires self-introduction contents of the target first user matched with the second user, which are returned by the server, wherein the self-introduction contents of the target first user matched with the second user are a group determined from the multiple groups of self-introduction contents corresponding to the target first user.
The specific processing procedures of the above steps can refer to the description of the above embodiments.
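The server-side selection in steps 5–7 reduces to choosing one prepared self-introduction group per viewer relationship. A minimal sketch, assuming the groups are keyed by relationship label and that a fallback category exists (both assumptions are illustrative, not from the disclosure):

```python
def select_self_introduction(intros_by_relationship, relationship,
                             default="friend"):
    """Pick the self-introduction group matching the viewer's relationship.

    intros_by_relationship: dict such as {"classmate": intro_a,
      "client": intro_b}, prepared in advance by steps 1-4.
    Falls back to the default category when no exact match exists.
    """
    return intros_by_relationship.get(
        relationship, intros_by_relationship.get(default))
```

The dictionary lookup reflects the design choice described earlier: content is synthesized once per style group, so serving a viewer is cheap at request time.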
The invention also discloses an apparatus for social information generation, the apparatus comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the method described above.
Also disclosed is a computer readable medium having stored thereon computer program instructions executable by a processor to implement the method described above.
In summary, the social information generating method, system, apparatus, and computer readable medium provided by the present invention can match appropriate related data to different data according to the characteristics of the data, so as to generate different synthesized data.
In one usage scenario of the invention, the social application analyzes the content the user has posted in the past, extracts picture or video content highly relevant to the user, analyzes and groups that content by style, generates a different introduction video passage for each group, and selects suitable music according to the style of each introduction video. When a friend of the user visits the user's friend circle, the system selects the introduction video best suited to that friend for display.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In some embodiments, the software programs of the present application may be executed by a processor to implement the above steps or functions. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Effects or advantages referred to in the embodiments may not be reflected in the embodiments due to interference of various factors, and the description of the effects or advantages is not intended to limit the embodiments. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent various components of the embodiments will be apparent to those skilled in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other components, materials, and parts, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (16)

1. A social information generation method is applied to a server and is characterized by comprising the following steps:
the method comprises the steps of obtaining release information submitted by a target first user in social application of first user equipment, and extracting multimedia data from the release information;
identifying the multimedia data by using a neural network model, extracting corresponding style characteristic vectors, and grouping the multimedia data based on the style characteristic vectors;
obtaining style characteristic data corresponding to the score data from a score database, and matching the style characteristic data with style characteristic vectors of each group of grouped multimedia data;
and synthesizing each group of multimedia data and the music data corresponding to the style characteristic data matched with the multimedia data to form a plurality of groups of corresponding self-introduction contents.
2. The social information generating method of claim 1, wherein, after obtaining the publishing information submitted by the target first user in the social application and extracting the multimedia data from the publishing information, the method further comprises:
and screening out multimedia data meeting the user relevance from the multimedia data.
3. The social information generating method according to claim 2, wherein:
the multimedia data includes picture data and/or video data.
4. The social information generating method according to claim 3, wherein: the multimedia data satisfying the user correlation includes at least any one of:
picture data containing portrait information of the target first user;
video data with at least one frame of picture containing portrait information of the target first user;
picture data and/or video data captured by a camera of a first user device of the target first user.
5. The social information generating method according to claim 1, wherein:
the obtaining of style characteristic data corresponding to the score data from the score database, and the matching of the style characteristic data with the style characteristic vectors of each group of grouped multimedia data, comprises: determining, based on a second style music library index of the score database, the style characteristic data matched with the style characteristic vector of each group of grouped multimedia data according to a closest-cosine-distance criterion.
6. The social information generating method according to claim 5, wherein:
the method further comprises: selecting score data of different styles from an existing score database to form a first style music library index;

calculating the score data in the score database with a convolutional neural network to obtain feature vectors of the score data, whereby the first style music library index evolves into a vector matrix;

and performing vector quantization on the obtained index matrix using a product quantization algorithm to obtain a second style music library index, and establishing a mapping from style characteristic data to score data.
7. The social information generating method of claim 1, further comprising:
in the process of synthesizing each group of multimedia data and the music data corresponding to the style characteristic data matched with the multimedia data, if the multimedia data comprises video data and the video data comprises audio information, the audio information in the video data is adjusted based on a preset rule or target first user selection.
8. The social information generating method of claim 1, further comprising:
determining relationship information between a target first user and a second user, wherein the second user is a user in the social application;
selecting self-introduction contents matched with the second user from the plurality of groups of self-introduction contents based on the relationship information.
9. The social information generating method according to claim 8, wherein: determining relationship information of the target first user and the second user comprises:
acquiring social interaction data of the target first user and the second user;
determining relationship information of the target first user and the second user based on the social interaction data.
10. A social information generation method is applied to a second user device, and is characterized by comprising the following steps:
obtaining an access request of a second user to the social information of a target first user, wherein the access request is submitted by the second user in a social application of second user equipment, and the access request comprises self-introduction content of the target first user;
sending the access request to a server corresponding to the social application;
and acquiring the self-introduction content of the target first user matched with the second user returned by the server, wherein the self-introduction content of the target first user matched with the second user is a group determined from multiple groups of self-introduction contents corresponding to the target first user.
11. The social information generating method of claim 10, wherein the plurality of sets of self-introductions corresponding to the target first user are obtained by the server by:
the method comprises the steps of obtaining release information submitted by a target first user in social application of first user equipment, and extracting multimedia data from the release information;
identifying the multimedia data by using a neural network model, extracting corresponding style characteristic vectors, and grouping the multimedia data based on the style characteristic vectors;
obtaining style characteristic data corresponding to the score data from a score database, and matching the style characteristic data with style characteristic vectors of each group of grouped multimedia data;
and synthesizing each group of multimedia data and the music data corresponding to the style characteristic data matched with the multimedia data to form a plurality of groups of corresponding self-introduction contents.
12. The social information generating method according to claim 10 or 11, wherein the group determined from the multiple groups of self-introduction contents corresponding to the target first user is:
the server selects one group from the multiple groups of self-introduction contents based on the relationship information of the target first user and the second user.
13. The social information generating method of claim 10, further comprising:
presenting self-introduction content of the target first user matched with the second user in a social application of the second user device.
14. A social information generating method, the method comprising:
the method comprises the steps that a server obtains release information submitted by a target first user in social application of first user equipment, and extracts multimedia data from the release information;
the server identifies the multimedia data by using a neural network model, extracts a corresponding style characteristic vector, and groups the multimedia data based on the style characteristic vector;
the server acquires style characteristic data corresponding to the score data from a score database, and the style characteristic data are matched with the style characteristic vectors of each group of grouped multimedia data;
the server synthesizes each group of multimedia data and the music matching data corresponding to the style characteristic data matched with the multimedia data to form a plurality of groups of corresponding self-introduction contents;
the second user equipment acquires an access request for the social information of the target first user, which is submitted by a corresponding second user in a social application, wherein the access request comprises self-introduction content of the target first user;
the second user equipment sends the access request to the server;
and the second user equipment acquires self-introduction contents of the target first user matched with the second user, which are returned by the server, wherein the self-introduction contents of the target first user matched with the second user are a group determined from the multiple groups of self-introduction contents corresponding to the target first user.
15. An apparatus for social information generation, the apparatus comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform the method of any of claims 1 to 9.
16. A computer readable medium having stored thereon computer program instructions executable by a processor to implement the method of any one of claims 1 to 9.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870133A (en) * 2021-09-27 2021-12-31 北京字节跳动网络技术有限公司 Multimedia display and matching method, device, equipment and medium
CN113923517A (en) * 2021-09-30 2022-01-11 北京搜狗科技发展有限公司 Background music generation method and device and electronic equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038201A (en) * 2016-12-26 2017-08-11 阿里巴巴集团控股有限公司 Display methods, device, terminal and the server of personal homepage
CN110489581A (en) * 2019-08-23 2019-11-22 深圳前海微众银行股份有限公司 A kind of image processing method and equipment
CN110767201A (en) * 2018-07-26 2020-02-07 Tcl集团股份有限公司 Score generation method, storage medium and terminal equipment
WO2020034849A1 (en) * 2018-08-14 2020-02-20 腾讯科技(深圳)有限公司 Music recommendation method and apparatus, and computing device and medium
CN110933354A (en) * 2019-11-18 2020-03-27 深圳传音控股股份有限公司 Customizable multi-style multimedia processing method and terminal thereof
CN110971969A (en) * 2019-12-09 2020-04-07 北京字节跳动网络技术有限公司 Video dubbing method and device, electronic equipment and computer readable storage medium
CN111369375A (en) * 2020-03-17 2020-07-03 深圳市随手金服信息科技有限公司 Social relationship determination method, device, equipment and storage medium
CN111557014A (en) * 2017-12-28 2020-08-18 连株式会社 Method and system for providing multiple personal data
CN111831615A (en) * 2020-05-28 2020-10-27 北京达佳互联信息技术有限公司 Method, device and system for generating audio-video file
CN111857901A (en) * 2019-04-29 2020-10-30 上海掌门科技有限公司 Data processing method, method for generating session background, electronic device and medium
CN111918094A (en) * 2020-06-29 2020-11-10 北京百度网讯科技有限公司 Video processing method and device, electronic equipment and storage medium
CN112015942A (en) * 2020-08-28 2020-12-01 上海掌门科技有限公司 Audio processing method and device



Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210402