CN110674706B - Social contact method and device, electronic equipment and storage medium


Info

Publication number
CN110674706B
Authority
CN
China
Prior art keywords
information
terminal
chat
target face
face model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910838034.0A
Other languages
Chinese (zh)
Other versions
CN110674706A (en)
Inventor
常向月 (Chang Xiangyue)
袁小薇 (Yuan Xiaowei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhuiyi Technology Co Ltd
Priority to CN201910838034.0A
Publication of CN110674706A
Application granted
Publication of CN110674706B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The application discloses a social contact method and device, an electronic device, and a storage medium. The method includes: acquiring interaction demand information of the user corresponding to a first terminal and chat information sent by a second terminal, where the interaction demand information includes at least one of social information and online behavior characteristics; determining a target face model according to the interaction demand information; obtaining expression parameters of the target face model according to the chat information; driving the expression of the target face model based on the expression parameters to obtain a target face image; and generating a chat video according to the chat information and the target face image, and outputting the chat video. A user can thus select a visual persona according to personal preference for video chat with others, and visual communication is possible even when the other party is not in front of a camera, improving the user's social experience.

Description

Social contact method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to a social method, an apparatus, an electronic device, and a storage medium.
Background
With the popularization of mobile terminal devices, the number of mobile social users is growing rapidly. At present, people generally install one or more social applications on their mobile terminals, through which they can communicate with others more conveniently.
However, existing social applications generally support chatting only through voice or text. This single mode of communication offers little visual interactivity, which degrades the user experience.
Disclosure of Invention
In view of the above problems, the present application provides a social method and apparatus, an electronic device, and a storage medium that enable visual social interaction and effectively improve the user experience.
In a first aspect, an embodiment of the present application provides a social method applied to a first terminal of a social system, the social system further including a second terminal in communication with the first terminal. The method includes:
acquiring interaction demand information of the user corresponding to the first terminal and chat information sent by the second terminal, where the interaction demand information includes at least one of social information and online behavior characteristics;
determining a target face model according to the interaction demand information;
obtaining expression parameters of the target face model according to the chat information;
driving the expression of the target face model based on the expression parameters to obtain a target face image; and
generating a chat video according to the chat information and the target face image, and outputting the chat video.
Optionally, before the acquiring of the interaction demand information of the user corresponding to the first terminal and the chat information sent by the second terminal, the method further includes:
determining that the user corresponding to the second terminal is a marked contact.
Optionally, determining the target face model according to the interaction demand information includes:
determining a type of the user corresponding to the second terminal, the type including at least one of relative, friend, and colleague; and
determining the target face model according to the type and the interaction demand information.
Optionally, the social information includes an address book and a call record, and determining the target face model according to the interaction demand information includes:
extracting a plurality of pieces of first person information from the address book, and determining, according to the call record, the affinity between each piece of first person information and the user corresponding to the first terminal;
determining first interactive person information among the plurality of pieces of first person information according to the affinity;
acquiring a first person image corresponding to the first interactive person information; and
determining the target face model according to the first person image.
Optionally, determining the first interactive person information among the plurality of pieces of first person information according to the affinity includes:
determining, for each piece of first person information, whether its affinity with the user corresponding to the first terminal is greater than or equal to a preset affinity; and
determining any first person information whose affinity is greater than or equal to the preset affinity as the first interactive person information.
Optionally, the online behavior characteristics include follow records, like records, browsing records, and comment records of the user corresponding to the first terminal, and determining the target face model according to the interaction demand information includes:
obtaining a plurality of pieces of second person information according to the follow records, and determining the attention degree of the user corresponding to the first terminal to each piece of second person information according to the like records, the comment records, and the browsing records;
determining second interactive person information among the plurality of pieces of second person information according to the attention degree;
acquiring a second person image corresponding to the second interactive person information; and
determining the target face model according to the second person image.
Optionally, determining the target face model according to the interaction demand information includes:
obtaining custom person information uploaded by the user corresponding to the first terminal;
determining the custom person information as third interactive person information;
acquiring a third person image corresponding to the third interactive person information; and
determining the target face model according to the third person image.
Optionally, generating the chat video according to the chat information and the target face image and outputting the chat video includes:
acquiring voiceprint information corresponding to the target face model;
generating chat audio based on the voiceprint information and the chat information; and
generating the chat video according to the chat audio and the target face image, and outputting the chat video.
In a second aspect, an embodiment of the present application provides a social method applied to a second terminal of a social system, the social system further including a first terminal in communication with the second terminal. The method includes:
acquiring interaction demand information of the user corresponding to the first terminal and chat information input by the user corresponding to the second terminal, where the interaction demand information includes at least one of social information and online behavior characteristics;
determining a target face model according to the interaction demand information;
obtaining expression parameters of the target face model according to the chat information;
driving the expression of the target face model based on the expression parameters to obtain a target face image; and
generating a chat video according to the chat information and the target face image, and sending the chat video to the first terminal.
In a third aspect, an embodiment of the present application provides a social method applied to a server of a social system, the system further including a first terminal and a second terminal each in communication with the server. The method includes:
acquiring interaction demand information of the user corresponding to the first terminal and chat information sent by the second terminal, where the interaction demand information includes at least one of social information and online behavior characteristics;
determining a target face model according to the interaction demand information;
obtaining expression parameters of the target face model according to the chat information;
driving the expression of the target face model based on the expression parameters to obtain a target face image; and
generating a chat video according to the chat information and the target face image, and sending the chat video to the first terminal.
In a fourth aspect, an embodiment of the present application provides a social device applied to a first terminal of a social system, the social system including a second terminal in communication with the first terminal. The device includes a first information acquisition module, a first target face model determination module, a first expression parameter acquisition module, a first target face image acquisition module, and a video output module.
The first information acquisition module is configured to acquire interaction demand information of the user corresponding to the first terminal and chat information sent by the second terminal, where the interaction demand information includes at least one of social information and online behavior characteristics.
The first target face model determination module is configured to determine a target face model according to the interaction demand information.
The first expression parameter acquisition module is configured to obtain expression parameters of the target face model according to the chat information.
The first target face image acquisition module is configured to drive the expression of the target face model based on the expression parameters to obtain a target face image.
The video output module is configured to generate a chat video according to the chat information and the target face image and output the chat video.
In a fifth aspect, an embodiment of the present application provides a social device applied to a second terminal of a social system, the social system further including a first terminal in communication with the second terminal. The device includes:
a second information acquisition module, configured to acquire interaction demand information of the user corresponding to the first terminal and chat information input by the user corresponding to the second terminal, where the interaction demand information includes at least one of social information and online behavior characteristics;
a second target face model determination module, configured to determine a target face model according to the interaction demand information;
a second expression parameter acquisition module, configured to obtain expression parameters of the target face model according to the chat information;
a second target face image acquisition module, configured to drive the expression of the target face model based on the expression parameters to obtain a target face image; and
a first video sending module, configured to generate a chat video according to the chat information and the target face image and send the chat video to the first terminal.
In a sixth aspect, an embodiment of the present application provides a social device applied to a server of a social system, the system further including a first terminal and a second terminal each in communication with the server. The device includes:
a third information acquisition module, configured to acquire interaction demand information of the user corresponding to the first terminal and chat information sent by the second terminal, where the interaction demand information includes at least one of social information and online behavior characteristics;
a third target face model determination module, configured to determine a target face model according to the interaction demand information;
a third expression parameter acquisition module, configured to obtain expression parameters of the target face model according to the chat information;
a third target face image acquisition module, configured to drive the expression of the target face model based on the expression parameters to obtain a target face image; and
a second video sending module, configured to generate a chat video according to the chat information and the target face image and send the chat video to the first terminal.
In a seventh aspect, an embodiment of the present application provides an electronic device, including: a memory; one or more processors coupled to the memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of the first, second, or third aspect.
In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, where the program code can be invoked by a processor to perform the method of the first, second, or third aspect.
According to the social contact method and device, the electronic device, and the storage medium, when the user corresponding to the first terminal socializes with the user corresponding to the second terminal, interaction demand information of the user corresponding to the first terminal is acquired, and a target face model is determined according to it. Chat information of the user corresponding to the second terminal is obtained, and expression parameters of the target face model are determined according to the chat information. The target face model is driven by the expression parameters to obtain a virtual target face image that meets the user's interaction needs; finally, a chat video is generated according to the chat information and the target face image and played on the first terminal. The user corresponding to the first terminal can thus communicate with the user corresponding to the second terminal while viewing a video that displays a face image of their choosing, ensuring a better communication experience. Moreover, visual social interaction is achieved even when the user corresponding to the second terminal is not in front of a camera, which satisfies the user's visual sense and further improves the user experience.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 shows a flowchart of a social method provided in an embodiment of the present application.
Fig. 3 shows a flow chart of a social method according to another embodiment of the present application.
Fig. 4 shows a flowchart of the method of step S210 in the social method provided in an embodiment of the present application.
Fig. 5 is a schematic diagram illustrating a chat interface of a first terminal according to an embodiment of the present application.
Fig. 6 is a schematic diagram illustrating a chat interface of a first terminal according to another embodiment of the present application.
Fig. 7 shows a flowchart of a social method according to another embodiment of the present application.
Fig. 8 is a flowchart illustrating a social method according to still another embodiment of the present application.
Fig. 9 shows a flowchart of the method of step S420 in the social method according to an embodiment of the present application.
Fig. 10 shows a schematic method flowchart of step S420 in the social method according to another embodiment of the present application.
Fig. 11 shows a schematic method flow diagram of step S420 in the social method provided in another embodiment of the present application.
Fig. 12 is a flowchart illustrating a social method according to yet another embodiment of the present application.
Fig. 13 is a flowchart illustrating a social method according to still another embodiment of the present application.
FIG. 14 shows a block diagram of a social device, provided in one embodiment of the present application.
Fig. 15 shows a block diagram of a social device according to another embodiment of the present application.
Fig. 16 shows a block diagram of a social device according to another embodiment of the present application.
Fig. 17 is a block diagram of an electronic device for performing a social method according to an embodiment of the present application.
Fig. 18 is a storage unit according to an embodiment of the present application, configured to store or carry program code for implementing a social method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, mobile electronic devices such as mobile phones are increasingly widespread, and smartphones have become essential personal items. With the rapid development of the mobile internet, various social applications have appeared on mobile terminals, many of which support chatting through text and voice.
However, as technology develops, people increasingly demand a humanized experience from intelligent electronic products. When communicating with others, users no longer want to be limited to a single mode such as text or voice; they also want to perceive visual information such as the other party's expressions and appearance during communication.
Some social applications now enable two or more users to conduct online video chats, allowing users to see the appearance and expressions of the people they are communicating with. However, during actual research the inventors found that online video chat in such applications requires each chatting user to stay in front of their phone's camera, which is tiring, and it is easily limited by lighting conditions: in dim light the visual communication effect is poor. This inconveniences users during chats and harms their social experience.
To solve the above problems, the inventors propose the social method and apparatus, electronic device, and storage medium of the embodiments of the present application, which let a user select a visual persona according to personal preference for video chat with others; visual communication is achieved even when the other party is not in front of a camera, improving the user's social experience.
In order to better understand the social method, the social device, the electronic device, and the storage medium provided in the embodiments of the present application, an application environment suitable for the embodiments of the present application is described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment suitable for the embodiment of the present application. The social method provided by the embodiment of the application can be applied to the social system 100 shown in fig. 1. The social system may include a first terminal 101, a second terminal 102, and a server 103, where the server 103 is in communication with the first terminal 101 and the second terminal 102, respectively, where the server 103 may be a conventional server or a cloud server, and is not limited herein.
The first terminal 101 and the second terminal 102 may be various electronic devices having display screens and supporting data input, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, wearable electronic devices, and the like. Specifically, the data input may be based on a voice module provided on the first terminal 101 or the second terminal 102 to input voice, a character input module to input characters, an image input module to input images, and the like, so as to input chat information on the first terminal 101 or the second terminal 102.
Specifically, the server 103 runs a corresponding server-side application, and each terminal runs a client application. A user can register a user account with the server 103 through the client application and communicate with the server 103 based on that account: for example, the user logs in to the account in the client application and inputs text, voice, or image information through it. After receiving the user's input, the client application sends the information to the server 103, so that the server 103 can receive, process, and store it; the server 103 may also return corresponding output information to the first terminal 101 or the second terminal 102.
The server 103 is provided with an apparatus for processing information input by users; the apparatus can fuse the chat information input by a user with a face model to obtain a visual chat video.
In some embodiments, the apparatus for processing user input may instead be disposed on the first terminal 101 or the second terminal 102, so that social communication does not depend on establishing communication with the server 103. In that case the social system may include only the first terminal 101 and the second terminal 102, with the first terminal 101 communicating directly with the second terminal 102.
The above application environments are only examples for facilitating understanding, and it is to be understood that the embodiments of the present application are not limited to the above application environments.
The social method, the social device, the electronic device, and the storage medium provided by the embodiments of the present application are described in detail below with specific embodiments.
Referring to fig. 2, fig. 2 is a flow chart illustrating a social method according to an embodiment of the present application. The method may be applied to a first terminal of a social system, the social system further comprising a second terminal in communication with the first terminal, the method may comprise:
step S110, acquiring interaction demand information of a user corresponding to the first terminal and chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics.
In some embodiments, the interaction demand information may be obtained from the first terminal or from an application associated with the first terminal. Specifically, the interaction demand information may be used to track, on the network, person information that has a close relationship with the user corresponding to the first terminal. The person information may include photos, videos, and other material that shows the person's appearance. Optionally, the interaction demand information may include the user's social information obtained from a social platform or from the user's mobile terminal; the social information may include the user's phone contact list, friend lists in social platforms installed on the phone (e.g., QQ, WeChat), and the like, through which person information of people in daily contact with the user, such as relatives, friends, and colleagues, can be tracked.
In some embodiments, the interaction demand information may further include the user's online behavior characteristics, which may be the user's follows, comments, likes, and similar actions on social platforms, live-streaming platforms, forum platforms, and so on. Person information that interests the user can be found from these characteristics: for example, a celebrity the user follows on a social platform (such as Weibo), a blogger the user frequently browses, comments on, and likes, or the singer of songs the user often listens to. The persons may include not only real people but also virtual characters, such as characters in animations and comics.
In other embodiments, the interaction demand information may be custom person information uploaded by the user, which may not be findable on the network; the user may upload the person's photos, videos, and the like through the terminal device.
The chat information includes, but is not limited to, various types of information such as voice information, text information, image information, and motion information. The voice information may include audio information of a language class (e.g., chinese, english audio, etc.) and audio information of a non-language class (e.g., music audio, etc.); the text information may include text information of a character class (e.g., chinese, english, etc.) and text information of a non-character class (e.g., special symbols, character expressions, etc.); the image information may include still image information (e.g., still pictures, photographs, etc.) as well as moving image information (e.g., moving pictures, video images, etc.); the motion information may include user motion information (e.g., user gestures, body motions, expressive motions, etc.) as well as terminal motion information (e.g., position, attitude, and motion state of the terminal device such as shaking, rotation, etc.).
It can be understood that information collection can be performed through different types of information input modules on the terminal device corresponding to different types of chat information. For example, voice information of a user may be collected through an audio input device such as a microphone, text information input by the user may be collected through a touch screen or a physical key, image information may be collected through a camera, and motion information may be collected through an optical sensor, a gravity sensor, or the like.
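As a concrete illustration of the input modalities listed above, a chat message might be modeled as a tagged record. This is a minimal sketch; the class and field names are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ChatInfoType(Enum):
    VOICE = auto()   # speech or music audio
    TEXT = auto()    # characters, symbols, emoticons
    IMAGE = auto()   # photos, GIFs, video clips
    MOTION = auto()  # gestures, device shake/rotation

@dataclass
class ChatMessage:
    """One unit of chat information from an input module (illustrative)."""
    info_type: ChatInfoType
    payload: bytes                  # raw audio/text/image/sensor data
    timestamp_ms: int               # capture time, used later for A/V sync
    source_module: str = "unknown"  # e.g. "microphone", "touchscreen", "camera"
```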
In some embodiments, the step of acquiring the interaction demand information of the user corresponding to the first terminal may be performed before, after, or simultaneously with the step of acquiring the chat information sent by the second terminal.
And step S120, determining a target face model according to the interaction demand information.
In some embodiments, since person information closely related to the user corresponding to the first terminal can be tracked through the interaction demand information, pictures or videos related to that person information can be obtained as face image samples. For example, if the interaction demand information is the address book on the first terminal, the user's relatives can be tracked through the address book, and pictures or videos of a relative can then be obtained from the network or from the first terminal's photo album as face image samples. The face image samples are input into a machine learning model for training to obtain the target face model. Specifically, a specific face model may be pre-established and constructed from an average face and the face image samples: the average face is a basic face model for three-dimensional face modeling, and a face image is obtained according to the interaction demand information of the user corresponding to the first terminal and may be a picture or a video. Given the average face and a face image, a three-dimensional model of the face, i.e., the specific face model, can be reconstructed from the two-dimensional face image using face reconstruction techniques based on a 3D Morphable Model (3DMM). A face image sample may include a plurality of face images.
It can be understood that the established specific face model closely resembles the real face to be simulated in outline and form, but for the simulated face model to stand in for the real face, details such as skin material must be added to the specific face model; that is, the facial material in the face image is transferred onto the specific face model through material rendering to obtain the target face model. In this embodiment, the skin material of the real face may be extracted from the face image and then applied to the specific face model using texture mapping, yielding a target face model that can substitute for the real face in both surface structure and skin detail.
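The 3DMM reconstruction described above can be sketched numerically as a mean face plus weighted identity and expression offsets. The basis matrices and dimensions below are illustrative assumptions (a real 3DMM asset such as the Basel Face Model would supply them), not values from the patent.

```python
import numpy as np

def reconstruct_face(mean_shape, id_basis, expr_basis, alpha, beta):
    """3DMM-style reconstruction: average face plus identity and expression offsets.

    mean_shape : (3N,)   vertices of the 'average face', flattened
    id_basis   : (3N, K) identity (shape) basis vectors
    expr_basis : (3N, M) expression basis vectors
    alpha      : (K,)    identity coefficients fitted to the person's photos
    beta       : (M,)    expression coefficients (driven later, frame by frame)
    """
    return mean_shape + id_basis @ alpha + expr_basis @ beta

# Toy dimensions for illustration only.
N, K, M = 1000, 80, 46
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=3 * N)
id_basis = rng.normal(size=(3 * N, K))
expr_basis = rng.normal(size=(3 * N, M))

alpha = np.zeros(K)  # would be fitted to the sampled face images
beta = np.zeros(M)   # neutral expression
vertices = reconstruct_face(mean_shape, id_basis, expr_basis, alpha, beta).reshape(N, 3)
```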
And step S130, obtaining the expression parameters of the target face model according to the chat information.
In some embodiments, the chat information may be input into a visual prediction model to obtain the expression parameters of the target face model corresponding to the chat information. The visual prediction model can be obtained by training a neural network on a large number of training samples consisting of real-person speaking videos (real-person speaking images and the corresponding speaking audio) and the corresponding face model expression parameters. It is thus a model that converts audio into corresponding face model expression parameters: the obtained chat information is input, and the visual prediction model outputs the expression parameters of the target face model. As described above, the chat information may be voice or text; voice information can be input into the visual prediction model directly, while text information is first converted into voice information (the conversion is not described further here).
In this embodiment, the expression parameters of the target face model are a series of parameters for adjusting the target face model. The target face model may be a three-dimensional face model produced with 3D face modeling technology based on 3D Morphable Models (3DMM), and its details may closely resemble a human face. It can be understood that the obtained expression parameters are multiple groups of parameter sequences varying over time, and each group of expression parameters corresponds to a group of preset three-dimensional key points of the face model, aligned in time with the chat information.
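The interface of such a visual prediction model can be sketched as a mapping from frame-aligned audio features to per-frame expression coefficients. The feature type (MFCCs), frame rate, and the stand-in network below are assumptions for illustration; the patent specifies only the audio-to-parameter conversion itself.

```python
import numpy as np

FRAME_RATE = 25  # assumed video frame rate; the text only requires time alignment

def predict_expression_params(audio_features: np.ndarray, model) -> np.ndarray:
    """Map frame-aligned audio features to per-frame expression parameters.

    audio_features : (T, F) array, e.g. MFCC windows, one row per video frame
    returns        : (T, M) array, one expression-coefficient set per frame
    """
    return model(audio_features)

# Stand-in for the trained neural network described in the text.
rng = np.random.default_rng(1)
weights = rng.normal(size=(13, 46)) * 0.1
dummy_model = lambda feats: np.tanh(feats @ weights)

T = 4 * FRAME_RATE  # 4 seconds of chat audio
params = predict_expression_params(rng.normal(size=(T, 13)), dummy_model)
assert params.shape == (T, 46)
```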
Step S140, the expression of the target face model is driven based on the expression parameters to obtain a target face image.
In some embodiments, the first terminal drives the expression of the target face model based on the expression parameters to obtain a target face image, in which the facial expression is determined by the expression parameters.
And step S150, generating a chat video according to the chat information and the target face image, and outputting the chat video.
In some embodiments, a corresponding chat video may be generated from the chat information and the expression-driven face image (i.e., the target face image), and played on the display screen of the first terminal. Specifically, the first terminal runs a social application that can drive the expression of the target face model according to the expression parameters. The chat information in the chat video corresponds to and is synchronized with the target face image: for example, when the chat information at some moment in the chat video is "Ah!", the target face image at that moment shows a surprised expression. The chat information containing "Ah!" may be text or voice.
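The assembly of step S150 can be sketched as rendering one frame per expression-coefficient set and muxing the frames with the chat audio. The render_frame and mux callables are stand-ins for a 3D renderer and an A/V muxer (e.g. an ffmpeg-based one); neither is specified by the patent.

```python
def generate_chat_video(expr_param_seq, face_model, chat_audio, render_frame, mux):
    """Drive the target face model frame by frame, then combine with the chat audio."""
    frames = [render_frame(face_model, beta) for beta in expr_param_seq]
    return mux(frames, chat_audio)  # frames and audio share one timeline
```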
In the social method of this embodiment, when the user corresponding to the first terminal (hereinafter, the first user) socializes with the user corresponding to the second terminal (hereinafter, the second user), the interaction demand information of the first user is acquired and the target face model is determined from it. The chat information of the second user is obtained, and the expression parameters of the target face model are determined from the chat information. The target face model is driven by the expression parameters to obtain a virtual target face image that meets the first user's interaction needs, and finally a chat video is generated from the chat information and the target face image and played on the first terminal. The first user communicates while viewing a video that shows a face image of their choosing, and feels no strangeness even when video-chatting with unfamiliar people, ensuring a better communication experience. Moreover, visual social interaction is achieved even when the second user is not in front of a camera, which satisfies the user's visual sense and further improves the user experience. In addition, the video chat is not limited by environmental conditions: it can proceed even in poor light.
Referring to fig. 3, fig. 3 is a flow chart illustrating a social method according to another embodiment of the present application. The social method of the embodiment may include:
and step S210, determining that the user corresponding to the second terminal is the marked contact.
In some embodiments, the first terminal may determine that the user corresponding to the second terminal is a marked contact. A marked contact is a contact for whom, during video chat, a specific face image should be displayed according to the interaction demand information of the user corresponding to the first terminal. For example, friends of the user corresponding to the first terminal are marked as marked contacts in advance; when the user corresponding to the second terminal is determined to be such a friend, the subsequent steps of generating a chat video according to the interaction demand information are performed.
In some embodiments, as shown in fig. 4, step S210 may include:
step S211, determining whether the user corresponding to the second terminal is a marked contact.
As an example, the first user's boss or an elder may be an unmarked contact, while the first user's friends, brothers, and strangers may be marked contacts.
Step S212, when the user corresponding to the second terminal is not the marked contact, the chat information sent by the second terminal is obtained, a default chat video is generated according to the chat information and a preset default face model, and the default chat video is output.
In some embodiments, as shown in fig. 5, when the user corresponding to the second terminal is not a marked contact, for example when the second user is the first user's boss or an elder, it is not appropriate to determine the face image in the chat video from the interaction demand information. Therefore, a default chat video can be generated from a preset default face model and the chat information, and output at the first terminal. The default face model may be a relatively formal or serious face, ensuring that the first user maintains a serious, formal demeanor during the video chat.
Step S213, when the user corresponding to the second terminal is the marked contact, acquiring the interaction demand information of the user corresponding to the first terminal and the chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics.
When the second user is a marked contact, for example a friend, brother, or stranger, the interaction demand information of the first user and the chat information sent by the second terminal can be acquired.
Step S220, acquiring interaction demand information of a user corresponding to the first terminal and chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics.
And step S230, determining a target face model according to the interaction demand information.
Step S240, the expression parameters of the target face model are obtained according to the chat information.
And step S250, driving the expression of the target face model based on the expression parameters to obtain a target face image.
And step S260, generating a chat video according to the chat information and the target face image, and outputting the chat video.
In this embodiment, it is considered that displaying a special face according to the first user's interaction demand information may not suit every chat object (the user corresponding to the second terminal). Therefore, before conducting a video chat based on the interaction demand information, it is first determined that the user corresponding to the second terminal is a marked contact, and only then is the target face image in the chat video determined according to the first user's interaction demand information. This keeps the displayed face image appropriate for different chat objects and improves the user's chat-video experience. As shown in fig. 6, for example, when the second user is a marked contact such as a friend or brother of the first user, the target face image may be determined from the interaction demand information: during the video chat, the first terminal may display the face of a cartoon character the first user likes in place of the marked contact's real face, making social contact more interesting. For another example, when the second user is a stranger marked as a contact, the first terminal may display the face of a person close to the first user, or a cartoon face, instead of the stranger's face, so that the first user feels no strangeness or embarrassment during communication, improving the user experience. A dispatch sketch follows.
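The branch of steps S211 to S213 can be sketched as a simple dispatch. The Contact class and is_marked flag are illustrative assumptions; build_model stands in for the interaction-demand pipeline of the later steps (type lookup, affinity or attention scoring, custom uploads).

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    is_marked: bool  # illustrative flag for the patent's "marked contact"

def select_face_model(contact, demand_info, default_model, build_model):
    """Marked contacts get a personalized model; everyone else (e.g. a boss
    or an elder) gets the preset default face."""
    if not contact.is_marked:
        return default_model           # formal/serious preset face
    return build_model(demand_info)    # e.g. a favorite cartoon or close person
```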
Referring to fig. 7, fig. 7 is a flow chart illustrating a social method according to another embodiment of the present application. The social method of the embodiment may include:
step S310, obtaining interaction demand information of a user corresponding to the first terminal and chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics.
Step S320, determining a type of the user corresponding to the second terminal, where the type includes at least one of relatives, friends, and colleagues.
In some embodiments, the first terminal may look up the second user's classification in the local address book or in the friend list of a social platform installed on the first terminal, so as to determine the second user's type.
And step S330, determining a target face model according to the type and the interaction demand information.
In some embodiments, the target face model may be determined according to the second user's type and the interaction demand information corresponding to that type. For example, if the second user's type is relative, the interaction demand information may be the address book on the first terminal, through which the first user's relatives can be tracked, so photos or videos of a relative can be obtained and the target face model generated from them. For another example, if the second user's type is colleague, the interaction demand information may be a contact list in office software (e.g., DingTalk) or instant messaging software (e.g., QQ, WeChat) on the first terminal, through which the first user's colleagues can be tracked, so a colleague's photos or videos can be obtained to generate the target face model.
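The type-to-source selection just described might be expressed as a lookup table. The keys and source names below are assumptions for illustration; the text gives relatives mapped to the address book and colleagues mapped to office/IM contact lists as examples.

```python
SOURCE_BY_TYPE = {
    "relative": "phone_address_book",
    "colleague": "office_contact_list",   # e.g. DingTalk, QQ, WeChat
    "friend": "social_platform_friends",
}

def pick_person_source(contact_type: str) -> str:
    # Fall back to user-uploaded custom person information (step S421C).
    return SOURCE_BY_TYPE.get(contact_type, "custom_upload")
```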
And step S340, obtaining the expression parameters of the target face model according to the chat information.
And step S350, driving the expression of the target face model based on the expression parameters to obtain a target face image.
And step S360, generating a chat video according to the chat information and the target face image, and outputting the chat video.
In this embodiment, by determining the type of the user corresponding to the second terminal and determining the target face model according to the type and the interaction demand information, a better-matched target face model can be recommended to the user corresponding to the first terminal, improving the video-chat experience.
Referring to fig. 8, fig. 8 is a flow chart illustrating a social method according to still another embodiment of the present application. The social method of the embodiment may include:
step S410, acquiring interaction demand information of a user corresponding to the first terminal and chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics.
And step S420, determining a target face model according to the interaction demand information.
In some embodiments, the social information includes an address book and a call record; as shown in fig. 9, step S420 includes:
Step S421A, extracting a plurality of pieces of first person information from the address book, and determining, according to the call record, the affinity between each piece of first person information and the user corresponding to the first terminal.
In some embodiments, the affinity between each person in the address book and the user can be determined by comparing the number of calls or the total call duration with that person. Generally, a person who communicates with the user many times, or whose calls with the user are long, has a high affinity with the user. The affinity may therefore be scored directly from the call count: for example, if a person in the address book exchanged 50 calls with the user within one month, the affinity may be set to 50; at 1 point per call, 40 calls give an affinity of 40. The affinity may also be determined directly from call duration: for example, if the total call duration with a person in one month is 300 minutes and the affinity is scored at 1 point per 10 minutes, the affinity is 30. Calculating affinity from the call count or call duration makes it convenient and quick to determine the affinity between the user and each person in the address book.
In some embodiments, affinity may be calculated by combining the call count and the call duration, for example weighting each at 50%. If the user exchanged 40 calls with a person in a month with a total call duration of 500 minutes, the affinity is 40 × 50% + (500 ÷ 10) × 50% = 45. Combining both factors makes the affinity reflect the relationship between the person and the user more truthfully.
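A minimal sketch of the combined scoring above, using the example weights and figures from the text; the function name and signature are assumptions.

```python
def affinity(call_count: int, call_minutes: float,
             w_count: float = 0.5, w_duration: float = 0.5) -> float:
    """Affinity as described above: 1 point per call, 1 point per 10 minutes
    of calls, each component weighted 50% (the example weights in the text)."""
    return call_count * w_count + (call_minutes / 10) * w_duration

# Worked example from the text: 40 calls and 500 call-minutes in one month.
assert affinity(40, 500) == 45.0
```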
In step S422A, the first interactive person information is determined among the plurality of pieces of first person information according to the affinity.
In some embodiments, step S422A may include: determining, for each piece of first person information, whether its affinity with the user corresponding to the first terminal is greater than or equal to a preset affinity; and determining any first person information whose affinity is greater than or equal to the preset affinity as the first interactive person information. For example, if the preset affinity is 50 and several people in the address book have an affinity of at least 50 with the user, one of them can be selected at random and that person's information used as the first interactive person information. To ensure the preset affinity never exceeds the maximum affinity, it may be set to 80% of the maximum. Selecting at random among the persons whose affinity exceeds the preset affinity guarantees both high affinity and a degree of randomness, keeping a certain freshness in the user's video chats.
In other embodiments, step S422A may include: comparing the affinities between the user and each piece of first person information, and determining the first person information with the highest affinity as the first interactive person information. Using the most familiar person's information lets the user chat with the most familiar face image, ensuring a sense of closeness during video chat. Both strategies are sketched below.
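A minimal sketch of the two selection strategies just described, assuming the affinities have already been scored; the function name and dict shape are illustrative.

```python
import random

def pick_first_interactive_person(affinities: dict[str, float],
                                  randomize: bool = True) -> str:
    """Select the first interactive person from a {person: affinity} map.

    Either a random pick among persons above a threshold set to 80% of the
    maximum affinity (the "preset affinity"), or simply the closest person.
    """
    if randomize:
        threshold = 0.8 * max(affinities.values())
        qualified = [p for p, a in affinities.items() if a >= threshold]
        return random.choice(qualified)         # keeps some freshness
    return max(affinities, key=affinities.get)  # the most familiar face
```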
In step S423A, a first person image corresponding to the first interactive person information is acquired.
In some embodiments, the first terminal may search a third-party platform for a person image corresponding to the first interactive person information, or retrieve a first person image that the user has stored on the first terminal. For example, if the first interactive person information identifies the mother of the user corresponding to the first terminal, a photo or video of the mother may be retrieved from the first terminal's local album to obtain the first person image. A person image includes at least one of a still picture, a moving picture, and a video.
In step S424A, the target face model is determined according to the first person image.
In some embodiments, the online behavior characteristics include follow records, like records, browsing records, and comment records of the user corresponding to the first terminal; as shown in fig. 10, step S420 may include:
Step S421B, obtaining a plurality of pieces of second person information according to the follow records, and determining the attention degree of the user corresponding to the first terminal to each piece of second person information according to the like records, the comment records, and the browsing records.
The follow records may be lists of people the user follows on social platforms, forum platforms, and music platforms; such a list records a number of followed persons. The like records, browsing records, and comment records may be the recorded numbers of likes, views, and comments the user gave a followed person within a certain period; if the followed person is a singer, the browsing records may also include the user's play counts for the singer's songs.
In some embodiments, the user's attention degree toward a followed person can be calculated from one of the like count, comment count, and view count within a certain period. For example, if the user commented on a followed person 50 times in one month at 1 attention point per comment, the attention degree is 50.
In some embodiments, the attention degree can be calculated by combining the like count, comment count, and view count, for example weighting each at 30%, with each like, comment, or view worth 1 point. If in one month the user liked a followed person 60 times, commented 30 times, and viewed 90 times, the attention degree is 60 × 30% + 30 × 30% + 90 × 30% = 54. Combining the like, comment, and browsing records ensures the accuracy and authenticity of the attention degree.
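A minimal sketch of the combined attention score, using the example weights and figures from the text; selecting the second interactive person then mirrors the affinity-based sketch above (a threshold at 80% of the maximum, or simply the maximum).

```python
def attention(likes: int, comments: int, views: int, w: float = 0.3) -> float:
    """Attention degree as described above: likes, comments, and views each
    weighted 30%, at 1 point per action (the example weights in the text)."""
    return (likes + comments + views) * w

# Worked example from the text: 60 likes, 30 comments, 90 views in one month.
assert abs(attention(60, 30, 90) - 54.0) < 1e-9
```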
In step S422B, the second interactive personal information is determined among the plurality of second personal information according to the degree of attention.
In some embodiments, step S422B may include: respectively judging whether the attention of the user to each piece of second personal information is greater than or equal to a preset attention; and determining any second person information with the attention degree larger than or equal to the preset attention degree as second interactive person information. For example, the preset attention degree is 50, and there are a plurality of people the user pays attention to, and one person may be randomly selected from the plurality of persons, and the personal information of the person may be used as the second interactive personal information. To ensure that the preset attention does not exceed the maximum attention, the preset attention may be set to 80% of the maximum attention. In the embodiment, any one of the character information with the attention degree greater than the preset attention degree is selected as the second interactive character information, so that the user can be ensured to have higher attention degree on the interactive character, the second interactive character information has certain randomness, and the user can be ensured to keep certain freshness during video chat.
In other embodiments, step S422B may include: comparing the user's attention degrees to the pieces of second person information; and determining the second person information with the highest attention degree as the second interactive person information.
In this embodiment, the person the user pays the most attention to is selected as the second interactive person information. People usually feel goodwill toward those they follow, so during interaction the user video-chats with a face image he or she already likes, which can improve the user experience.
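Both selection strategies for step S422B can be pictured with the following minimal sketch; the mapping from person identifiers to attention degrees is an assumed data shape, chosen only for illustration.

import random

def pick_random_above_threshold(attention: dict, preset: float):
    """First embodiment: randomly choose any followed person whose attention
    degree reaches the preset value, preserving some freshness."""
    candidates = [person for person, degree in attention.items() if degree >= preset]
    return random.choice(candidates) if candidates else None

def pick_most_attended(attention: dict):
    """Second embodiment: choose the person with the highest attention degree."""
    return max(attention, key=attention.get) if attention else None

attention = {"singer_a": 54.0, "actor_b": 72.5, "writer_c": 31.0}
preset = 0.8 * max(attention.values())  # preset set to 80% of the maximum, as above
print(pick_random_above_threshold(attention, preset))  # "actor_b" (sole candidate here)
print(pick_most_attended(attention))                   # "actor_b"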
In step S423B, a second person image corresponding to the second interactive personal information is acquired.
In step S424B, a target face model is determined according to the second person image.
In this embodiment, the second interactive person information is determined by calculating the user's attention degree to each person, which ensures that the interactive person is someone the user likes, such as a favorite actor, singer, or other public figure. The user therefore feels goodwill toward the target face image generated from the second interactive person information during video chat, improving the user's social experience.
In other embodiments, as shown in fig. 11, step S420 includes:
step S421C, obtaining the user-defined personal information uploaded by the user corresponding to the first terminal.
The user-defined person information may correspond to a real person or to a virtual character.
In step S422C, the custom personal information is determined as the third interactive personal information.
The third interactive person information is analogous to the first interactive person information and the second interactive person information.
In step S423C, a third person image corresponding to the third interactive personal information is acquired.
The third person image is analogous to the first person image and the second person image.
In step S424C, a target face model is determined according to the third person image.
In this embodiment, the user can upload user-defined person information according to his or her own preference, from which the target face image used in the video chat is generated. This guarantees the user flexibility and freedom in the video chat, thereby further improving the user experience.
Step S430, obtaining the expression parameters of the target face model according to the chat information.
Step S440, the expression of the target face model is driven based on the expression parameters, and a target face image is obtained.
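Steps S430 and S440 together can be pictured as the sketch below, assuming a face model driven by per-frame expression parameters; FaceModel, Frame, and drive_face_model are hypothetical names, since the embodiment does not fix a parameterization or rendering engine.

from typing import Protocol, Sequence

Frame = bytes  # placeholder for one rendered target face image

class FaceModel(Protocol):
    """Hypothetical interface for the target face model."""
    def render(self, expression: Sequence[float]) -> Frame: ...

def drive_face_model(model: FaceModel, expression_track: Sequence[Sequence[float]]) -> list:
    """Apply the time-aligned expression parameters (step S430) frame by
    frame (step S440) to obtain the ordered target face images."""
    return [model.render(params) for params in expression_track]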
Step S450, obtaining voiceprint information corresponding to the target face model.
In some embodiments, if the interactive person is in the user's address book, that person's voiceprint information can be extracted from recordings of calls between the user and the person. If the interactive person is a public figure or a singer, the voiceprint information can be extracted from the public figure's published recordings or from the singer's songs.
Step S460, generating a chat audio based on the voiceprint information and the chat information.
In some embodiments, the chat audio corresponding to the voiceprint information can be obtained by inputting the voiceprint information, together with the chat information, into a pre-trained model. The model is pre-trained on sample voiceprint information and sample chat audio, where the sample chat audio can be extracted from videos or recordings.
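The audio-generation step might look like the sketch below; VoiceSynthesizer and its synthesize method are illustrative placeholders for the pre-trained model, which the embodiment mentions without naming.

from typing import Protocol

class VoiceSynthesizer(Protocol):
    """Hypothetical interface for the pre-trained model, trained on pairs
    of sample voiceprint information and sample chat audio."""
    def synthesize(self, voiceprint: bytes, text: str) -> bytes: ...

def generate_chat_audio(model: VoiceSynthesizer, voiceprint: bytes, chat_text: str) -> bytes:
    # Condition synthesis on the target speaker's voiceprint so the chat
    # text is spoken in the familiar voice (step S460).
    return model.synthesize(voiceprint, chat_text)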
Step S470, generating a chat video according to the chat audio and the target face image, and outputting the chat video.
In some embodiments, there are multiple target face images, which can be played in a certain order to obtain a visual video; the chat audio is then synchronized with the visual video, so that the chat video can be generated and output at the first terminal.
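One possible realization of this assembly step is sketched below, assuming OpenCV for writing frames and the ffmpeg command-line tool for muxing the audio; both tools are assumptions, as the embodiment names none.

import subprocess
import cv2  # OpenCV; an assumed dependency for frame writing

def frames_to_chat_video(frames, audio_path: str, out_path: str, fps: int = 25) -> None:
    """Play the target face images in order to form the visual video, then
    synchronize the chat audio with it to produce the chat video."""
    height, width = frames[0].shape[:2]
    silent_path = "visual_only.mp4"
    writer = cv2.VideoWriter(silent_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for frame in frames:
        writer.write(frame)  # each frame: HxWx3 BGR uint8 array (assumed)
    writer.release()
    # Mux the chat audio onto the visual video; -shortest trims to the shorter stream.
    subprocess.run(["ffmpeg", "-y", "-i", silent_path, "-i", audio_path,
                    "-c:v", "copy", "-c:a", "aac", "-shortest", out_path], check=True)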
In some embodiments, the first user may manually switch the target face model on the first terminal, for example, switching among the target face models corresponding to the first person image, the second person image, and the third person image.
In this embodiment, voiceprint information corresponding to the interactive person information is acquired and chat audio is generated from it, and this chat audio carries the communication between the first user and the second user. The user can thus not only communicate visually with a familiar or well-liked target face image but also communicate aurally with a familiar or well-liked voice, further improving the interactive experience.
Referring to fig. 12, fig. 12 is a flow chart illustrating a social method according to yet another embodiment of the present application. The method can be applied to a second terminal of a social system, where the social system further comprises a first terminal that communicates with the second terminal, and the method comprises the following steps:
step S510, obtaining interaction requirement information of a user corresponding to the first terminal and chat information input by the user corresponding to the second terminal, where the interaction requirement information includes at least one of social information and online behavior characteristics.
Step S520, determining a target face model according to the interaction demand information.
Step S530, obtaining the expression parameters of the target face model according to the chat information.
Step S540, driving the expression of the target face model based on the expression parameters to obtain a target face image.
Step S550, generating a chat video according to the chat information and the target face image, and sending the chat video to the first terminal.
Referring to fig. 13, fig. 13 is a flow chart illustrating a social method according to still another embodiment of the present application. The method can be applied to a server of a social system, where the system further comprises a first terminal and a second terminal that respectively communicate with the server, and the method comprises the following steps:
step S610, obtaining interaction demand information of a user corresponding to the first terminal and chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics.
Step S620, determining a target face model according to the interaction demand information.
Step S630, obtaining the expression parameters of the target face model according to the chat information.
Step S640, driving the expression of the target face model based on the expression parameters to obtain a target face image.
Step S650, generating a chat video according to the chat information and the target face image, and sending the chat video to the first terminal.
Referring to fig. 14, fig. 14 is a block diagram illustrating a social device according to an embodiment of the present disclosure. The apparatus 400 is applied to a first terminal of a social system including a second terminal communicating with the first terminal, the apparatus 400 including: a first information obtaining module 410, a first target face model determining module 420, a first expression parameter obtaining module 430, a first target face image obtaining module 440, and a video output module 450.
The first information obtaining module 410 is configured to obtain interaction requirement information of a user corresponding to the first terminal and chat information sent by the second terminal, where the interaction requirement information includes at least one of social information and online behavior characteristics.
The first target face model determining module 420 is configured to determine a target face model according to the interaction requirement information.
The first expression parameter obtaining module 430 is configured to obtain expression parameters of the target face model according to the chat information.
The first target face image obtaining module 440 is configured to drive an expression of the target face model based on the expression parameter, so as to obtain a target face image.
The video output module 450 is configured to generate a chat video according to the chat information and the target face image, and output the chat video.
Further, the apparatus includes a contact mark determining module configured to determine whether the user corresponding to the second terminal is a marked contact.
Further, the first target face model determining module 420 is specifically configured to determine the type of the user corresponding to the second terminal, the type including at least one of relatives, friends, and colleagues, and to determine the target face model according to the type and the interaction requirement information.
Further, the social information comprises an address book and an address record. The first target face model determining module 420 is specifically configured to extract a plurality of pieces of first person information from the address book and determine the intimacy between each piece of first person information and the user corresponding to the first terminal according to the address record; determine first interactive person information among the plurality of pieces of first person information according to the intimacy; acquire a first person image corresponding to the first interactive person information; and determine the target face model according to the first person image.
Further, determining first interactive personal information among the plurality of first personal information according to the intimacy degree includes:
and respectively judging whether the intimacy between each piece of first person information and the user corresponding to the first terminal is greater than or equal to a preset intimacy.
And determining any first person information with the intimacy greater than or equal to the preset intimacy as first interactive person information.
Further, the online behavior characteristics comprise an attention record, a like record, a browsing record, and a comment record of the user corresponding to the first terminal. The first target face model determining module 420 is specifically configured to acquire a plurality of pieces of second person information according to the attention record and determine the attention degree of the user corresponding to the first terminal to each piece of second person information according to the like record, the comment record, and the browsing record; determine second interactive person information among the plurality of pieces of second person information according to the attention degree; acquire a second person image corresponding to the second interactive person information; and determine the target face model according to the second person image.
Further, the first target face model determining module 420 is specifically configured to acquire user-defined person information uploaded by the user corresponding to the first terminal; determine the user-defined person information as third interactive person information; acquire a third person image corresponding to the third interactive person information; and determine the target face model according to the third person image.
Further, the video output module 450 is specifically configured to obtain voiceprint information corresponding to the target face model; generating chat audio based on the voiceprint information and the chat information; and generating a chat video according to the chat audio and the target face image, and outputting the chat video.
Referring to fig. 15, fig. 15 is a block diagram illustrating a social device according to another embodiment of the present application. The apparatus 500 is applied to a second terminal of a social system, the social system further includes a first terminal communicating with the second terminal, the apparatus 500 includes:
The second information obtaining module 510 is configured to obtain interaction requirement information of a user corresponding to the first terminal and chat information input by the user corresponding to the second terminal, where the interaction requirement information includes at least one of social information and online behavior characteristics.
The second target face model determining module 520 is configured to determine a target face model according to the interaction requirement information.
The second expression parameter obtaining module 530 is configured to obtain expression parameters of the target face model according to the chat information.
The second target face image obtaining module 540 is configured to drive an expression of the target face model based on the expression parameters to obtain a target face image.
The first video sending module 550 is configured to generate a chat video according to the chat information and the target face image and send the chat video to the first terminal.
Referring to fig. 16, fig. 16 is a block diagram illustrating a social device according to still another embodiment of the present application. The device 600 is applied to a server of a social system, the system further comprises a first terminal and a second terminal which are respectively communicated with the server, the device 600 comprises:
The third information obtaining module 610 is configured to obtain interaction requirement information of a user corresponding to the first terminal and chat information sent by the second terminal, where the interaction requirement information includes at least one of social information and online behavior characteristics.
The third target face model determining module 620 is configured to determine a target face model according to the interaction requirement information.
The third expression parameter obtaining module 630 is configured to obtain expression parameters of the target face model according to the chat information.
The third target face image obtaining module 640 is configured to drive an expression of the target face model based on the expression parameters to obtain a target face image.
The second video sending module 650 is configured to generate a chat video according to the chat information and the target face image and send the chat video to the first terminal.
The social device provided in the embodiment of the present application is used to implement the corresponding social method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again.
As will be clearly understood by those skilled in the art, the social device provided in the embodiment of the present application can implement each process in the foregoing method embodiments, and for convenience and brevity of description, the specific working processes of the device and the module described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, the coupling, direct coupling, or communication connection between the modules shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or modules, and may be in electrical, mechanical, or other forms.
In addition, each functional module in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 17, a block diagram of an electronic device 700 according to an embodiment of the present disclosure is shown. The electronic device 700 may be a smart phone, a tablet computer, or another electronic device capable of running applications. The electronic device 700 in the present application may include one or more of the following components: a processor 710, a memory 720, and one or more application programs, wherein the one or more application programs may be stored in the memory 720 and configured to be executed by the one or more processors 710, the one or more application programs being configured to perform the methods described in the foregoing method embodiments.
Processor 710 may include one or more processing cores. The processor 710 connects various parts of the entire electronic device 700 using various interfaces and circuitry, and performs the various functions of the electronic device 700 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 720 and invoking data stored in the memory 720. Optionally, the processor 710 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 710 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communications. It is understood that the modem may not be integrated into the processor 710 and may instead be implemented by a separate communication chip.
The memory 720 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 720 may be used to store instructions, programs, code sets, or instruction sets. The memory 720 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the electronic device 700 during use (e.g., phone books, audio and video data, chat log data), and the like.
Referring to fig. 18, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A social method applied to a first terminal of a social system, the social system further comprising a second terminal communicating with the first terminal, the method comprising:
determining whether a user corresponding to the second terminal is a marked contact;
when the user corresponding to the second terminal is not the marked contact, chat information sent by the second terminal is obtained, a default chat video is generated according to the chat information and the default face model, and the default chat video is output, wherein the default face model is a formal or serious face model;
when the user corresponding to the second terminal is the marked contact, acquiring interaction demand information of the user corresponding to the first terminal and chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics;
determining a target face model according to the interaction demand information;
obtaining expression parameters of the target face model according to the chat information, wherein the expression parameters correspond to the chat information in time;
driving the expression of the target face model based on the expression parameters to obtain a target face image;
and generating a chat video according to the chat information and the target face image, and outputting the chat video.
2. The method of claim 1, wherein the determining a target face model according to the interaction requirement information comprises:
determining the type of a user corresponding to the second terminal, wherein the type comprises at least one of relatives, friends and colleagues;
and determining the target face model according to the type and the interaction demand information.
3. The method of claim 1, wherein the social information comprises an address book and an address record, and the determining the target face model according to the interaction requirement information comprises:
Extracting a plurality of first person information from the address book, and determining the intimacy between each first person information and the user corresponding to the first terminal according to the address record;
determining first interactive personal information in a plurality of first personal information according to the intimacy;
acquiring a first person image corresponding to the first interactive person information;
and determining the target face model according to the first human image.
4. The method of claim 3, wherein determining first interactive personal information among the plurality of first personal information according to the affinity comprises:
respectively judging whether the intimacy between each piece of first person information and the user corresponding to the first terminal is greater than or equal to a preset intimacy;
and determining any first person information with the intimacy greater than or equal to a preset intimacy as the first interactive person information.
5. The method according to claim 1, wherein the online behavior characteristics include an attention record, a like record, a browsing record, and a comment record of a user corresponding to the first terminal, and the determining a target face model according to the interaction demand information includes:
acquiring a plurality of second person information according to the attention records, and determining the attention degree of the user corresponding to the first terminal to each second person information according to the like record, the comment record and the browsing record;
determining second interactive personal information in the plurality of second personal information according to the attention degree;
acquiring a second character image corresponding to the second interactive character information;
and determining the target human face model according to the second person image.
6. The method of claim 1, wherein the determining a target face model according to the interaction requirement information comprises:
obtaining user-defined character information uploaded by a user corresponding to the first terminal;
determining the user-defined character information as third interactive character information;
acquiring a third character image corresponding to the third interactive character information;
and determining the target human face model according to the third person image.
7. The method according to any one of claims 1 to 6, wherein the generating a chat video according to the chat information and the target face image and outputting the chat video comprises:
acquiring voiceprint information corresponding to the target face model;
generating chat audio based on the voiceprint information and the chat information;
and generating a chat video according to the chat audio and the target face image, and outputting the chat video.
8. A social method applied to a second terminal of a social system, the social system further including a first terminal communicating with the second terminal, the method comprising:
determining whether a user corresponding to the second terminal is a marked contact;
when the user corresponding to the second terminal is not the marked contact, chat information sent by the second terminal is obtained, a default chat video is generated according to the chat information and the default face model, and the default chat video is output, wherein the default face model is a formal or serious face model;
when the user corresponding to the second terminal is the marked contact, acquiring interaction demand information of the user corresponding to the first terminal and chat information input by the user corresponding to the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics;
determining a target face model according to the interaction demand information;
obtaining expression parameters of the target face model according to the chat information, wherein the expression parameters correspond to the chat information in time;
driving the expression of the target face model based on the expression parameters to obtain a target face image;
and generating a chat video according to the chat information and the target face image, and sending the chat video to the first terminal.
9. A social method applied to a server of a social system, the system further including a first terminal and a second terminal respectively communicating with the server, the method comprising:
determining whether a user corresponding to the second terminal is a marked contact;
when the user corresponding to the second terminal is not the marked contact, chat information sent by the second terminal is obtained, a default chat video is generated according to the chat information and the default face model, and the default chat video is output, wherein the default face model is a formal or serious face model;
when the user corresponding to the second terminal is the marked contact, acquiring interaction demand information of the user corresponding to the first terminal and chat information sent by the second terminal, wherein the interaction demand information comprises at least one of social information and online behavior characteristics;
determining a target face model according to the interaction demand information;
obtaining expression parameters of the target face model according to the chat information, wherein the expression parameters correspond to the chat information in time;
driving the expression of the target face model based on the expression parameters to obtain a target face image;
and generating a chat video according to the chat information and the target face image, and sending the chat video to the first terminal.
10. A social device applied to a first terminal of a social system including a second terminal communicating with the first terminal, the device comprising:
the contact person marking determining module is used for determining whether the user corresponding to the second terminal is a marked contact person or not; when the user corresponding to the second terminal is not the marked contact, chat information sent by the second terminal is obtained, a default chat video is generated according to the chat information and the default face model, and the default chat video is output, wherein the default face model is a formal or serious face model;
the first information acquisition module is used for acquiring interaction demand information of a user corresponding to the first terminal and chat information sent by the second terminal when the user corresponding to the second terminal is a marked contact, wherein the interaction demand information comprises at least one of social information and online behavior characteristics;
the first target face model determining module is used for determining a target face model according to the interaction demand information when the user corresponding to the second terminal is a marked contact;
a first expression parameter obtaining module, configured to obtain an expression parameter of the target face model according to the chat information, where the expression parameter corresponds to the chat information in terms of time;
the first target face image acquisition module is used for driving the expression of the target face model based on the expression parameters to obtain a target face image;
and the video output module is used for generating a chat video according to the chat information and the target face image and outputting the chat video.
11. A social device applied to a second terminal of a social system, the social system further including a first terminal communicating with the second terminal, the device comprising:
the contact person marking determining module is used for determining whether the user corresponding to the second terminal is a marked contact person or not; when the user corresponding to the second terminal is not the marked contact, chat information sent by the second terminal is obtained, a default chat video is generated according to the chat information and the default face model, and the default chat video is output, wherein the default face model is a formal or serious face model;
the second information acquisition module is used for acquiring interaction demand information of a user corresponding to the first terminal and chat information input by the user corresponding to the second terminal when the user corresponding to the second terminal is a marked contact, wherein the interaction demand information comprises at least one of social information and online behavior characteristics;
the second target face model determining module is used for determining a target face model according to the interaction demand information;
the second expression parameter acquisition module is used for acquiring expression parameters of the target face model according to the chat information, wherein the expression parameters correspond to the chat information in terms of time;
the second target face image acquisition module is used for driving the expression of the target face model based on the expression parameters to obtain a target face image;
and the first video sending module is used for generating a chat video according to the chat information and the target face image and sending the chat video to the first terminal.
12. A social device applied to a server of a social system, the system further including a first terminal and a second terminal respectively communicating with the server, the device comprising:
the contact person marking determining module is used for determining whether the user corresponding to the second terminal is a marked contact person or not; when the user corresponding to the second terminal is not the marked contact, chat information sent by the second terminal is obtained, a default chat video is generated according to the chat information and the default face model, and the default chat video is output, wherein the default face model is a formal or serious face model;
the third information acquisition module is used for acquiring interaction demand information of a user corresponding to the first terminal and chat information sent by the second terminal when the user corresponding to the second terminal is a marked contact, wherein the interaction demand information comprises at least one of social information and online behavior characteristics;
the third target face model determining module is used for determining a target face model according to the interaction demand information;
a third expression parameter acquisition module, configured to acquire an expression parameter of the target face model according to the chat information, where the expression parameter corresponds to the chat information in time;
the third target face image acquisition module is used for driving the expression of the target face model based on the expression parameters to obtain a target face image;
and the second video sending module is used for generating a chat video according to the chat information and the target face image and sending the chat video to the first terminal.
13. An electronic device, comprising:
a memory;
one or more processors coupled with the memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of any of claims 1-7, the method of claim 8, or the method of claim 9.
14. A computer-readable storage medium having stored thereon program code that can be invoked by a processor to perform the method of any of claims 1 to 7, the method of claim 8, or the method of claim 9.
CN201910838034.0A 2019-09-05 2019-09-05 Social contact method and device, electronic equipment and storage medium Active CN110674706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838034.0A CN110674706B (en) 2019-09-05 2019-09-05 Social contact method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910838034.0A CN110674706B (en) 2019-09-05 2019-09-05 Social contact method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110674706A CN110674706A (en) 2020-01-10
CN110674706B true CN110674706B (en) 2021-07-23

Family

ID=69076532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838034.0A Active CN110674706B (en) 2019-09-05 2019-09-05 Social contact method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110674706B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368137A (en) * 2020-02-12 2020-07-03 百度在线网络技术(北京)有限公司 Video generation method and device, electronic equipment and readable storage medium
CN111294665B (en) * 2020-02-12 2021-07-20 百度在线网络技术(北京)有限公司 Video generation method and device, electronic equipment and readable storage medium
CN112188145A (en) * 2020-09-18 2021-01-05 随锐科技集团股份有限公司 Video conference method and system, and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101931621A (en) * 2010-06-07 2010-12-29 上海那里网络科技有限公司 Device and method for carrying out emotional communication in virtue of fictional character
CN105425953A (en) * 2015-11-02 2016-03-23 小天才科技有限公司 Man-machine interaction method and system
CN107146275A (en) * 2017-03-31 2017-09-08 北京奇艺世纪科技有限公司 A kind of method and device of setting virtual image
CN108234276A (en) * 2016-12-15 2018-06-29 腾讯科技(深圳)有限公司 Interactive method, terminal and system between a kind of virtual image
CN109550256A (en) * 2018-11-20 2019-04-02 咪咕互动娱乐有限公司 Virtual role method of adjustment, device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104836977B (en) * 2014-02-10 2018-04-24 阿里巴巴集团控股有限公司 Video communication method and system during instant messaging
US20180342095A1 (en) * 2017-03-16 2018-11-29 Motional LLC System and method for generating virtual characters


Also Published As

Publication number Publication date
CN110674706A (en) 2020-01-10

Similar Documents

Publication Publication Date Title
US9665563B2 (en) Animation system and methods for generating animation based on text-based data and user information
US11036469B2 (en) Parsing electronic conversations for presentation in an alternative interface
EP3095091B1 (en) Method and apparatus of processing expression information in instant communication
CN107977928B (en) Expression generation method and device, terminal and storage medium
EP3815042B1 (en) Image display with selective depiction of motion
CN110674706B (en) Social contact method and device, electronic equipment and storage medium
EP3889912B1 (en) Method and apparatus for generating video
CN112740709A (en) Gated model for video analysis
CN110868635B (en) Video processing method and device, electronic equipment and storage medium
CN110599359B (en) Social contact method, device, system, terminal equipment and storage medium
US9087131B1 (en) Auto-summarization for a multiuser communication session
CN110674398A (en) Virtual character interaction method and device, terminal equipment and storage medium
CN108846886B (en) AR expression generation method, client, terminal and storage medium
US20150031342A1 (en) System and method for adaptive selection of context-based communication responses
CN111538456A (en) Human-computer interaction method, device, terminal and storage medium based on virtual image
WO2019085625A1 (en) Emotion picture recommendation method and apparatus
WO2007034829A1 (en) Video creating device and video creating method
CN114402355A (en) Personalized automatic video cropping
CN114567693A (en) Video generation method and device and electronic equipment
CN112115231A (en) Data processing method and device
CN111182323A (en) Image processing method, device, client and medium
US11983807B2 (en) Automatically generating motions of an avatar
US20210192824A1 (en) Automatically generating motions of an avatar
WO2022001706A1 (en) A method and system providing user interactive sticker based video call
CN117808934A (en) Data processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Social networking method, device, electronic device and storage medium

Effective date of registration: 20211008

Granted publication date: 20210723

Pledgee: Shenzhen Branch of Guoren Property Insurance Co.,Ltd.

Pledgor: SHENZHEN ZHUIYI TECHNOLOGY Co.,Ltd.

Registration number: Y2021980010410

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20221031

Granted publication date: 20210723

Pledgee: Shenzhen Branch of Guoren Property Insurance Co.,Ltd.

Pledgor: SHENZHEN ZHUIYI TECHNOLOGY Co.,Ltd.

Registration number: Y2021980010410