CN109150690A - Interaction data processing method, device, computer equipment and storage medium


Info

Publication number
CN109150690A
Authority
CN
China
Prior art keywords
virtual session
scene
user
dimensional
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710458909.5A
Other languages
Chinese (zh)
Other versions
CN109150690B (en)
Inventor
李斌
陈晓波
李磊
王俊山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710458909.5A
Publication of CN109150690A
Application granted
Publication of CN109150690B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04Real-time or near real-time messaging, e.g. instant messaging [IM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present invention relates to an interaction data processing method, device, computer equipment, and storage medium. The method includes: joining a corresponding virtual session scene through a currently logged-in first user identifier; collecting head image data; recognizing expression features in the head image data to obtain expression data; and sending the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that, in the virtual session scene, the terminal controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data. By realizing interactive communication through a controlled virtual session member performing the facial expression actions represented by expression data, the method improves, to a certain extent, the privacy and security of interactive communication compared with communication based on users' real images.

Description

Interaction data processing method, device, computer equipment and storage medium
Technical field
The present invention relates to the field of computer technology, and more particularly to an interaction data processing method, device, computer equipment, and storage medium.
Background
With the rapid development of science and technology, communication technologies have become increasingly advanced, and people's demands on the form of communication have become more diverse. Among current modes of interactive communication, video communication, which can show the movements and appearance of both communicating parties, is not as dull as voice-only or plain-text communication and is popular with users.
However, in current video communication, the displayed user images may be maliciously captured in screenshots or recordings and may be further distributed. User images are relatively private information; if they are maliciously recorded or spread, the user's privacy can be seriously harmed. Current video communication therefore carries certain privacy and security risks.
Summary of the invention
In view of the above, it is necessary to address the privacy and security risks of current video communication by providing an interaction data processing method, device, computer equipment, and storage medium.
An interaction data processing method, the method including:
joining a corresponding virtual session scene through a currently logged-in first user identifier;
collecting head image data;
recognizing expression features in the head image data to obtain expression data; and
sending the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that, in the virtual session scene, the terminal controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data.
An interaction data processing device, the device including:
a joining module, configured to join a corresponding virtual session scene through a currently logged-in first user identifier;
an image collection module, configured to collect head image data;
an expression recognition module, configured to recognize expression features in the head image data to obtain expression data; and
a control module, configured to send the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that, in the virtual session scene, the terminal controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data.
Computer equipment, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in first user identifier;
collecting head image data;
recognizing expression features in the head image data to obtain expression data; and
sending the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that, in the virtual session scene, the terminal controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data.
A storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in first user identifier;
collecting head image data;
recognizing expression features in the head image data to obtain expression data; and
sending the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that, in the virtual session scene, the terminal controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data.
With the above interaction data processing method, device, computer equipment, and storage medium, a corresponding virtual session scene is joined through the currently logged-in first user identifier; head image data is collected and recognized to obtain expression data; and the expression data is sent to the terminal corresponding to a second user identifier that has joined the virtual session scene. The terminal receiving the expression data then, in the virtual session scene, controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data. Realizing interactive communication by controlling a virtual session member to perform the facial expression actions represented by expression data improves, to a certain extent, the privacy and security of interactive communication compared with communication based on users' real images.
An interaction data processing method, the method including:
joining a corresponding virtual session scene through a currently logged-in second user identifier;
receiving expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene;
extracting expression feature values from the expression data; and
in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
An interaction data processing device, the device including:
a joining module, configured to join a corresponding virtual session scene through a currently logged-in second user identifier;
an expression feature extraction module, configured to receive expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene, and to extract expression feature values from the expression data; and
a control module, configured to, in the virtual session scene, control the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
Computer equipment, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in second user identifier;
receiving expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene;
extracting expression feature values from the expression data; and
in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
A storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
joining a corresponding virtual session scene through a currently logged-in second user identifier;
receiving expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene;
extracting expression feature values from the expression data; and
in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
With the above interaction data processing method, device, computer equipment, and storage medium, a corresponding virtual session scene is joined through the currently logged-in second user identifier; expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene is received; expression feature values are extracted from the expression data; and, in the virtual session scene, the virtual session member corresponding to the first user identifier is controlled to perform the facial expression actions represented by the expression feature values. Realizing interactive communication by controlling a virtual session member to perform the facial expression actions represented by expression data improves, to a certain extent, the privacy and security of interactive communication compared with communication based on users' real images.
Brief description of the drawings
Fig. 1 is a diagram of the application environment of an interaction data processing method in one embodiment;
Fig. 2 is a schematic diagram of the internal structure of computer equipment in one embodiment;
Fig. 3 is a schematic flowchart of an interaction data processing method in one embodiment;
Fig. 4A is a schematic diagram of the interface of a virtual session scene in one embodiment;
Fig. 4B is a schematic diagram of the interface of a virtual session scene in another embodiment;
Fig. 5 is a sequence diagram of an interaction data processing method in one embodiment;
Fig. 6 is an architecture diagram of an interaction data processing method in one embodiment;
Fig. 7 is a schematic flowchart of a virtual session scene display step in one embodiment;
Fig. 8 is a schematic flowchart of a viewing-angle adjustment operation step in one embodiment;
Fig. 9 is a schematic flowchart of an interaction data processing method in another embodiment;
Fig. 10 is a schematic flowchart of an interaction data processing method in yet another embodiment;
Fig. 11 is a schematic flowchart of a virtual session scene display step in another embodiment;
Fig. 12 is a structural block diagram of an interaction data processing device in one embodiment;
Fig. 13 is a structural block diagram of an interaction data processing device in another embodiment;
Fig. 14 is a structural block diagram of an interaction data processing device in yet another embodiment;
Fig. 15 is a structural block diagram of an interaction data processing device in a further embodiment;
Fig. 16 is a structural block diagram of an interaction data processing device in a still further embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to illustrate the present invention and not to limit it.
Fig. 1 is a diagram of the application environment of an interaction data processing method in one embodiment. Referring to Fig. 1, the application environment of the interaction data processing method includes a first terminal 110, a second terminal 120, and a server 130. The first terminal 110 and the second terminal 120 are terminals on which an application program for realizing the virtual session scene function is installed; both can be used to send expression data and to receive expression data. The server 130 may be an independent physical server or a server cluster composed of multiple physical servers; it may include an open service platform and may also include an access server for accessing the open service platform. The first terminal 110 and the second terminal 120 may be the same terminal or different terminals. A terminal may be a mobile terminal or a desktop computer, and a mobile terminal may include at least one of a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
The first terminal 110 can join a corresponding virtual session scene through a currently logged-in first user identifier, collect head image data, and recognize the expression features in the head image data to obtain expression data. The first terminal 110 can send the expression data to the server 130, which forwards it to the second terminal 120 corresponding to a second user identifier that has joined the virtual session scene. In the virtual session scene, the second terminal 120 controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data.
It can be understood that, in other embodiments, the first terminal 110 may send the expression data directly to the second terminal 120 in a point-to-point manner, without forwarding through the server 130.
Fig. 2 is a schematic diagram of the internal structure of computer equipment in one embodiment. The computer equipment may be the first terminal 110 or the second terminal 120 in Fig. 1. Referring to Fig. 2, the computer equipment includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, an input device, and an image collection device, connected through a system bus. The non-volatile storage medium can store an operating system and computer-readable instructions that, when executed, may cause the processor to perform an interaction data processing method. The processor provides computing and control capabilities and supports the operation of the entire computer equipment. The internal memory can store computer-readable instructions that, when executed by the processor, may cause the processor to perform an interaction data processing method. The network interface is used for network communication, such as transmitting expression data. The display screen may be a liquid crystal display or an electronic-ink display; the input device may be a touch layer covering the display screen, a key, trackball, or trackpad arranged on the housing, or an external keyboard, trackpad, mouse, or the like. The touch layer and the display screen constitute a touch screen. The image collection device may be a camera.
Those skilled in the art will understand that the structure shown in Fig. 2 is merely a block diagram of the part of the structure relevant to the solution of the present application and does not limit the computer equipment to which the solution is applied; specific computer equipment may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
Fig. 3 is a schematic flowchart of an interaction data processing method in one embodiment. The interaction data processing method may be applied to the first terminal 110 and/or the second terminal 120 in Fig. 1. This embodiment is described mainly by taking application to the first terminal 110 in Fig. 1 as an example. Referring to Fig. 3, the method specifically includes the following steps:
S302: join a corresponding virtual session scene through the currently logged-in first user identifier.
A virtual session scene is a session scene provided for virtual session members: when the members who have joined the virtual session scene are visually displayed, they are all displayed with the images of virtual session members.
In one embodiment, the virtual session scene may be a virtual room. The virtual session scene may be a three-dimensional virtual session scene or a two-dimensional virtual session scene. A virtual session scene can be created based on a session; specifically, it can be created based on a multi-party session (a session with three or more members) or based on a two-party session (a session with only two members). In one embodiment, the virtual session scene may also include displayed background information, where the displayed background information may include a background picture or a three-dimensional background model, and the background picture may be a two-dimensional picture or a three-dimensional picture. The displayed background information may be real background information or virtual background information.
In one embodiment, the virtual session scene may be a real-time virtual session scene, that is, a virtual session scene that realizes real-time communication. For example, a WeChat group is a multi-party session; when a real-time call is created within the group, the members who join the call can be displayed as virtual images, i.e., virtual session members are displayed while the real-time call is carried out, which constitutes the above virtual session scene.
A virtual session member is the virtual image with which a member of the virtual session scene is displayed. It can be understood that a virtual image is a fictional image, distinct from a real image. Virtual session members include virtual human figures and may also include virtual images of animals, plants, or other things. A virtual session member may be a three-dimensional virtual session member or a two-dimensional virtual session member. A virtual session member may be a default virtual image (for example, a virtual session member initial model), or a virtual image obtained by combining a virtual session member initial model with user features (such as the user's facial features) and/or user-defined attributes (such as clothing attributes).
The currently logged-in first user identifier is the first user identifier currently logged in to the application program for realizing the virtual session scene, which may be an instant messaging application, a social application, a game application, or the like. The terminal corresponding to the currently logged-in first user identifier may be referred to as the 'first terminal'.
In one embodiment, the first terminal may request the server to add the currently logged-in first user identifier to the members list of the corresponding virtual session scene, thereby joining the corresponding virtual session scene through the currently logged-in first user identifier. After joining the virtual session scene, the first terminal can communicate with the terminals corresponding to the other user identifiers that have joined the virtual session scene, for example by sending expression data to those terminals. It can be understood that the user identifiers that have joined the virtual session scene may be the user identifiers in the members list of that virtual session scene.
In one embodiment, the first terminal may also display, in the virtual session scene, the members of the virtual session scene as virtual session members with virtual images. The virtual session members displayed by the first terminal may or may not include the virtual session member corresponding to the first user identifier currently logged in through the first terminal. Not displaying the virtual session member corresponding to the first user identifier in the virtual session scene displayed by the first terminal does not affect the interactive communication between the first terminal and the terminals corresponding to the other members, and also saves the system's computing and display resources.
S304: collect head image data.
Head image data is image data obtained by performing real-time image collection on a head, and may include facial image data and head action image data. Head actions include head twisting movements, such as lowering the head, raising the head, or twisting the head to the left or right.
Specifically, the first terminal may collect head image data by calling the local camera, which may be the front camera or the rear camera of the device. It can be understood that the collected head image data may be obtained by performing image collection on any head appearing in the image collection region and is not limited to the user corresponding to the first user identifier.
In one embodiment, the first terminal may detect the number of members in the virtual session scene and perform the head image data collection of step S304 only when the number of members in the virtual session scene is not less than a preset quantity threshold, which may be 2. It can be understood that the number of members in the virtual session scene mentioned here may include the currently logged-in first user identifier itself.
S306: recognize the expression features in the head image data to obtain expression data.
Expression features are features that can express emotion or mood, and include facial expression features and posture expression features. A facial expression is expressed through the facial organs, such as raising the eyebrows or blinking; a posture expression is expressed through body movements, such as turning the head.
In one embodiment, the first terminal may parse the head image data, recognize the facial expression features and/or head action expression features in it, and obtain expression data. Expression data is data that can represent corresponding facial expression actions.
The expression data may include a sequence of expression feature values arranged in order. In one embodiment, the position or order of each expression feature value characterizes the expression type it corresponds to. For example, if the expression type at the first position is 'crying', the expression feature value at the first position characterizes the degree of crying.
The expression data may also include expression type identifiers and corresponding expression feature values. An expression type is a category along the dimension of expression-action presentation, such as opening the mouth, blinking, laughing, crying, turning the head, or nodding. It can be understood that the expression types listed above are merely illustrative and do not limit the categories of expressions; expression types can be set according to actual needs.
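For concreteness, a minimal sketch of the two encodings just described; the type names, their order, and the value ranges are illustrative assumptions, not something specified by this disclosure.

```python
# Hypothetical fixed order of expression types; position i in the value
# sequence corresponds to EXPRESSION_TYPES[i] (an assumed ordering).
EXPRESSION_TYPES = ["mouth_open", "blink", "smile", "cry", "head_turn", "nod"]

# Encoding 1: an ordered sequence of expression feature values. The value at
# a position characterizes the degree/amplitude of that position's type;
# 0.0 is assumed here to mean "do not trigger this action".
expression_data_positional = [10.0, 0.0, 0.0, 0.0, 15.0, 0.0]

# Encoding 2: explicit (expression type identifier, feature value) pairs,
# carrying only the types that were actually recognized.
expression_data_typed = {"mouth_open": 10.0, "head_turn": 15.0}
```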
S308: send the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that, in the virtual session scene, the terminal controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data.
The second user identifiers that have joined the virtual session scene are all or some of the user identifiers in the members list of the virtual session scene other than the first user identifier. In this embodiment, there may be one or more second user identifiers.
The first terminal may send the expression data to the server, and the server forwards it to the terminals corresponding to the second user identifiers that have joined the virtual session scene. The first terminal may also send the expression data directly, in a point-to-point manner, to the terminal corresponding to a second user identifier that has joined the virtual session scene; for example, when the virtual session scene is created for a two-party session in a point-to-point manner, the first terminal can send the expression data directly to the terminal corresponding to the second user identifier that has joined the virtual session scene.
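Taken together, steps S304 to S308 amount to a capture-recognize-send loop on the first terminal. The following is a hedged sketch; the `camera`, `recognizer`, and `transport` objects and their methods are assumed placeholders, not an API from this disclosure.

```python
def sender_loop(camera, recognizer, transport, first_user_id, scene_id):
    """Capture head image data, recognize expression features, and forward
    the resulting expression data toward the second users' terminals."""
    while transport.session_open(scene_id):
        frame = camera.capture()                       # S304: head image data
        expression_data = recognizer.recognize(frame)  # S306: expression data
        if expression_data:
            # S308: relayed by the server, or sent peer-to-peer when the
            # scene was created for a two-party point-to-point session.
            transport.send(scene_id, first_user_id, expression_data)
```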
In one embodiment, sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that, in the virtual session scene, the terminal controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data, includes: sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal extracts, from the expression data, the expression feature values corresponding to the recognized expression types and, in the virtual session scene, controls the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the extracted expression feature values.
In one embodiment, the terminal corresponding to the second user identifier may determine the expression type corresponding to each extracted expression feature value and, in the virtual session scene, control the virtual session member corresponding to the first user identifier to perform the corresponding facial expression action according to the expression control logic code of the determined expression type and the extracted expression feature value. For example, if the facial expression action represented by the expression data is 'open the mouth by 10 degrees', the virtual session member corresponding to the first user identifier is controlled to perform the action of opening the mouth by 10 degrees.
In one embodiment, the terminal corresponding to the second user identifier may also generate corresponding texture information according to the expression feature value and the corresponding expression type, and display the texture information, in the virtual session scene, at the expression display position of the virtual session member corresponding to the first user identifier. For example, when the facial expression action represented by the expression data is 'crying', the terminal corresponding to the second user identifier can generate 'teardrop' texture information corresponding to 'crying' according to the expression data and display it below the eyes of the virtual session member corresponding to the first user identifier.
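Combining the two embodiments above, a sketch of how a receiving terminal might route each feature value either to model animation or to a texture overlay; the member API and the 0.0 "do not trigger" sentinel are assumptions carried over from the earlier sketch.

```python
def apply_expression(scene, first_user_id, expression_data):
    """Drive the sender's virtual session member on the receiving terminal."""
    member = scene.member_for(first_user_id)
    for expr_type, value in expression_data.items():
        if value == 0.0:          # assumed "do not trigger" sentinel
            continue
        if expr_type == "cry":
            # Texture route: overlay teardrop texture information at the
            # member's expression display position.
            member.show_texture("teardrop", anchor="below_eyes")
        else:
            # Animation route: run the expression control logic for this
            # type, e.g. open the mouth by `value` degrees.
            member.animate(expr_type, amplitude=value)
```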
Fig. 4A is a schematic diagram of the interface of the virtual session scene in one embodiment. The virtual session scene currently has only two members, user a and user b. Assuming Fig. 4A is the interface of the virtual session scene displayed on the terminal corresponding to user a, the image collection region in the upper-left corner of Fig. 4A shows the real head image of the user using the terminal corresponding to user a, and the virtual session member B corresponding to user b is displayed in the interface.
Fig. 4B is a schematic diagram of the interface of the virtual session scene in another embodiment. The virtual session scene currently has multiple members. Assuming Fig. 4B is the interface of the virtual session scene displayed on the terminal corresponding to user a, the image collection region in the upper-left corner shows the real head image of the user using the terminal corresponding to user a; the three virtual human figures displayed in the interface are virtual session members, of which virtual session member B corresponds to user b and performs the facial expression action 'blinking the right eye' represented by the expression data.
With the above interaction data processing method, a corresponding virtual session scene is joined through the currently logged-in first user identifier; head image data is collected and recognized to obtain expression data; and the expression data is sent to the terminal corresponding to a second user identifier that has joined the virtual session scene. The terminal receiving the expression data then, in the virtual session scene, controls the virtual session member corresponding to the first user identifier to perform the facial expression action represented by the expression data. Realizing interactive communication by controlling a virtual session member to perform the facial expression actions represented by expression data improves, to a certain extent, the privacy and security of interactive communication compared with communication based on users' real images.
In addition, controlling a virtual session member to perform the facial expression actions represented by expression data is another way of presenting the real expressions of communicating users, enabling users to recognize online users through facial expression actions and providing a new mode of interaction.
In one embodiment, step S302 includes: obtaining the multi-party session identifier corresponding to the currently logged-in first user identifier; and sending the multi-party session identifier and the first user identifier to the server, so that the server adds the first user identifier to the members list of the virtual session scene identified by the multi-party session identifier.
A multi-party session identifier uniquely identifies a multi-party session, i.e., a session whose number of members is greater than or equal to 3. A multi-party session may be a group, a temporary multi-person chat session, or another type of multi-party session.
It can be understood that the currently logged-in first user identifier is a member of the multi-party session corresponding to the multi-party session identifier. The virtual session scene identified by the multi-party session identifier is, in effect, a virtual session scene created for the multi-party session based on the multi-party session identifier. The virtual session scene may be directly identified by the multi-party session identifier, i.e., the unique identifier of the virtual session scene is the multi-party session identifier itself. Alternatively, it may be indirectly identified by the multi-party session identifier, i.e., the unique identifier of the virtual session scene is a virtual session scene identifier that uniquely corresponds to the multi-party session identifier; since the virtual session scene identifier can be determined from the multi-party session identifier, and the corresponding virtual session scene determined in turn, the multi-party session identifier can indirectly and uniquely identify the virtual session scene.
Specifically, a user can log in to the application program for realizing the virtual session scene with the first user identifier and, after a successful login, open a multi-party session interface on the first terminal, i.e., the interface of the multi-party session corresponding to the multi-party session identifier of the first user identifier. In the opened multi-party session interface, the user can initiate an operation of joining the virtual session scene. In response to the operation, the first terminal obtains the multi-party session identifier corresponding to the currently logged-in first user identifier and sends the multi-party session identifier and the first user identifier to the server; the server adds the first user identifier to the members list of the virtual session scene identified by the multi-party session identifier, thereby joining the corresponding virtual session scene through the first user identifier.
In one embodiment, the server may return the access information of the virtual session scene identified by the multi-party session identifier to the first terminal, and the first terminal can join the virtual session scene according to the access information. The access information includes an access IP address and a port.
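A minimal sketch of this join flow under stated assumptions: the request and field names are invented for illustration, and the returned access information is taken to be the IP address and port mentioned above.

```python
import socket

def join_virtual_scene(server_api, multi_session_id, first_user_id):
    """Request to join the scene identified by the multi-party session
    identifier; connect a data channel using the returned access info."""
    reply = server_api.request("join_scene", {
        "session_id": multi_session_id,   # identifies the scene
        "user_id": first_user_id,         # added to the members list
    })
    # Access information: an access IP address and port, as described above.
    return socket.create_connection((reply["ip"], reply["port"]))
```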
In the above embodiment, joining the currently logged-in first user identifier to a virtual session scene created on the basis of the corresponding multi-party session, and then realizing a mode of interactive communication in which virtual session members in the virtual session scene perform the facial expression actions corresponding to expression data, amounts to an improvement built on multi-party sessions and proposes a new mode of interaction.
Fig. 5 is a sequence diagram of the interaction data processing method in one embodiment, which specifically includes the following steps:
1) The first terminal opens a session and sends the multi-party session identifier corresponding to the first user identifier to the server, to request to join the virtual session scene.
2) The server creates the virtual session scene identified by the multi-party session identifier and allocates corresponding access information for it.
3) The server returns the allocated access information to the first terminal.
4) The first terminal establishes a data channel with the server according to the access information and joins the virtual session scene.
5) The terminals corresponding to the other members of the multi-party session each establish a data channel with the server in the manner described above and join the virtual session scene.
6) The server notifies the terminals corresponding to the members who have joined the virtual session scene of the number of members in the scene.
7) When the number of members in the current virtual session scene is greater than or equal to 2, each terminal starts its local image collection device, collects head image data, and recognizes it to obtain expression data.
8) Each terminal sends its expression data to the server.
9) The server forwards the expression data to the terminals corresponding to the other members of the virtual session scene (a relay sketch follows this list).
10) In the virtual session scene, a terminal receiving expression data controls the virtual session member corresponding to the expression data to perform the facial expression action represented by the expression data.
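As a rough illustration of steps 8) to 10), a server-side relay sketch; the `scene.connections` mapping and the message shape are assumptions.

```python
def relay_expression_data(scene, sender_id, expression_data):
    """Forward one member's expression data to the terminals corresponding
    to all other members of the same virtual session scene."""
    for member_id, connection in scene.connections.items():
        if member_id != sender_id:         # do not echo back to the sender
            connection.send({"from": sender_id, "expression": expression_data})
```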
In one embodiment, sending the multi-party session identifier and the first user identifier to the server so that the server adds the first user identifier to the members list of the virtual session scene identified by the multi-party session identifier includes: sending the multi-party session identifier and the first user identifier to the server, so that the server, when the virtual session scene identified by the multi-party session identifier already exists, adds the first user identifier to the members list of that virtual session scene; or sending the multi-party session identifier and the first user identifier to the server, so that the server, when the virtual session scene identified by the multi-party session identifier does not exist, creates the virtual session scene identified by the multi-party session identifier and adds the first user identifier to the members list of the created virtual session scene.
Specifically, the first terminal sends the multi-party session identifier and the first user identifier to the server, requesting the server to add the first user identifier to the virtual session scene identified by the multi-party session identifier. The server can detect whether a virtual session scene corresponding to the multi-party session identifier exists.
When a virtual session scene identified by the multi-party session identifier already exists, the server can add the first user identifier to the members list of that virtual session scene, thereby joining the first user identifier to the virtual session scene identified by the multi-party session identifier.
When no virtual session scene corresponding to the multi-party session identifier exists, the server can create a new virtual session scene according to the multi-party session identifier, either using the multi-party session identifier as the direct identifier that uniquely identifies the newly created virtual session scene, or generating a virtual session scene identifier that uniquely corresponds to both the multi-party session identifier and the newly created virtual session scene, so that the multi-party session identifier serves as an indirect identifier uniquely identifying the created virtual session scene. Further, the server can add the first user identifier to the members list of the created virtual session scene.
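A sketch of this join-or-create behavior, assuming an in-memory registry keyed directly by the multi-party session identifier (the indirect-identifier variant would add one more lookup table).

```python
scenes = {}  # multi-party session identifier -> scene (assumed in-memory store)

def get_or_create_scene(multi_session_id, user_id):
    """Reuse the scene keyed by the multi-party session identifier,
    creating it on first join; then add the user to the members list."""
    scene = scenes.get(multi_session_id)
    if scene is None:
        # First join: create the scene identified by the session identifier.
        scene = {"id": multi_session_id, "members": []}
        scenes[multi_session_id] = scene
    if user_id not in scene["members"]:
        scene["members"].append(user_id)   # add to the members list
    return scene
```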
In one embodiment, the virtual session scene is a real-time virtual session scene. The first terminal can send the multi-party session identifier and the first user identifier, through a real-time signaling channel, to a real-time signaling service program on the server, to request to join the real-time virtual session scene. After receiving the request, the real-time signaling service program detects whether a real-time virtual session scene corresponding to the multi-party session identifier currently exists. If not, the real-time signaling service program can create, according to the multi-party session identifier, a new real-time virtual session scene identified by the multi-party session identifier, apply to a real-time data service program for access information corresponding to the virtual session scene, and return the access information to the first terminal. If a real-time virtual session scene corresponding to the multi-party session identifier already exists, the real-time signaling service program can directly return the access information corresponding to that real-time virtual session scene to the first terminal. The access information includes an access IP address and a port.
According to the access information, the first terminal establishes a data channel with the real-time data service program, and the real-time signaling service program adds the first user identifier to the members list of the created virtual session scene. The terminal corresponding to the second user identifier can likewise join the real-time virtual session scene in the above manner and establish a data channel with the real-time data service program. Expression data can then be sent and received over the data channels established between the first terminal and the terminals corresponding to the second user identifiers. Fig. 6 is an architecture diagram of the interaction data processing method in one embodiment.
In the above embodiment, the creation of the virtual session scene and the joining process are integrated: a user only needs to send the corresponding multi-party session identifier to the server through the first terminal to request to join the virtual session scene and, regardless of whether a virtual session scene corresponding to the multi-party session identifier currently exists on the server, the user will join the virtual session scene identified by that multi-party session identifier, which saves the separate step of creating a virtual session scene. In addition, the same set of rules applies to all users, both the user who first creates the virtual session scene and the users who apply to join it after creation, avoiding the redundancy caused by multiple sets of rules and improving the applicability of the logic rules.
In one embodiment, step S306 includes: recognizing the expression features in the head image data to obtain expression types and corresponding expression feature values; and generating expression data that includes the expression feature values corresponding to the recognized expression types.
An expression type is a category along the dimension of action presentation, including types such as opening the mouth, blinking, laughing, crying, turning the head, or nodding. The first terminal recognizes at least one expression type from the expression features in the head image data. An expression feature value characterizes the amplitude and/or degree of the facial expression action corresponding to an expression type. For example, for the expression type 'crying', different expression feature values correspond to different degrees of crying, such as sobbing or wailing. For another example, for the expression type 'turning the head to the left', the expression feature value may be the angle of the head turn: the larger the angle, the larger the amplitude of the turn.
In one embodiment, generating the expression data including the expression feature values corresponding to the recognized expression types includes: combining the expression feature values corresponding to the recognized expression types to obtain the expression data.
Specifically, the first terminal may directly combine the recognized expression types and the corresponding expression feature values to obtain the expression data. The first terminal may also add each expression feature value corresponding to a recognized expression type to the position corresponding to that expression type, thereby generating the expression data; it can be understood that the terminal corresponding to the second user identifier can then determine, from the positions in the expression data, the expression types they correspond to. For example, if the expression type 'opening the mouth' corresponds to the first position, the expression feature value '10 degrees' corresponding to 'opening the mouth' is added to the first position; if the expression type 'turning the head to the left' corresponds to the second position, the expression feature value '15 degrees' corresponding to 'turning the head to the left' is added to the second position; and so on, the expression feature values are combined to generate the corresponding expression data.
It can be understood that, in this embodiment, the expression feature values included in the generated expression data may be only those corresponding to the recognized expression types. For example, if only the expression types 'turning the head to the left' and 'opening the mouth' are recognized, the expression data includes only the expression feature values corresponding to 'turning the head to the left' and 'opening the mouth'.
In another embodiment, the recognized expression types belong to a preset expression type set. Generating the expression data including the expression feature values corresponding to the recognized expression types includes: assigning, to the expression types in the preset expression type set that were not recognized, expression feature values indicating that the corresponding facial expression actions are not to be triggered; and combining the expression feature values of the expression types according to the preset order of the expression types in the preset expression type set, to constitute the expression data.
In this embodiment, an expression type set is preset in the first terminal, and the recognized expression types (one or more) belong to this preset expression type set.
An expression feature value indicating that a corresponding facial expression action is not to be triggered causes the target virtual session member not to trigger that facial expression action.
Specifically, for the expression types in the preset expression type set that were not recognized, the first terminal can assign expression feature values that do not trigger the corresponding facial expression actions, and combine the expression feature values of the expression types according to the preset order of the expression types in the preset expression type set to constitute the expression data. It can be understood that expression data composed in this way can include only expression feature values, without including expression types or expression type identifiers. When the terminal corresponding to the second user identifier controls the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression data, it can determine, from the order of the expression feature values in the expression data, the expression type corresponding to each value and then trigger the facial expression action represented by that value. This keeps the expression data very small, ensuring its transmission efficiency and transmission quality.
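A sketch of composing this compact, order-only expression data: unrecognized types receive a "do not trigger" value, only the values (no type identifiers) are transmitted, and the receiver recovers each type from its position. The type list and the 0.0 sentinel are assumptions.

```python
EXPRESSION_TYPES = ["mouth_open", "blink", "smile", "cry", "head_turn", "nod"]
NO_TRIGGER = 0.0  # assumed value meaning "do not trigger this action"

def pack_expression(recognized):
    """recognized: dict mapping recognized expression types to feature
    values; unrecognized types get NO_TRIGGER, and only values are sent."""
    return [recognized.get(t, NO_TRIGGER) for t in EXPRESSION_TYPES]

def unpack_expression(values):
    """Receiving side: recover each value's expression type from its order."""
    return {t: v for t, v in zip(EXPRESSION_TYPES, values) if v != NO_TRIGGER}

# e.g. only "mouth_open" (10 degrees) and "head_turn" (15 degrees) recognized:
payload = pack_expression({"mouth_open": 10.0, "head_turn": 15.0})
# payload == [10.0, 0.0, 0.0, 0.0, 15.0, 0.0]
```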
In one embodiment, the method further includes: obtaining expression data sent by the terminal corresponding to the second user identifier; and, in the virtual session scene, controlling the virtual session member corresponding to the second user identifier to perform the facial expression action represented by the obtained expression data.
Specifically, the terminal corresponding to the second user identifier can also collect head image data, recognize the expression features in the collected head image data to obtain corresponding expression data, and send the expression data to the first terminal. The first terminal can receive, directly in a point-to-point manner, the expression data sent by the terminal corresponding to the second user identifier, or receive that expression data as forwarded by the server.
In one embodiment, in the virtual session scene, the first terminal can control the virtual session member corresponding to the second user identifier to perform the facial expression action represented by the expression data. For example, if the facial expression action represented by the expression data is 'open the mouth by 10 degrees', the virtual session member corresponding to the second user identifier is controlled to perform the action of opening the mouth by 10 degrees.
In one embodiment, the first terminal can also generate corresponding texture information according to the expression data and display the texture information, in the virtual session scene, at the expression display position of the virtual session member corresponding to the second user identifier. For example, when the facial expression action represented by the expression data is 'crying', the first terminal can generate 'teardrop' texture information corresponding to 'crying' according to the expression data and display it below the eyes of the virtual session member corresponding to the second user identifier.
In one embodiment, the first terminal can extract expression feature values from the expression data and, in the virtual session scene, control the virtual session member corresponding to the second user identifier to perform the facial expression actions represented by the expression feature values.
Specifically, the first terminal can determine the expression types corresponding to the extracted expression feature values and, in the virtual session scene, control the virtual session member corresponding to the second user identifier to perform the corresponding facial expression actions according to the expression control logic code of the determined expression types and the extracted expression feature values. The first terminal can also generate corresponding texture information according to the expression feature values and the corresponding expression types, and display the texture information, in the virtual session scene, at the expression display position of the virtual session member corresponding to the second user identifier.
It can be understood that, when the expression data sent by the terminal corresponding to the same second user identifier includes expression feature values corresponding to multiple recognized expression types, the first terminal can control the virtual session member corresponding to the second user identifier to perform the facial expression actions represented by the multiple expression feature values simultaneously. For example, if the expression data includes expression feature values corresponding to 'turning the head to the left' and 'opening the mouth', the virtual session member can be controlled to perform the facial expression actions of 'turning the head to the left' and 'opening the mouth' at the same time.
In the above embodiment, controlling a virtual session member to perform the facial expression actions represented by expression data is another way of presenting the real expressions of communicating users, enabling users to recognize online users through facial expression actions and providing a new mode of interaction.
In one embodiment, before joining the corresponding virtual session scene through the currently logged-in first user identifier, the method further includes: obtaining user face image data corresponding to the currently logged-in first user identifier; and generating, according to the user face image data and a virtual session member initial model, the virtual session member corresponding to the first user identifier.
The virtual session member initial model is a default virtual image model.
Specifically, the first terminal can perform real-time face image collection on the user corresponding to the currently logged-in first user identifier to obtain the user face image data. The first terminal can also obtain a picture (for example, a photo) of the user corresponding to the currently logged-in first user identifier and extract face image data from the picture to obtain the user face image data. The user corresponding to the first user identifier is the user uniquely identified by the first user identifier.
The first terminal can send the user face image data to the server, and the server generates, according to the user face image data and the virtual session member initial model, the virtual session member corresponding to the first user identifier. Alternatively, the virtual session member initial model can be set on the first terminal, which then generates the virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
In one embodiment, generating the virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model includes: parsing the user face image data to generate corresponding face texture information, and superimposing the generated face texture information on the virtual session member initial model to obtain the virtual session member corresponding to the first user identifier.
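Schematically, the generation step might look as follows; the face-parsing helper is a placeholder for whatever face-reconstruction technique an implementation actually uses, and all names are assumptions.

```python
def extract_face_texture(face_image):
    """Placeholder for a real face-parsing step (implementation-specific,
    not prescribed here): returns face texture information for the image."""
    raise NotImplementedError("face parsing is implementation-specific")

def build_session_member(user_face_image, initial_model):
    """Parse the user's face image into face texture information and
    superimpose it on the virtual session member initial model."""
    face_texture = extract_face_texture(user_face_image)
    member = initial_model.clone()      # keep the default model unchanged
    member.apply_texture(face_texture)  # superimpose onto the initial model
    return member
```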
It can be understood that the virtual session members corresponding to the other user identifiers in the virtual session scene may likewise be obtained, according to the above method, from the user image data corresponding to each user identifier and the virtual session member initial model.
In the above embodiment, the virtual session members obtained from the user image data corresponding to each user identifier and the virtual session member initial model characterize the corresponding members more clearly, facilitating recognition during interactive communication and thus helping to improve the efficiency and validity of users' interactive communication.
In one embodiment, the method further includes: obtaining the virtual session member corresponding to the second user identifier; determining the distribution position of the virtual session member in the virtual session scene; obtaining the background picture corresponding to the virtual session scene; distributing the obtained virtual session member at the corresponding distribution position; and superimposing it on the background picture for display, composing the virtual session scene.
Here, the background picture is the picture used as the display background. The background picture may be a two-dimensional background picture or a three-dimensional background picture, and it may be a virtual background picture or a real background picture. A virtual background picture is a picture displaying a virtual scene; for example, what is shown in a comic is a kind of virtual scene. A real background picture is a picture displaying a real scene; for example, a photo taken of a real scene displays that real scene.
Specifically, the first terminal may obtain the virtual session member corresponding to the second user identifier from the server, that is, obtain the avatar data corresponding to the second user identifier. The virtual session member may be a three-dimensional virtual session member or a two-dimensional virtual session member.
The first terminal may determine, according to the number of second user identifiers in the members list of the virtual session scene, the size of a geometric figure used for distributing virtual session members, and select positions on the geometric figure matching that number, so as to determine the distribution position, in the virtual session scene, of the virtual session member corresponding to each second user identifier. The positions matching the number may be selected at random on the geometric figure, or according to a preset position selection rule.
For example, if the number of second user identifiers is 5, the size of the geometric figure for distributing virtual session members is determined according to the number 5, and 5 positions are then selected on the geometric figure; each position is the distribution position, in the virtual session scene, of the virtual session member corresponding to one second user identifier.
In one embodiment, the first terminal may also determine the size of the geometric figure for distributing virtual session members according to the number of all user identifiers in the members list of the virtual session scene, select positions matching that number on the geometric figure, and thereby determine the distribution positions, in the virtual session scene, of the virtual session members corresponding to the second user identifiers.
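As a concrete illustration of sizing the geometric figure by member count and selecting positions by a preset rule, here is a minimal Python sketch; the choice of a circle as the figure, the radius formula, and even angular spacing as the selection rule are all assumptions made for illustration.

    import math

    def distribution_positions(member_count, base_radius=1.0, spacing=0.4):
        """Size a circle by the member count and pick evenly spaced positions on it."""
        if member_count == 0:
            return []
        radius = base_radius + spacing * member_count  # figure size grows with count
        step = 2 * math.pi / member_count              # preset rule: even spacing
        return [(radius * math.cos(i * step), radius * math.sin(i * step))
                for i in range(member_count)]

    # distribution_positions(5) yields 5 positions, one per virtual session member

Random selection would simply replace the even angles with randomly chosen ones on the same circle.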
The first terminal may distribute the obtained virtual session members corresponding to the second user identifiers at the corresponding distribution positions and superimpose them on the obtained background picture, so as to compose the virtual session scene, and then output and display the virtual session scene.
In the above embodiment, superimposing the virtual session members on the background picture to compose the displayed virtual session scene enriches the virtual session scene, makes the virtual session scene displayed during interaction closer to a real-life session scene, and realizes diversity in the modes of interaction.
In one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. As shown in FIG. 7, the method further includes a virtual session scene display step, which specifically includes the following steps:
S702: obtain the three-dimensional virtual session member corresponding to the second user identifier.
Specifically, the first terminal may obtain the three-dimensional virtual session member corresponding to the second user identifier from the server, that is, obtain the three-dimensional avatar data corresponding to the second user identifier.
S704: determine the distribution position of the three-dimensional virtual session member in the three-dimensional virtual session scene.
The first terminal may determine, according to the number of second user identifiers in the members list of the three-dimensional virtual session scene, the size of a geometric figure used for distributing three-dimensional virtual session members, and select positions on the geometric figure matching that number, so as to determine the distribution position, in the three-dimensional virtual session scene, of the three-dimensional virtual session member corresponding to each second user identifier. The positions matching the number may be selected at random on the geometric figure, or according to a preset position selection rule.
For example, if the number of second user identifiers is 5, the size of the geometric figure for distributing three-dimensional virtual session members is determined according to the number 5, and 5 positions are then selected on the geometric figure; each position is the distribution position, in the three-dimensional virtual session scene, of the three-dimensional virtual session member corresponding to one second user identifier.
In one embodiment, the first terminal may also determine the size of the geometric figure according to the number of all user identifiers in the members list of the three-dimensional virtual session scene, select positions matching that number on the geometric figure, and determine the distribution positions, in the three-dimensional virtual session scene, of the three-dimensional virtual session members corresponding to the second user identifiers.
S706: obtain the three-dimensional background model corresponding to the three-dimensional virtual session scene.
Here, the three-dimensional background model is the three-dimensional model used as the display background. The three-dimensional background model may be a three-dimensional virtual background model, which is a model displaying a three-dimensional virtual scene, or a three-dimensional real background model, which is a model displaying a three-dimensional real scene.
S708: distribute the three-dimensional virtual session members at the corresponding distribution positions and combine them with the three-dimensional background model for display, composing the three-dimensional virtual session scene.
The first terminal may distribute the obtained three-dimensional virtual session members corresponding to the second user identifiers at the corresponding distribution positions and combine them with the three-dimensional background model, so as to compose the three-dimensional virtual session scene, and then output and display the three-dimensional virtual session scene.
In the above embodiment, combining the three-dimensional virtual session members with the three-dimensional background model to compose the displayed three-dimensional virtual session scene makes the three-dimensional virtual session scene displayed during interaction even closer to a real-life session scene, and realizes diversity in the modes of interaction.
As shown in FIG. 8, in one embodiment, the method further includes a viewing angle adjustment step, which specifically includes the following steps:
S802: detect a touch operation acting on the three-dimensional virtual session scene and obtain a touch trajectory.
Specifically, the user may perform a touch operation on the three-dimensional virtual session scene displayed on the first terminal interface, from which the touch trajectory is obtained. The touch operation includes pressing and dragging operations.
S804: map the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene.
It can be appreciated that the three-dimensional virtual session scene finally displayed on the first terminal display screen is obtained by projecting, according to the observation point, the three-dimensional virtual session scene composed of the three-dimensional background model and the three-dimensional virtual session members onto the display screen. Different observation points yield different projections of the three-dimensional virtual session scene on the display screen.
Specifically, the first terminal may map the touch trajectory to the motion track of the observation point in the three-dimensional virtual session scene according to a mapping relation between touch points and observation points.
S806: determine the position of the moved observation point according to the motion track.
Specifically, the first terminal determines the position of the moved observation point according to the determined motion track of the observation point.
S808: project the three-dimensional background model and the three-dimensional virtual session members for display according to the position of the moved observation point.
Specifically, the first terminal takes the position of the moved observation point as the new observation point and projects the three-dimensional background model and the three-dimensional virtual session members onto the first terminal display screen again for display. The three-dimensional background model image obtained by this re-projection differs from the one displayed before the touch operation, and the display angles of the re-projected three-dimensional virtual session members also differ from those displayed before the touch operation; the re-projected three-dimensional background model and three-dimensional virtual session members thus compose the three-dimensional virtual session scene under the new viewing angle.
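A minimal Python sketch of steps S804 to S808 follows, assuming the observation point orbits the scene on a sphere and that drag deltas map linearly to yaw and pitch; the orbital model, the sensitivity constant, and the pitch clamp are illustrative assumptions rather than the patent's mapping relation.

    import math

    def move_observation_point(yaw, pitch, drag_dx, drag_dy, sensitivity=0.01):
        """Map a touch drag (the touch trajectory) to a moved observation point."""
        yaw += drag_dx * sensitivity
        pitch = max(-1.2, min(1.2, pitch + drag_dy * sensitivity))  # avoid flipping over
        return yaw, pitch

    def observation_position(yaw, pitch, radius=10.0):
        """Spherical coordinates of the point from which the scene is re-projected."""
        return (radius * math.cos(pitch) * math.sin(yaw),
                radius * math.sin(pitch),
                radius * math.cos(pitch) * math.cos(yaw))

    # yaw, pitch = move_observation_point(0.0, 0.0, drag_dx=120, drag_dy=-40)
    # the renderer then projects the background model and the members from
    # observation_position(yaw, pitch)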
The above embodiment realizes viewing angle adjustment of the three-dimensional virtual session scene: the scene can be adjusted to the observation angle the user wants, so that the display of the three-dimensional virtual session scene during interaction is more flexible, the displayed scene better meets the user's needs, and the effectiveness of the displayed three-dimensional virtual session scene is improved.
As shown in FIG. 9, in one embodiment, another interaction data processing method is provided, which specifically includes the following steps:
S902: obtain the user face image data corresponding to the currently logged-in first user identifier.
S904: generate the three-dimensional virtual session member corresponding to the first user identifier according to the user face image data and a virtual session member initial model.
S906: obtain the multi-conference identifier corresponding to the currently logged-in first user identifier, and send the multi-conference identifier and the first user identifier to the server, so that the server adds the first user identifier to the members list of the virtual session scene identified by the multi-conference identifier.
S908: capture head image data, recognize the expressive features in the head image data, and obtain expression types and corresponding expressive feature values.
S910: for the expression types in the preset expression type set that were not recognized, assign expressive feature values indicating that the corresponding facial expression actions are not to be triggered.
S912: combine the expressive feature values corresponding to each expression type according to the preset order of the expression types in the preset expression type set, composing the expression data.
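Steps S910 and S912 can be sketched in Python as the encoding counterpart of the decoding example given earlier; as before, the preset order and the 0 "do not trigger" value are illustrative assumptions, not values fixed by the patent.

    PRESET_ORDER = ["open_mouth", "blink", "smile", "cry", "turn_head", "nod"]
    NO_ACTION = 0  # assumed feature value meaning "do not trigger this action"

    def compose_expression_data(recognized):
        """recognized: dict of recognized expression types -> expressive feature values."""
        # unrecognized types get the default value (S910); values are laid out
        # in the preset order of the expression type set (S912)
        return [recognized.get(t, NO_ACTION) for t in PRESET_ORDER]

    # compose_expression_data({"open_mouth": 10, "turn_head": -30})
    # -> [10, 0, 0, 0, -30, 0]

Because the layout is fixed and each entry is a single small value, one frame of expression data stays very small, which is what guarantees its transmission efficiency and quality.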
S914: send the expression data to the terminals corresponding to the second user identifiers that have joined the virtual session scene, so that each terminal extracts from the expression data the expressive feature values corresponding to the recognized expression types and, in the virtual session scene, controls the three-dimensional virtual session member corresponding to the first user identifier to trigger the facial expression actions represented by the extracted expressive feature values.
S916: obtain the three-dimensional virtual session members corresponding to the second user identifiers, and determine the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene.
S918: obtain the three-dimensional background model corresponding to the three-dimensional virtual session scene, distribute the three-dimensional virtual session members at the corresponding distribution positions, and combine them with the three-dimensional background model for display, composing the three-dimensional virtual session scene.
S920: detect a touch operation acting on the three-dimensional virtual session scene and obtain a touch trajectory.
S922: map the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene, and determine the position of the moved observation point according to the motion track.
S924: project the three-dimensional background model and the three-dimensional virtual session members for display according to the position of the moved observation point.
S926: obtain the expression data sent by the terminal corresponding to a second user identifier.
S928: in the virtual session scene, control the virtual session member corresponding to the second user identifier to trigger the facial expression actions represented by the obtained expression data.
The above interaction data processing method realizes interaction by controlling virtual session members to trigger the facial expression actions represented by expression data, rather than interacting on the basis of the users' real images, which improves privacy safety during interaction to a certain extent. Moreover, controlling the virtual session member to trigger the facial expression actions represented by the expression data is another way of presenting the real expressions of the interacting users; it allows users to recognize one another online through facial expression actions and provides a new mode of interaction.
Secondly, by having the currently logged-in first user identifier join the virtual session scene created on the basis of the corresponding multi-conference, an interaction mode is realized in which the virtual session members in the virtual session scene trigger the facial expression actions corresponding to the expression data; this amounts to an improvement built on the multi-conference and proposes a new mode of interaction.
Then, for the expression types in the preset expression type set that were not recognized, the first terminal can assign expressive feature values indicating that the corresponding facial expression actions are not to be triggered, and combine the expressive feature values corresponding to each expression type according to the preset order of the expression types in the preset expression type set to compose the expression data. The expression data can thus be kept very small, guaranteeing its transmission efficiency and quality.
Furthermore, obtaining each virtual session member according to the user image data corresponding to the respective user identifier and the virtual session member initial model makes the virtual session member characterize the corresponding member more distinctly, which facilitates recognition during interaction and in turn helps improve the efficiency and effectiveness of user interaction.
Finally, viewing angle adjustment of the three-dimensional virtual session scene is realized: the scene can be adjusted to the observation angle the user wants, so that the display of the three-dimensional virtual session scene during interaction is more flexible, the displayed scene better meets the user's needs, and the effectiveness of the displayed three-dimensional virtual session scene is improved.
As shown in FIG. 10, in one embodiment, another interaction data processing method is provided. The interaction data processing method may be applied to the first terminal 110 and/or the second terminal 120 in FIG. 1; this embodiment is described mainly as the method applied to the second terminal 120 in FIG. 1. The method includes:
S1002: join the corresponding virtual session scene through the currently logged-in second user identifier.
Here, the virtual session scene is a session scene provided for virtual session members; when the members who have joined the virtual session scene are displayed, they are all displayed as virtual session members. In one embodiment, the virtual session scene may be a virtual room. The virtual session scene may be a three-dimensional virtual session scene or a two-dimensional virtual session scene. The virtual session scene can be created on the basis of a session; specifically, it may be created on the basis of a multi-conference (a session with 3 or more members) or on the basis of a two-person session (a session with only 2 members). In one embodiment, the virtual session scene may also include displayed background information, where the displayed background information may include a background picture or a three-dimensional background model, and the background picture may be a two-dimensional picture or a three-dimensional picture.
In one embodiment, the virtual session scene may be a real-time virtual session scene, that is, a virtual session scene that realizes real-time communication. For example, a WeChat group is a multi-conference; when a real-time call is created in the group, the members who join the call can be displayed as avatars, that is, displayed as virtual session members while the real-time call is carried out, which composes the real-time virtual session scene described above.
A virtual session member is the avatar with which a member in the virtual session scene is displayed. It can be appreciated that an avatar is a fictitious image, different from a real image. Virtual session members include virtual human figures, and may also include avatars of animals, plants, or other things. A virtual session member may be a three-dimensional virtual session member or a two-dimensional virtual session member. A virtual session member may be a default avatar (for example, a virtual session member initial model), or an avatar obtained by combining the virtual session member initial model with user features (such as the user's facial features) and/or user-defined attributes (such as clothing attributes).
The currently logged-in second user identifier is the second user identifier currently logged in to the application for realizing the virtual session scene. The application for realizing the virtual session scene may be an instant messaging application, a social application, a game application, or the like. The terminal corresponding to the currently logged-in second user identifier may be called the "second terminal".
In one embodiment, the second terminal may request the server to add the currently logged-in second user identifier to the members list of the corresponding virtual session scene, thereby joining the corresponding virtual session scene through the currently logged-in second user identifier. After joining the virtual session scene, the second terminal can then communicate with the terminals corresponding to the other user identifiers that have joined the virtual session scene, for example by sending expression data to those terminals. It can be appreciated that the user identifiers that have joined the virtual session scene may be the user identifiers in the members list of that virtual session scene.
In one embodiment, the second terminal may also display the members of the virtual session scene as avatar-based virtual session members in the virtual session scene. The virtual session members displayed by the second terminal may or may not include the virtual session member corresponding to the second user identifier currently logged in through the second terminal. Not displaying that virtual session member in the virtual session scene shown by the second terminal does not affect the interaction between the second terminal and the terminals corresponding to the other members, and also saves the system's computing and display resources.
S1004: receive the expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene.
The expression data is data that can represent corresponding facial expression actions.
The expression data may include a string of expressive feature values arranged in sequence. In one embodiment, the position or order of each expressive feature value characterizes the expression type it corresponds to. For example, if the expression type at the first position is "cry", the expressive feature value at the first position is used to characterize the degree of crying.
The expression data may also include expression type identifiers and corresponding expressive feature values. An expression type is a category of the dimension in which an expression action is performed, including opening the mouth, blinking, smiling, crying, turning the head, nodding, and so on. It can be appreciated that the expression types listed above are merely illustrative and are not intended to limit the categories of expressions; the expression types can be set according to actual needs.
The second terminal may receive the expression data, forwarded by the server, that was sent by the terminal corresponding to the first user identifier that has joined the virtual session scene; it may also directly receive, in a point-to-point manner, the expression data sent by the terminal corresponding to the first user identifier that has joined the virtual session scene.
S1006: extract the expressive feature values from the expression data.
S1008: in the virtual session scene, control the virtual session member corresponding to the first user identifier to trigger the facial expression actions represented by the expressive feature values.
In one embodiment, the second terminal may, in the virtual session scene, control the virtual session member corresponding to the first user identifier to perform the corresponding facial expression actions according to the expression control logic code corresponding to the determined expression type and the extracted expressive feature value. For example, if the facial expression action represented by the expression data is "open the mouth 10 degrees", the virtual session member corresponding to the first user identifier is controlled to perform the action of "opening the mouth 10 degrees".
In another embodiment, the second terminal may also generate corresponding texture information according to the expressive feature value and the corresponding expression type, and display the texture information at the expression display position of the virtual session member corresponding to the first user identifier in the virtual session scene. For example, when the facial expression action represented by the expression data is "cry", the second terminal can generate "teardrop" texture information corresponding to "cry" according to the expression data and display the "teardrop" texture information below the eyes of the virtual session member corresponding to the first user identifier.
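The texture branch can be sketched as a small lookup, shown below in Python; the texture table, the anchor names, and the member's overlay method are all illustrative assumptions, not the patent's API.

    OVERLAY_TEXTURES = {"cry": "teardrop.png"}  # expression type -> texture asset
    OVERLAY_ANCHORS = {"cry": "below_eyes"}     # expression type -> display position

    def show_expression_texture(member, expr_type, feature_value):
        """Display overlay texture info at the member's expression display position."""
        texture = OVERLAY_TEXTURES.get(expr_type)
        if texture is not None and feature_value != 0:
            # scale the overlay by the feature value, e.g. the degree of crying
            member.overlay(texture, anchor=OVERLAY_ANCHORS[expr_type],
                           scale=feature_value)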
With the above interaction data processing method, the corresponding virtual session scene is joined through the currently logged-in second user identifier; the expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene is received; the expressive feature values are extracted from the expression data; and, in the virtual session scene, the virtual session member corresponding to the first user identifier is controlled to trigger the facial expression actions represented by the expressive feature values. Realizing interaction by controlling virtual session members to trigger the facial expression actions represented by expression data, rather than interacting on the basis of the users' real images, improves privacy safety during interaction to a certain extent.
In addition, controlling the virtual session member to trigger the facial expression actions represented by the expression data is another way of presenting the real expressions of the interacting users; it allows users to recognize one another online through facial expression actions and provides a new mode of interaction.
In one embodiment, step S1002 includes: obtaining the multi-conference identifier corresponding to the currently logged-in second user identifier; and sending the multi-conference identifier and the second user identifier to the server, so that the server adds the second user identifier to the members list of the virtual session scene identified by the multi-conference identifier.
Here, the multi-conference identifier uniquely identifies a multi-conference. The number of members in a multi-conference is 3 or more. A multi-conference may be a group, a temporary multi-person chat session, or another type of multi-conference.
It can be appreciated that the currently logged-in second user identifier is a member of the multi-conference corresponding to the multi-conference identifier. The virtual session scene identified by the multi-conference identifier may be a virtual session scene that takes the multi-conference identifier as its direct identifier, that is, the unique identifier of the virtual session scene is the multi-conference identifier itself. Alternatively, it may be a virtual session scene for which the multi-conference identifier serves as an indirect identifier, that is, the unique identifier of the virtual session scene is a virtual session scene identifier uniquely corresponding to the multi-conference identifier; the virtual session scene identifier, and in turn the corresponding virtual session scene, can be determined from the multi-conference identifier, so the multi-conference identifier can indirectly uniquely identify the virtual session scene.
Specifically, the user may log in to the application for realizing the virtual session scene with the second user identifier and, after logging in successfully, open a multi-conference interface on the second terminal, the opened multi-conference interface being the interface of the multi-conference corresponding to the multi-conference identifier associated with the second user identifier. The user may initiate an operation of joining the virtual session scene in the opened multi-conference interface; in response to the operation, the second terminal obtains the multi-conference identifier corresponding to the currently logged-in second user identifier and sends the multi-conference identifier and the second user identifier to the server, and the server adds the second user identifier to the members list of the virtual session scene identified by the multi-conference identifier, thereby joining the corresponding virtual session scene through the second user identifier.
In one embodiment, the server may return access information of the virtual session scene identified by the multi-conference identifier to the second terminal, and the second terminal can join the virtual session scene according to the access information. The access information includes an access IP address and a port.
In the above embodiment, this amounts to having the currently logged-in second user identifier join the virtual session scene created on the basis of the corresponding multi-conference, thereby realizing an interaction mode in which the virtual session members in the virtual session scene trigger the facial expression actions corresponding to the expression data; this amounts to an improvement built on the multi-conference and proposes a new mode of interaction.
In one embodiment, sending the multi-conference identifier and the second user identifier to the server so that the server adds the second user identifier to the members list of the virtual session scene identified by the multi-conference identifier includes: sending the multi-conference identifier and the second user identifier to the server, so that, when the virtual session scene identified by the multi-conference identifier already exists, the server adds the second user identifier to the members list of the virtual session scene identified by the multi-conference identifier; or sending the multi-conference identifier and the second user identifier to the server, so that, when no virtual session scene identified by the multi-conference identifier exists, the server creates the virtual session scene identified by the multi-conference identifier and adds the second user identifier to the members list of the created virtual session scene.
Specifically, the second terminal sends the multi-conference identifier and the second user identifier to the server, requesting the server to add the second user identifier to the virtual session scene identified by the multi-conference identifier. The server can detect whether a virtual session scene corresponding to the multi-conference identifier exists.
When the virtual session scene identified by the multi-conference identifier already exists, the server can add the second user identifier to the members list of the virtual session scene identified by the multi-conference identifier, so that the second user identifier joins the virtual session scene identified by the multi-conference identifier.
When no virtual session scene corresponding to the multi-conference identifier exists, the server can create a new virtual session scene according to the multi-conference identifier, either using the multi-conference identifier as the direct identifier that uniquely identifies the newly created virtual session scene, or generating a virtual session scene identifier uniquely corresponding to the multi-conference identifier and to the newly created virtual session scene, so that the multi-conference identifier uniquely identifies the created virtual session scene as an indirect identifier. Further, the server can add the second user identifier to the members list of the created virtual session scene.
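On the server side, this join flow is essentially get-or-create; the following minimal Python sketch assumes an in-memory map from multi-conference identifier to members list, which is an illustrative simplification of whatever storage the server actually uses.

    scenes = {}  # multi-conference identifier -> members list of the scene

    def join_virtual_session_scene(conference_id, user_id):
        """Create the scene on first join; either way, add the user to its members list."""
        members = scenes.setdefault(conference_id, [])  # create scene if absent
        if user_id not in members:
            members.append(user_id)
        return members

Because the same call handles both the first joiner, who implicitly creates the scene, and every later joiner, a single set of rules serves all users, which is exactly the point made in the next paragraph.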
In the above embodiment, the creation of the virtual session scene and the joining processing are integrated: the user only needs to request to join the virtual session scene by sending the corresponding multi-conference identifier to the server through the terminal, and regardless of whether a virtual session scene corresponding to the multi-conference identifier currently exists on the server, the user can join the virtual session scene identified by that multi-conference identifier, which saves the separate operation step of creating a virtual session scene. In addition, the same set of rules applies to all users, both the user who first creates the virtual session scene and the users who apply to join after the virtual session scene is created, avoiding the redundancy caused by multiple sets of rules and improving the applicability of the logic rules.
In one embodiment, the method further includes: obtaining the virtual session members corresponding to the user identifiers in the members list of the virtual session scene; determining the distribution positions of the virtual session members in the virtual session scene; obtaining the background picture corresponding to the virtual session scene; distributing the obtained virtual session members at the corresponding distribution positions; and superimposing them on the background picture for display, composing the virtual session scene.
Here, the background picture is the picture used as the display background. The background picture may be a two-dimensional background picture or a three-dimensional background picture, and it may be a virtual background picture or a real background picture. A virtual background picture is a picture displaying a virtual scene; for example, what is shown in a comic is a kind of virtual scene. A real background picture is a picture displaying a real scene; for example, a photo taken of a real scene displays that real scene.
Specifically, the second terminal may obtain from the server the virtual session members corresponding to the user identifiers in the members list of the virtual session scene, that is, obtain the avatar data corresponding to the user identifiers in the members list of the virtual session scene. A virtual session member may be a three-dimensional virtual session member or a two-dimensional virtual session member.
The second terminal may determine, according to the number of user identifiers in the members list of the virtual session scene, the size of a geometric figure used for distributing virtual session members, and select positions on the geometric figure matching that number, so as to determine the distribution positions, in the virtual session scene, of the virtual session members corresponding to the user identifiers in the members list of the virtual session scene. The positions matching the number may be selected at random on the geometric figure, or according to a preset position selection rule.
For example, if the number of user identifiers in the members list of the virtual session scene is 5, the size of the geometric figure for distributing virtual session members is determined according to the number 5, and 5 positions are then selected on the geometric figure; each position serves as the distribution position, in the virtual session scene, of the virtual session member corresponding to one user identifier in the members list of the virtual session scene.
In one embodiment, the second terminal may also determine the size of the geometric figure for distributing virtual session members according to the number of user identifiers in the members list of the virtual session scene excluding the second user identifier, select positions matching that number on the geometric figure, and determine the distribution positions, in the virtual session scene, of the virtual session members corresponding to the user identifiers in the members list other than the second user identifier.
The second terminal may distribute the obtained virtual session members at the corresponding distribution positions and superimpose them on the obtained background picture, so as to compose the virtual session scene, and then output and display the virtual session scene.
In the above embodiment, superimposing the virtual session members on the background picture to compose the displayed virtual session scene enriches the virtual session scene, makes the virtual session scene displayed during interaction closer to a real-life session scene, and realizes diversity in the modes of interaction.
In one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. As shown in FIG. 11, the method further includes a virtual session scene display step, which specifically includes the following steps:
S1102: obtain the three-dimensional virtual session members corresponding to the user identifiers in the members list of the virtual session scene.
Specifically, the second terminal may obtain from the server the three-dimensional virtual session members corresponding to the user identifiers in the members list of the virtual session scene, that is, obtain the three-dimensional avatar data corresponding to the user identifiers in the members list of the virtual session scene.
S1104: determine the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene.
The second terminal may determine, according to the number of user identifiers in the members list of the three-dimensional virtual session scene, the size of a geometric figure used for distributing three-dimensional virtual session members, and select positions on the geometric figure matching that number, so as to determine the distribution positions, in the three-dimensional virtual session scene, of the three-dimensional virtual session members corresponding to the user identifiers in the members list of the virtual session scene. The positions matching the number may be selected at random on the geometric figure, or according to a preset position selection rule.
For example, if the number of user identifiers in the members list of the virtual session scene is 5, the size of the geometric figure for distributing three-dimensional virtual session members is determined according to the number 5, and 5 positions are then selected on the geometric figure; each position is the distribution position, in the three-dimensional virtual session scene, of the three-dimensional virtual session member corresponding to one user identifier in the members list of the virtual session scene.
In one embodiment, the second terminal may also determine the size of the geometric figure for distributing three-dimensional virtual session members according to the number of user identifiers in the members list of the three-dimensional virtual session scene excluding the second user identifier, select positions matching that number on the geometric figure, and determine the distribution positions, in the three-dimensional virtual session scene, of the three-dimensional virtual session members corresponding to the user identifiers in the members list other than the second user identifier.
S1106: obtain the three-dimensional background model corresponding to the three-dimensional virtual session scene.
Here, the three-dimensional background model is the three-dimensional model used as the display background. The three-dimensional background model may be a three-dimensional virtual background model, which is a model displaying a three-dimensional virtual scene, or a three-dimensional real background model, which is a model displaying a three-dimensional real scene.
S1108: distribute the three-dimensional virtual session members at the corresponding distribution positions and combine them with the three-dimensional background model for display, composing the three-dimensional virtual session scene.
The second terminal may distribute the obtained three-dimensional virtual session members at the corresponding distribution positions and combine them with the three-dimensional background model, so as to compose the three-dimensional virtual session scene, and then output and display the three-dimensional virtual session scene.
In the above embodiment, combining the three-dimensional virtual session members with the three-dimensional background model to compose the displayed three-dimensional virtual session scene makes the three-dimensional virtual session scene displayed during interaction even closer to a real-life session scene, and realizes diversity in the modes of interaction.
In one embodiment, the method further includes a viewing angle adjustment operation, which specifically includes the following steps: detecting a touch operation acting on the three-dimensional virtual session scene and obtaining a touch trajectory; mapping the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene; determining the position of the moved observation point according to the motion track; and projecting the three-dimensional background model and the three-dimensional virtual session members for display according to the position of the moved observation point.
Specifically, the user may perform a touch operation on the three-dimensional virtual session scene displayed on the second terminal interface, from which the touch trajectory is obtained. The touch operation includes pressing and dragging operations.
It can be appreciated that the three-dimensional virtual session scene finally displayed on the second terminal display screen is obtained by projecting, according to the observation point, the three-dimensional virtual session scene composed of the three-dimensional background model and the three-dimensional virtual session members onto the display screen. Different observation points yield different projections of the three-dimensional virtual session scene on the display screen.
The second terminal may map the touch trajectory to the motion track of the observation point in the three-dimensional virtual session scene according to a mapping relation between touch points and observation points, and determine the position of the moved observation point according to the determined motion track of the observation point.
Taking the position of the moved observation point as the new observation point, the second terminal projects the three-dimensional background model and the three-dimensional virtual session members onto the second terminal display screen again for display. The three-dimensional background model image obtained by this re-projection differs from the one displayed before the touch operation, and the display angles of the re-projected three-dimensional virtual session members also differ from those displayed before the touch operation; the re-projected three-dimensional background model and three-dimensional virtual session members thus compose the three-dimensional virtual session scene under the new viewing angle.
The above embodiment realizes viewing angle adjustment of the three-dimensional virtual session scene: the scene can be adjusted to the observation angle the user wants, so that the display of the three-dimensional virtual session scene during interaction is more flexible, the displayed scene better meets the user's needs, and the effectiveness of the displayed three-dimensional virtual session scene is improved.
As shown in FIG. 12, in one embodiment, an interaction data processing apparatus 1200 is provided, which includes a joining module 1202, an image capture module 1204, an expression recognition module 1206, and a control module 1208, in which:
the joining module 1202 is configured to join the corresponding virtual session scene through the currently logged-in first user identifier;
the image capture module 1204 is configured to capture head image data;
the expression recognition module 1206 is configured to recognize the expressive features in the head image data and obtain expression data; and
the control module 1208 is configured to send the expression data to the terminals corresponding to the second user identifiers that have joined the virtual session scene, so that each terminal, in the virtual session scene, controls the virtual session member corresponding to the first user identifier to trigger the facial expression actions represented by the expression data.
In one embodiment, the joining module 1202 is further configured to obtain the multi-conference identifier corresponding to the currently logged-in first user identifier, and send the multi-conference identifier and the first user identifier to the server, so that the server adds the first user identifier to the members list of the virtual session scene identified by the multi-conference identifier.
In one embodiment, the joining module 1202 is further configured to send the multi-conference identifier and the first user identifier to the server, so that, when the virtual session scene identified by the multi-conference identifier already exists, the server adds the first user identifier to the members list of the virtual session scene identified by the multi-conference identifier; or to send the multi-conference identifier and the first user identifier to the server, so that, when no virtual session scene identified by the multi-conference identifier exists, the server creates the virtual session scene identified by the multi-conference identifier and adds the first user identifier to the members list of the created virtual session scene.
In one embodiment, the expression recognition module 1206 is further configured to recognize the expressive features in the head image data, obtain expression types and corresponding expressive feature values, and generate expression data including the expressive feature values corresponding to the recognized expression types.
In one embodiment, the expression recognition module 1206 is further configured to assign, for the expression types in the preset expression type set that were not recognized, expressive feature values indicating that the corresponding facial expression actions are not to be triggered, and to combine the expressive feature values corresponding to each expression type according to the preset order of the expression types in the preset expression type set, composing the expression data.
In one embodiment, the control module 1208 is further configured to send the expression data to the terminals corresponding to the second user identifiers that have joined the virtual session scene, so that each terminal extracts from the expression data the expressive feature values corresponding to the recognized expression types and, in the virtual session scene, controls the virtual session member corresponding to the first user identifier to trigger the facial expression actions represented by the extracted expressive feature values.
In one embodiment, the control module 1208 is further configured to obtain the expression data sent by the terminal corresponding to a second user identifier and, in the virtual session scene, control the virtual session member corresponding to the second user identifier to trigger the facial expression actions represented by the obtained expression data.
As shown in FIG. 13, in one embodiment, the apparatus 1200 further includes:
a virtual session member generation module 1201, configured to obtain the user face image data corresponding to the currently logged-in first user identifier, and generate the virtual session member corresponding to the first user identifier according to the user face image data and the virtual session member initial model.
In one embodiment, the apparatus 1200 further includes:
a virtual session scene display module (not shown), configured to obtain the virtual session member corresponding to the second user identifier; determine the distribution position of the virtual session member in the virtual session scene; obtain the background picture corresponding to the virtual session scene; distribute the obtained virtual session member at the corresponding distribution position; and superimpose it on the background picture for display, composing the virtual session scene.
As shown in FIG. 14, in one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. The apparatus 1200 further includes:
a three-dimensional virtual session scene display module 1210, configured to obtain the three-dimensional virtual session member corresponding to the second user identifier; determine the distribution position of the three-dimensional virtual session member in the three-dimensional virtual session scene; obtain the three-dimensional background model corresponding to the three-dimensional virtual session scene; distribute the three-dimensional virtual session member at the corresponding distribution position; and combine it with the three-dimensional background model for display, composing the three-dimensional virtual session scene.
As shown in FIG. 15, in one embodiment, the apparatus 1200 further includes:
a viewing angle adjustment module 1212, configured to detect a touch operation acting on the three-dimensional virtual session scene and obtain a touch trajectory; map the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene; determine the position of the moved observation point according to the motion track; and project the three-dimensional background model and the three-dimensional virtual session members for display according to the position of the moved observation point.
As shown in FIG. 16, in one embodiment, an interaction data processing apparatus 1600 is provided, which includes a joining module 1602, an expressive feature extraction module 1604, and a control module 1606, in which:
the joining module 1602 is configured to join the corresponding virtual session scene through the currently logged-in second user identifier;
the expressive feature extraction module 1604 is configured to receive the expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene, and extract the expressive feature values from the expression data; and
the control module 1606 is configured to, in the virtual session scene, control the virtual session member corresponding to the first user identifier to trigger the facial expression actions represented by the expressive feature values.
In one embodiment, the control module 1606 is further configured to determine the expression types corresponding to the extracted expressive feature values; in the virtual session scene, control the virtual session member corresponding to the first user identifier to perform the corresponding facial expression actions according to the expression control logic code corresponding to the determined expression types and the extracted expressive feature values; and/or generate corresponding texture information according to the expressive feature values and the corresponding expression types and, in the virtual session scene, display the texture information at the expression display position of the virtual session member corresponding to the first user identifier.
In one embodiment, the joining module 1602 is further configured to obtain the multi-conference identifier corresponding to the currently logged-in second user identifier, and send the multi-conference identifier and the second user identifier to the server, so that the server adds the second user identifier to the members list of the virtual session scene identified by the multi-conference identifier.
In one embodiment, the joining module 1602 is further configured to send the multi-conference identifier and the second user identifier to the server, so that, when the virtual session scene identified by the multi-conference identifier already exists, the server adds the second user identifier to the members list of the virtual session scene identified by the multi-conference identifier; or to send the multi-conference identifier and the second user identifier to the server, so that, when no virtual session scene identified by the multi-conference identifier exists, the server creates the virtual session scene identified by the multi-conference identifier and adds the second user identifier to the members list of the created virtual session scene.
In one embodiment, the apparatus 1600 further includes:
a virtual session scene display module (not shown), configured to obtain the virtual session members corresponding to the user identifiers in the members list of the virtual session scene; determine the distribution positions of the virtual session members in the virtual session scene; obtain the background picture corresponding to the virtual session scene; distribute the obtained virtual session members at the corresponding distribution positions; and superimpose them on the background picture for display, composing the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. The apparatus 1600 further includes:
a three-dimensional virtual session scene display module (not shown), configured to obtain the three-dimensional virtual session members corresponding to the user identifiers in the members list of the virtual session scene; determine the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; obtain the three-dimensional background model corresponding to the three-dimensional virtual session scene; distribute the three-dimensional virtual session members at the corresponding distribution positions; and combine them with the three-dimensional background model for display, composing the three-dimensional virtual session scene.
In one embodiment, the apparatus 1600 further includes:
a viewing angle adjustment module, configured to detect a touch operation acting on the three-dimensional virtual session scene and obtain a touch trajectory; map the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene; determine the position of the moved observation point according to the motion track; and project the three-dimensional background model and the three-dimensional virtual session members for display according to the position of the moved observation point.
In one embodiment, the interaction data processing apparatus provided by the present application may be implemented in the form of a computer program, and the computer program can run on a computer device as shown in FIG. 2. The non-volatile storage medium of the computer device can store the program modules composing the interaction data processing apparatus, for example, the joining module 1202, the image capture module 1204, the expression recognition module 1206, and the control module 1208 shown in FIG. 12. Each program module includes computer-readable instructions for causing the computer device to perform the steps in the interaction data processing methods of the embodiments of the present application described in this specification. For example, the computer device may join the corresponding virtual session scene through the currently logged-in first user identifier via the joining module 1202 of the interaction data processing apparatus 1200 shown in FIG. 12, capture head image data via the image capture module 1204, recognize the expressive features in the head image data via the expression recognition module 1206 to obtain expression data, and send the expression data via the control module 1208 to the terminals corresponding to the second user identifiers that have joined the virtual session scene, so that each terminal, in the virtual session scene, controls the virtual session member corresponding to the first user identifier to trigger the facial expression actions represented by the expression data.
In one embodiment, a kind of computer equipment, including memory and processor are provided, is deposited in the memory Contain computer-readable instruction, when the computer-readable instruction is executed by the processor so that the processor execute with Lower step: corresponding virtual session scene is added by the first user identifier currently logged in;Acquire head image data;Identification Expressive features in the head image data, obtain expression data;The expression data is sent to, the virtual meeting is added The corresponding terminal of the second user mark of scene is talked about, makes the terminal in the virtual session scene, control and described the The corresponding virtual session member of one user identifier triggers facial expressions and acts represented by the expression data.
In one embodiment, it is added performed by processor by the first user identifier currently logged in corresponding virtual Session context, comprising: obtain the mark of multi-conference corresponding to the first user identifier currently logged in;Send the multi-conference The server is added first user identifier with more people's meetings to server in mark and first user identifier Words identify the members list of identified virtual session scene.
In one embodiment, the transmission multi-conference mark and first user mark performed by processor Know makes the server that first user identifier is added to the virtual meeting identified with the multi-conference to server Talk about the members list of scene, comprising: send the multi-conference mark and first user identifier to server, make the clothes Be engaged in device it is existing identified virtual session scene is identified with the multi-conference when, will first user identifier addition with The multi-conference identifies the members list of identified virtual session scene;Alternatively, sending the multi-conference mark and institute The first user identifier is stated to server, makes the server that the virtual session identified with multi-conference mark be not present When scene, creation identifies identified virtual session scene with the multi-conference, and first user identifier is added and is created The members list for the virtual session scene built.
In one embodiment, the expressive features in the identification head image data performed by processor, obtain To expression data, comprising: identify the expressive features in the head image data, obtain expression type and corresponding expression is special Value indicative;The expression data for expressive features value corresponding with the expression type that identification obtains that generation includes.
In one embodiment, the expression type identified belongs to default expression set of types and closes;Processor is held The expression data for expressive features value corresponding with the expression type that identification obtains that the capable generation includes, comprising: right The unidentified expression type arrived in the conjunction of default expression set of types, assigns the expressive features for indicating not trigger corresponding facial expressions and acts Value;According to the preset order of each expression type in the default expression set of types conjunction, each expression type is respectively corresponded to Expressive features value combination, constitute expression data.
In one embodiment, sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression data, as performed by the processor, includes: sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal extracts from the expression data the expression feature values corresponding to the recognized expression types and controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the extracted expression feature values.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: obtaining expression data sent by the terminal corresponding to the second user identifier; and controlling, in the virtual session scene, the virtual session member corresponding to the second user identifier to perform the facial expression actions represented by the obtained expression data.
In one embodiment, before the virtual session scene corresponding to the currently logged-in first user identifier is joined, the computer-readable instructions further cause the processor to perform the following steps: obtaining user face image data corresponding to the currently logged-in first user identifier; and generating, according to the user face image data and an initial virtual session member model, the virtual session member corresponding to the first user identifier.
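One way to picture this generation step, under the loose assumption that personalization amounts to attaching a per-user face texture to a shared initial model (the dataclass fields below are invented for the example; a real pipeline might instead fit the model to detected facial landmarks):

    from dataclasses import dataclass

    @dataclass
    class VirtualSessionMember:
        user_id: str
        mesh: str = "initial_member_model"  # the shared initial model
        face_texture: bytes = b""           # derived from the user's face image data

    def generate_member(user_id: str, face_image: bytes) -> VirtualSessionMember:
        # Personalize the shared initial model with the user's face image data.
        return VirtualSessionMember(user_id=user_id, face_texture=face_image)

    print(generate_member("first-user", b"\x01\x02"))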
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: obtaining the virtual session member corresponding to the second user identifier; determining the distribution position of the virtual session member in the virtual session scene; obtaining a background picture corresponding to the virtual session scene; and distributing the obtained virtual session member to the corresponding distribution position and displaying it overlaid on the background picture, to constitute the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. The computer-readable instructions further cause the processor to perform the following steps: obtaining the three-dimensional virtual session member corresponding to the second user identifier; determining the distribution position of the three-dimensional virtual session member in the three-dimensional virtual session scene; obtaining a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session member to the corresponding distribution position and displaying it in combination with the three-dimensional background model, to constitute the three-dimensional virtual session scene.
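The composition of the three-dimensional scene can be sketched as placing each member model at its distribution position alongside the background model; the seat coordinates and the tuple representation of scene objects below are assumptions made for the example.

    # Assumed seat coordinates used as distribution positions in the 3D scene.
    SEAT_POSITIONS = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]

    def compose_scene(member_ids: list, background_model: str) -> list:
        scene = [("background", background_model, (0.0, 0.0, 0.0))]
        for member_id, position in zip(member_ids, SEAT_POSITIONS):
            # Distribute each three-dimensional session member to its distribution
            # position; display then combines the members with the background model.
            scene.append(("member", member_id, position))
        return scene

    print(compose_scene(["first-user", "second-user"], "meeting_room_model"))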
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch trajectory; mapping the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene; determining, according to the motion track, the position of the observation point after the movement; and performing projection display of the three-dimensional background model and the three-dimensional virtual session member according to the position of the observation point after the movement.
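The mapping from touch trajectory to observation-point movement behaves like orbiting a camera around the scene; in the sketch below, the degrees-per-pixel factor, the orbit radius, and the fixed camera height are assumptions for the example, not values taken from the disclosure.

    import math

    def observation_point_after_drag(start_angle_deg: float, touch_dx_px: float,
                                     radius: float = 5.0) -> tuple:
        # Map horizontal touch movement to rotation about the scene centre,
        # here 0.5 degree of orbit per pixel of touch trajectory.
        angle = math.radians(start_angle_deg + 0.5 * touch_dx_px)
        # Position of the observation point after the movement; the background
        # model and members are then projected from this new viewpoint.
        return (radius * math.cos(angle), 1.5, radius * math.sin(angle))

    print(observation_point_after_drag(0.0, 90.0))  # a 45-degree orbit around the scene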
In one embodiment, a computer device is provided, including a memory and a processor. The memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps: joining the virtual session scene corresponding to the currently logged-in second user identifier; receiving expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene; extracting expression feature values from the expression data; and controlling, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
In one embodiment, controlling, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values, as performed by the processor, includes: determining the expression types corresponding to the extracted expression feature values; and, in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to perform the corresponding facial expression actions according to the expression control logic code corresponding to the determined expression types and the extracted expression feature values; and/or generating corresponding texture information according to the expression feature values and the corresponding expression types, and presenting, in the virtual session scene, the texture information at the expression display position of the virtual session member corresponding to the first user identifier.
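On the receiving terminal, the extracted feature values can be mapped back to expression types by position and handed to the matching control logic; this sketch reuses the preset order and the 0.0 sentinel assumed in the earlier packing example, with print calls standing in for the real animation and texture routines.

    PRESET_EXPRESSION_TYPES = ("mouth_open", "eye_closed", "head_nod", "head_shake")

    def apply_expression_data(member_id: str, expression_data: list) -> None:
        for expression_type, value in zip(PRESET_EXPRESSION_TYPES, expression_data):
            if value > 0.0:  # 0.0 is the assumed "do not trigger" sentinel
                # Run the control logic associated with this expression type,
                # driven by the extracted expression feature value.
                print(f"{member_id}: trigger {expression_type} at strength {value}")

    apply_expression_data("first-user", [0.8, 0.0, 0.0, 0.0])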
In one embodiment, joining the virtual session scene corresponding to the currently logged-in second user identifier, as performed by the processor, includes: obtaining the multi-person session identifier corresponding to the currently logged-in second user identifier; and sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, sending the multi-person session identifier and the second user identifier to the server so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier, as performed by the processor, includes: sending the multi-person session identifier and the second user identifier to the server, so that the server, when a virtual session scene identified by the multi-person session identifier already exists, adds the second user identifier to the member list of that virtual session scene; or sending the multi-person session identifier and the second user identifier to the server, so that the server, when no virtual session scene identified by the multi-person session identifier exists, creates the virtual session scene identified by the multi-person session identifier and adds the second user identifier to the member list of the created virtual session scene.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: obtaining the virtual session members corresponding to the user identifiers in the member list of the virtual session scene; determining the distribution positions of the virtual session members in the virtual session scene; obtaining a background picture corresponding to the virtual session scene; and distributing the obtained virtual session members to the corresponding distribution positions and displaying them overlaid on the background picture, to constitute the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. The computer-readable instructions further cause the processor to perform the following steps: obtaining the three-dimensional virtual session members corresponding to the user identifiers in the member list of the virtual session scene; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; obtaining a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members to the corresponding distribution positions and displaying them in combination with the three-dimensional background model, to constitute the three-dimensional virtual session scene.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch trajectory; mapping the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene; determining, according to the motion track, the position of the observation point after the movement; and performing projection display of the three-dimensional background model and the three-dimensional virtual session members according to the position of the observation point after the movement.
In one embodiment, a non-volatile readable storage medium storing computer-readable instructions is provided. When the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps: joining the virtual session scene corresponding to the currently logged-in first user identifier; acquiring head image data; recognizing expression features in the head image data to obtain expression data; and sending the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression data.
In one embodiment, joining the virtual session scene corresponding to the currently logged-in first user identifier, as performed by the processor, includes: obtaining the multi-person session identifier corresponding to the currently logged-in first user identifier; and sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, sending the multi-person session identifier and the first user identifier to the server so that the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier, as performed by the processor, includes: sending the multi-person session identifier and the first user identifier to the server, so that the server, when a virtual session scene identified by the multi-person session identifier already exists, adds the first user identifier to the member list of that virtual session scene; or sending the multi-person session identifier and the first user identifier to the server, so that the server, when no virtual session scene identified by the multi-person session identifier exists, creates the virtual session scene identified by the multi-person session identifier and adds the first user identifier to the member list of the created virtual session scene.
In one embodiment, recognizing the expression features in the head image data to obtain expression data, as performed by the processor, includes: recognizing the expression features in the head image data to obtain expression types and corresponding expression feature values; and generating expression data that includes the expression feature values corresponding to the recognized expression types.
In one embodiment, the recognized expression types belong to a preset expression type set. Generating the expression data that includes the expression feature values corresponding to the recognized expression types, as performed by the processor, includes: for expression types in the preset expression type set that are not recognized, assigning an expression feature value indicating that the corresponding facial expression action is not to be triggered; and combining the expression feature values corresponding to the respective expression types according to the preset order of the expression types in the preset expression type set, to constitute the expression data.
In one embodiment, sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression data, as performed by the processor, includes: sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal extracts from the expression data the expression feature values corresponding to the recognized expression types and controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the extracted expression feature values.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: obtaining expression data sent by the terminal corresponding to the second user identifier; and controlling, in the virtual session scene, the virtual session member corresponding to the second user identifier to perform the facial expression actions represented by the obtained expression data.
In one embodiment, before the virtual session scene corresponding to the currently logged-in first user identifier is joined, the computer-readable instructions further cause the processor to perform the following steps: obtaining user face image data corresponding to the currently logged-in first user identifier; and generating, according to the user face image data and an initial virtual session member model, the virtual session member corresponding to the first user identifier.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: obtaining the virtual session member corresponding to the second user identifier; determining the distribution position of the virtual session member in the virtual session scene; obtaining a background picture corresponding to the virtual session scene; and distributing the obtained virtual session member to the corresponding distribution position and displaying it overlaid on the background picture, to constitute the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. The computer-readable instructions further cause the processor to perform the following steps: obtaining the three-dimensional virtual session member corresponding to the second user identifier; determining the distribution position of the three-dimensional virtual session member in the three-dimensional virtual session scene; obtaining a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session member to the corresponding distribution position and displaying it in combination with the three-dimensional background model, to constitute the three-dimensional virtual session scene.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch trajectory; mapping the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene; determining, according to the motion track, the position of the observation point after the movement; and performing projection display of the three-dimensional background model and the three-dimensional virtual session member according to the position of the observation point after the movement.
In one embodiment, a non-volatile readable storage medium storing computer-readable instructions is provided. When the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps: joining the virtual session scene corresponding to the currently logged-in second user identifier; receiving expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene; extracting expression feature values from the expression data; and controlling, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
In one embodiment, controlling, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values, as performed by the processor, includes: determining the expression types corresponding to the extracted expression feature values; and, in the virtual session scene, controlling the virtual session member corresponding to the first user identifier to perform the corresponding facial expression actions according to the expression control logic code corresponding to the determined expression types and the extracted expression feature values; and/or generating corresponding texture information according to the expression feature values and the corresponding expression types, and presenting, in the virtual session scene, the texture information at the expression display position of the virtual session member corresponding to the first user identifier.
In one embodiment, joining the virtual session scene corresponding to the currently logged-in second user identifier, as performed by the processor, includes: obtaining the multi-person session identifier corresponding to the currently logged-in second user identifier; and sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
In one embodiment, sending the multi-person session identifier and the second user identifier to the server so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier, as performed by the processor, includes: sending the multi-person session identifier and the second user identifier to the server, so that the server, when a virtual session scene identified by the multi-person session identifier already exists, adds the second user identifier to the member list of that virtual session scene; or sending the multi-person session identifier and the second user identifier to the server, so that the server, when no virtual session scene identified by the multi-person session identifier exists, creates the virtual session scene identified by the multi-person session identifier and adds the second user identifier to the member list of the created virtual session scene.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: obtaining the virtual session members corresponding to the user identifiers in the member list of the virtual session scene; determining the distribution positions of the virtual session members in the virtual session scene; obtaining a background picture corresponding to the virtual session scene; and distributing the obtained virtual session members to the corresponding distribution positions and displaying them overlaid on the background picture, to constitute the virtual session scene.
In one embodiment, the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene. The computer-readable instructions further cause the processor to perform the following steps: obtaining the three-dimensional virtual session members corresponding to the user identifiers in the member list of the virtual session scene; determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene; obtaining a three-dimensional background model corresponding to the three-dimensional virtual session scene; and distributing the three-dimensional virtual session members to the corresponding distribution positions and displaying them in combination with the three-dimensional background model, to constitute the three-dimensional virtual session scene.
In one embodiment, the computer-readable instructions further cause the processor to perform the following steps: detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch trajectory; mapping the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene; determining, according to the motion track, the position of the observation point after the movement; and performing projection display of the three-dimensional background model and the three-dimensional virtual session members according to the position of the observation point after the movement.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, it may include the processes of the embodiments of the above methods. The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or may be a random access memory (RAM), or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. An interaction data processing method, the method comprising:
joining the virtual session scene corresponding to the currently logged-in first user identifier;
acquiring head image data;
recognizing expression features in the head image data to obtain expression data;
sending the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression data.
2. The method according to claim 1, wherein joining the virtual session scene corresponding to the currently logged-in first user identifier comprises:
obtaining the multi-person session identifier corresponding to the currently logged-in first user identifier;
sending the multi-person session identifier and the first user identifier to a server, so that the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
3. The method according to claim 2, wherein sending the multi-person session identifier and the first user identifier to the server so that the server adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier comprises:
sending the multi-person session identifier and the first user identifier to the server, so that the server, when a virtual session scene identified by the multi-person session identifier already exists, adds the first user identifier to the member list of the virtual session scene identified by the multi-person session identifier; or,
sending the multi-person session identifier and the first user identifier to the server, so that the server, when no virtual session scene identified by the multi-person session identifier exists, creates the virtual session scene identified by the multi-person session identifier and adds the first user identifier to the member list of the created virtual session scene.
4. The method according to claim 1, wherein recognizing the expression features in the head image data to obtain the expression data comprises:
recognizing the expression features in the head image data to obtain expression types and corresponding expression feature values;
generating expression data that includes the expression feature values corresponding to the recognized expression types.
5. The method according to claim 4, wherein the recognized expression types belong to a preset expression type set;
generating the expression data that includes the expression feature values corresponding to the recognized expression types comprises:
for expression types in the preset expression type set that are not recognized, assigning an expression feature value indicating that the corresponding facial expression action is not to be triggered;
combining the expression feature values corresponding to the respective expression types according to the preset order of the expression types in the preset expression type set, to constitute the expression data.
6. The method according to claim 4 or 5, wherein sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression data, comprises:
sending the expression data to the terminal corresponding to the second user identifier that has joined the virtual session scene, so that the terminal extracts from the expression data the expression feature values corresponding to the recognized expression types and controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the extracted expression feature values.
7. The method according to any one of claims 1 to 5, wherein the method further comprises:
obtaining expression data sent by the terminal corresponding to the second user identifier;
controlling, in the virtual session scene, the virtual session member corresponding to the second user identifier to perform the facial expression actions represented by the obtained expression data.
8. The method according to any one of claims 1 to 5, wherein before the virtual session scene corresponding to the currently logged-in first user identifier is joined, the method further comprises:
obtaining user face image data corresponding to the currently logged-in first user identifier;
generating, according to the user face image data and an initial virtual session member model, the virtual session member corresponding to the first user identifier.
9. The method according to any one of claims 1 to 5, wherein the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene;
the method further comprises:
obtaining the three-dimensional virtual session member corresponding to the second user identifier;
determining the distribution position of the three-dimensional virtual session member in the three-dimensional virtual session scene;
obtaining a three-dimensional background model corresponding to the three-dimensional virtual session scene;
distributing the three-dimensional virtual session member to the corresponding distribution position and displaying it in combination with the three-dimensional background model, to constitute the three-dimensional virtual session scene.
10. The method according to claim 9, wherein the method further comprises:
detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch trajectory;
mapping the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene;
determining, according to the motion track, the position of the observation point after the movement;
performing projection display of the three-dimensional background model and the three-dimensional virtual session member according to the position of the observation point after the movement.
11. An interaction data processing method, the method comprising:
joining the virtual session scene corresponding to the currently logged-in second user identifier;
receiving expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene;
extracting expression feature values from the expression data;
controlling, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
12. The method according to claim 11, wherein controlling, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values comprises:
determining the expression types corresponding to the extracted expression feature values;
controlling, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the corresponding facial expression actions according to the expression control logic code corresponding to the determined expression types and the extracted expression feature values; and/or,
generating corresponding texture information according to the expression feature values and the corresponding expression types, and presenting, in the virtual session scene, the texture information at the expression display position of the virtual session member corresponding to the first user identifier.
13. The method according to claim 11, wherein joining the virtual session scene corresponding to the currently logged-in second user identifier comprises:
obtaining the multi-person session identifier corresponding to the currently logged-in second user identifier;
sending the multi-person session identifier and the second user identifier to a server, so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier.
14. The method according to claim 13, wherein sending the multi-person session identifier and the second user identifier to the server so that the server adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier comprises:
sending the multi-person session identifier and the second user identifier to the server, so that the server, when a virtual session scene identified by the multi-person session identifier already exists, adds the second user identifier to the member list of the virtual session scene identified by the multi-person session identifier; or,
sending the multi-person session identifier and the second user identifier to the server, so that the server, when no virtual session scene identified by the multi-person session identifier exists, creates the virtual session scene identified by the multi-person session identifier and adds the second user identifier to the member list of the created virtual session scene.
15. The method according to any one of claims 11 to 14, wherein the virtual session member is a three-dimensional virtual session member and the virtual session scene is a three-dimensional virtual session scene;
the method further comprises:
obtaining the three-dimensional virtual session members corresponding to the user identifiers in the member list of the virtual session scene;
determining the distribution positions of the three-dimensional virtual session members in the three-dimensional virtual session scene;
obtaining a three-dimensional background model corresponding to the three-dimensional virtual session scene;
distributing the three-dimensional virtual session members to the corresponding distribution positions and displaying them in combination with the three-dimensional background model, to constitute the three-dimensional virtual session scene.
16. The method according to claim 15, wherein the method further comprises:
detecting a touch operation acting on the three-dimensional virtual session scene to obtain a touch trajectory;
mapping the touch trajectory to a motion track of the observation point in the three-dimensional virtual session scene;
determining, according to the motion track, the position of the observation point after the movement;
performing projection display of the three-dimensional background model and the three-dimensional virtual session members according to the position of the observation point after the movement.
17. An interaction data processing apparatus, wherein the apparatus comprises:
an addition module, configured to join the virtual session scene corresponding to the currently logged-in first user identifier;
an image capture module, configured to acquire head image data;
an expression recognition module, configured to recognize expression features in the head image data to obtain expression data;
a control module, configured to send the expression data to the terminal corresponding to a second user identifier that has joined the virtual session scene, so that the terminal controls, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression data.
18. An interaction data processing apparatus, wherein the apparatus comprises:
an addition module, configured to join the virtual session scene corresponding to the currently logged-in second user identifier;
an expression feature extraction module, configured to receive expression data sent by the terminal corresponding to a first user identifier that has joined the virtual session scene, and to extract expression feature values from the expression data;
a control module, configured to control, in the virtual session scene, the virtual session member corresponding to the first user identifier to perform the facial expression actions represented by the expression feature values.
19. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 5 or 11 to 14.
20. A storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 5 or 11 to 14.
CN201710458909.5A 2017-06-16 2017-06-16 Interactive data processing method and device, computer equipment and storage medium Active CN109150690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710458909.5A CN109150690B (en) 2017-06-16 2017-06-16 Interactive data processing method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109150690A (en) 2019-01-04
CN109150690B CN109150690B (en) 2021-05-25

Family

ID=64830555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710458909.5A Active CN109150690B (en) 2017-06-16 2017-06-16 Interactive data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109150690B (en)



Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127737A (en) * 2007-09-25 2008-02-20 腾讯科技(深圳)有限公司 Implementation method of UI, user terminal and instant communication system
US20090204908A1 (en) * 2008-02-11 2009-08-13 Ganz Friends list management
CN101635705A (en) * 2008-07-23 2010-01-27 上海赛我网络技术有限公司 Interaction method based on three-dimensional virtual map and figure and system for realizing same
CN101908232A (en) * 2010-07-30 2010-12-08 重庆埃默科技有限责任公司 Interactive scene simulation system and scene virtual simulation method
CN102142154A (en) * 2011-05-10 2011-08-03 中国科学院半导体研究所 Method and device for generating virtual face image
CN103368816A (en) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 Instant communication method based on virtual character and system
CN103368929A (en) * 2012-04-11 2013-10-23 腾讯科技(深圳)有限公司 Video chatting method and system
CN105653012A (en) * 2014-08-26 2016-06-08 蔡大林 Multi-user immersion type full interaction virtual reality project training system
CN105797374A (en) * 2014-12-31 2016-07-27 深圳市亿思达科技集团有限公司 Method for giving out corresponding voice in following way by being matched with face expressions and terminal
CN105797376A (en) * 2014-12-31 2016-07-27 深圳市亿思达科技集团有限公司 Method and terminal for controlling role model behavior according to expression of user
CN106652015A (en) * 2015-10-30 2017-05-10 深圳超多维光电子有限公司 Virtual figure head portrait generation method and apparatus
CN106326678A (en) * 2016-09-13 2017-01-11 捷开通讯(深圳)有限公司 Sample room experiencing method, equipment and system based on virtual reality
CN106598438A (en) * 2016-12-22 2017-04-26 腾讯科技(深圳)有限公司 Scene switching method based on mobile terminal, and mobile terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110401810A (en) * 2019-06-28 2019-11-01 广东虚拟现实科技有限公司 Processing method, device, system, electronic equipment and the storage medium of virtual screen
CN110418095A (en) * 2019-06-28 2019-11-05 广东虚拟现实科技有限公司 Processing method, device, electronic equipment and the storage medium of virtual scene
CN110418095B (en) * 2019-06-28 2021-09-14 广东虚拟现实科技有限公司 Virtual scene processing method and device, electronic equipment and storage medium
CN111444389A (en) * 2020-03-27 2020-07-24 焦点科技股份有限公司 Conference video analysis method and system based on target detection
WO2023082737A1 (en) * 2021-11-12 2023-05-19 腾讯科技(深圳)有限公司 Data processing method and apparatus, and device and readable storage medium
CN114598738A (en) * 2022-02-22 2022-06-07 网易(杭州)网络有限公司 Data processing method, data processing device, storage medium and computer equipment

Also Published As

Publication number Publication date
CN109150690B (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN109150690A (en) Interaction data processing method, device, computer equipment and storage medium
US11595617B2 (en) Communication using interactive avatars
JP6616288B2 (en) Method, user terminal, and server for information exchange in communication
CN108874114B (en) Method and device for realizing emotion expression of virtual object, computer equipment and storage medium
TW201832051A (en) Method and system for group video conversation, terminal, virtual reality apparatus, and network apparatus
CN108305317A (en) A kind of image processing method, device and storage medium
CN113014471B (en) Session processing method, device, terminal and storage medium
CN107210949A (en) User terminal using the message service method of role, execution methods described includes the message application of methods described
CN107924575A (en) The asynchronous 3D annotations of video sequence
WO2022252866A1 (en) Interaction processing method and apparatus, terminal and medium
CN110213521A (en) A kind of virtual instant communicating method
CN106156237B (en) Information processing method, information processing unit and user equipment
CN109428859A (en) A kind of synchronized communication method, terminal and server
CN108616712A (en) A kind of interface operation method, device, equipment and storage medium based on camera
JP6563580B1 (en) Communication system and program
CN110536095A (en) Call method, device, terminal and storage medium
CN109670385A (en) The method and device that expression updates in a kind of application program
CN102780649A (en) Method, client and system for filling instant image in instant communication message
CN109260710A (en) A kind of game APP optimization method, device and terminal device based on mood
CN109039851B (en) Interactive data processing method and device, computer equipment and storage medium
CN108965101A (en) Conversation message processing method, device, storage medium and computer equipment
CN108615261A (en) The processing method, processing unit and storage medium of image in augmented reality
CN108989268A (en) Session methods of exhibiting, device and computer equipment
CN107925657A (en) Via the asynchronous session of user equipment
CN107832366A (en) Video sharing method and device, terminal installation and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant