WO2023142415A1 - Social interaction method, apparatus, device, storage medium, and program product - Google Patents

Social interaction method, apparatus, device, storage medium, and program product

Info

Publication number
WO2023142415A1
Authority
WO
WIPO (PCT)
Prior art keywords
social
virtual
scene
target
room
Prior art date
Application number
PCT/CN2022/109448
Other languages
English (en)
French (fr)
Inventor
卢欣琪
李安琪
王雁
付士成
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to US 18/324,593 (published as US 2023/0298290 A1)
Publication of WO2023142415A1

Classifications

    • G06T 19/006 - Mixed reality
    • G06F 16/9536 - Search customisation based on social or collaborative filtering
    • G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
    • G06F 3/0488 - Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06Q 10/101 - Collaborative creation, e.g. joint development of products or services
    • G06Q 30/015 - Providing customer assistance, e.g. assisting a customer within a business location or via helpdesk
    • G06Q 30/0277 - Online advertisement
    • G06Q 30/0641 - Shopping interfaces
    • G06Q 50/01 - Social networking
    • G06T 11/60 - Editing figures and text; Combining figures or text
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • H04L 51/046 - Interoperability with other network applications or services
    • H04L 51/52 - User-to-user messaging for supporting social networking services
    • G06Q 2220/00 - Business processing using cryptography
    • G06T 2200/24 - Indexing scheme involving graphical user interfaces [GUIs]
    • H04L 51/10 - Multimedia information

Definitions

  • the present application relates to the field of computer technology, and in particular to a social interaction method, apparatus, device, storage medium, and program product.
  • Embodiments of the present application provide a social interaction method, apparatus, device, storage medium, and program product, which can improve the efficiency of human-computer interaction in social applications.
  • the embodiment of the present application provides a social interaction method, executed by a terminal, including:
  • displaying a social service page, where a first real-scene room is displayed on the social service page, and the first real-scene room corresponds to a first virtual social scene; and in response to receiving an interactive operation on the first virtual social scene, displaying a target avatar corresponding to a target social object in the first virtual social scene, where the target social object is an object controlled by the terminal.
  • an embodiment of the present application provides a social interaction device, including:
  • a display module configured to display a social service page, where a first real room is displayed on the social service page, and the first real room corresponds to a first virtual social scene;
  • a processing module configured to display a target avatar corresponding to a target social object in the first virtual social scene in response to receiving an interactive operation on the first virtual social scene, where the target social object is an object controlled by the terminal that displays the social service page.
  • an embodiment of the present application provides a computer device, including a memory, a processor, and a network interface, where the processor is connected to the memory and the network interface, the network interface is used to provide a network communication function, the memory is used to store program code, and the processor is used to call the program code to execute the methods in the embodiments of the present application.
  • an embodiment of the present application provides a computer-readable storage medium, including: a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method in the embodiment of the present application is implemented.
  • an embodiment of the present application provides a computer program product or computer program
  • the computer program product or computer program includes computer instructions
  • the computer instructions are stored in a computer-readable storage medium
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the method in the embodiments of the present application.
  • the first real-scene room can be displayed on the social service page, and the first real-scene room displays the first virtual social scene.
  • the target avatar corresponding to the target social object may be displayed in the first real scene room.
  • the target social object can enter the first real-scene room to participate in social interaction through the target avatar, so that the target social object has more intuitive visual feedback, which enhances the immersion of the target social object and improves the sense of realism and the fun of social interaction.
  • a real-scene room and a virtual social scene corresponding to the real-scene room are provided in the social application, so that an interactive operation can be received in the virtual social scene and the interaction corresponding to the interactive operation can be realized; different interactions are integrated in the same real-scene room, switching back and forth between multiple social interfaces is avoided, and the efficiency of human-computer interaction in social applications is improved.
  • FIG. 1 is a structural diagram of a social interaction system provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a social interaction method provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a social service page provided by an embodiment of the present application
  • FIG. 4 is a schematic flow diagram for judging whether a social conversation message contains target interaction content provided by an embodiment of the present application
  • FIG. 5 is a schematic diagram of an interactive operation initiated by a target social object on an avatar provided by an embodiment of the present application
  • FIG. 6 is a schematic flow chart of judging the prop attributes of virtual props provided by an embodiment of the present application
  • FIG. 7 is a schematic flow chart of displaying a target avatar provided by an embodiment of the present application
  • FIG. 8 is a schematic diagram of a target avatar display state provided by an embodiment of the present application
  • FIG. 9 is a schematic diagram of a theme-customized room provided by an embodiment of the present application
  • FIG. 10 is a schematic diagram of an activity page of a social activity provided by an embodiment of the present application
  • FIG. 11 is a schematic flowchart of another social interaction method provided by an embodiment of the present application
  • FIG. 12 is a schematic flow diagram of switching a real-scene room provided by an embodiment of the present application
  • FIG. 13 is a schematic diagram of layered loading provided by an embodiment of the present application
  • FIG. 14 is a schematic diagram of light and shadow mode rendering provided by an embodiment of the present application
  • FIG. 15 is a schematic structural diagram of a social interaction device provided by an embodiment of the present application
  • FIG. 16 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • a social client can refer to a social APP (application program) that corresponds to a server and provides local services for users.
  • a social client can include, but is not limited to: an instant messaging APP, a map-based social APP, a content interaction APP, a game social APP, an installation-free APP (an application that can be used without downloading and installing; users can open and use it by scanning or searching, such as a mini program), and the like; a social client can also refer to a website that corresponds to a server and provides local services with social conversation functions for users, such as a social networking site or a forum.
  • AIO is short for All In One, and refers to the interactive scene/window in which users chat with friends.
  • for example, in some instant messaging applications with a social conversation function, users participate in many different types of conversations, such as conversations with friends, groups, and public accounts.
  • Social service page refers to a service page based on social functions, and the content presented on the social service page can be two-dimensional or three-dimensional.
  • the business form of a social application is mainly based on flat UI (User Interface) page jumps, and the information content and the status of the host and guest social objects are all carried by the UI.
  • the social service page includes a three-dimensional real scene room and a two-dimensional or three-dimensional UI function control.
  • when the UI function control is a two-dimensional control, such as a message input field, an input operation in the message input field is received; when the UI function control is a three-dimensional control, such as a prop control in the three-dimensional real-scene room, a control operation on the avatar in the three-dimensional real-scene room is received to control the avatar to touch the prop control.
  • a room refers to a virtual logical space with certain functions, which can be a flat space presented on the page, or a three-dimensional space presented on the page.
  • a room can also be called a channel, community, circle, lobby, etc.; for example, a chat room can also be called a chatroom, and a live broadcast room can also be called a live channel.
  • the real scene room in the embodiment of this application refers to a three-dimensional simulation building that carries the corresponding business functions and occupies a certain three-dimensional space in the three-dimensional exploration space.
  • the real scene room is a three-dimensional virtual logical space with certain functions.
  • a virtual logical space created by the account of at least one social object in the social application is a real-scene room.
  • in other words, a real-scene room is equivalent to a room.
  • Virtual social scene refers to the scene of social communication based on the Internet, which may include but not limited to: individual social conversation scene, group social conversation scene, game competition scene, social operation activity scene, etc.
  • the single social conversation scene refers to a scene where two social objects conduct a social conversation.
  • a group social conversation scenario refers to a scenario in which two or more social objects participate in a social conversation.
  • the game competition scene refers to a scene in which more than two social objects are divided into at least two camps, and the at least two camps conduct a game competition.
  • the social operation activity scenario refers to an operation activity scenario initiated by the server backend, or initiated by any one or more social objects, inviting other social objects to participate.
  • Virtual image usually refers to a virtual character image used for social interaction on the Internet.
  • the expression, demeanor, and dress of the virtual character image can be personalized.
  • the avatar refers to an avatar used to represent a social object for social interaction in a virtual social scene. It is worth noting that, in the above examples, the avatar is described by taking a virtual human image as an example; in some embodiments, the avatar may also be a virtual animal image, a virtual cartoon image, etc., which is not limited in this embodiment.
  • Meshbox refers to the container for placing the model in the three-dimensional (3-dimension, 3D) engineering scene.
  • the model can include characters, buildings, objects, etc.
  • 3D scene assets refer to the materials that need to be loaded and rendered to build scenes in 3D projects.
  • FIG. 1 is a schematic structural diagram of a social interaction system provided by an embodiment of the present application.
  • the architecture diagram includes a computer device 10 and a server 12 , and the computer device 10 and the server 12 communicate through a network.
  • the computer device 10 can establish a communication connection with the server 12 in a wired or wireless manner, and perform data interaction with the server 12 .
  • Computer equipment refers to the equipment used by social objects participating in social interaction, which may include, but is not limited to, smartphones, tablet computers, smart wearable devices, smart voice interaction devices, smart home appliances, personal computers, vehicle-mounted terminals, etc.; this application is not limited thereto.
  • the application does not limit the number of computer devices.
  • the server 12 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms, but is not limited thereto.
  • the computer device 10 refers to the device used by the target social object, and the target social object may refer to any social object participating in social interaction.
  • a social client is installed and run in the computer device 10, and social objects can perform social interaction with other social objects based on the social clients running in their respective computer devices, for example: social object A can use the social client running in the computer device 10 to perform social interaction with social object B; for another example: social object A, social object B, and social object C can use the social clients running in their respective computer devices to perform social interaction.
  • the social client can provide a social service page, the social service page is used to present one or more real-scene rooms, and each real-scene room is used to display a virtual social scene.
  • the target social object can flexibly switch the real scene room to browse different virtual social scenes.
  • the target social object can be in multiple real scene rooms at the same time, and the virtual social scene corresponding to multiple real scene rooms can be displayed on the computer device at the same time.
  • for example, the computer device can display the virtual social scenes corresponding to multiple real-scene rooms through the foreground, the background, and the like, and in each of these real-scene rooms the target social object can conduct social interaction with other social objects.
  • the server 12 may be a background server corresponding to the social client, and is used to manage the social client and provide service support for it.
  • the service support may include, but is not limited to: recommending various real-scene rooms, so as to achieve the purpose of recommending various virtual social scenes; forwarding conversation messages for the social clients participating in social interaction; synchronizing the location information of avatars in virtual social scenes for each social client; and so on.
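  • as an illustration of the position-synchronization support mentioned above, the following minimal sketch shows how a server might forward one avatar's position update to every other client in the same real-scene room; the message fields, function names, and JSON transport are assumptions made for illustration and are not taken from the publication.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AvatarPositionUpdate:
    room_id: str        # identifier of the real-scene room
    object_id: str      # identifier of the social object the avatar represents
    x: float            # avatar position in the 3D scene
    y: float
    z: float
    orientation: float  # facing direction, in degrees

def broadcast_position(update: AvatarPositionUpdate, clients_in_room: dict) -> None:
    """Forward one avatar's position to every other client in the same room."""
    payload = json.dumps(asdict(update))
    for object_id, send in clients_in_room.items():
        if object_id != update.object_id:   # the sender already knows its own position
            send(payload)

# Usage with stub send functions standing in for real client connections.
clients = {"object-a": print, "object-b": print}
broadcast_position(AvatarPositionUpdate("room-1", "object-a", 1.0, 0.0, 2.5, 90.0), clients)
```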
  • the computer device 10 displays the social service page in the social client, and the social service page includes a first real room, which corresponds to the first virtual social scene, that is, the first real room is used to display the first virtual social scene.
  • the first virtual social scene includes one or more virtual props, such as desk virtual props, bench virtual props, high-rise virtual props, traffic virtual props, etc.
  • Different virtual social scenes have different virtual prop layouts, the virtual prop layout can be adjusted according to the scene, and different virtual props can also be bound to different interactive functions.
  • the first real room can be any real room in the social client
  • the first virtual social scene can be any virtual social scene recommended by the server 12 to the target social object
  • the first virtual social scene can include, but is not limited to: a group social conversation scene, a game competition scene, a social operation activity scene, etc. It can be understood that the "real scene" mentioned in the embodiments of the present application is used to describe a virtual room that simulates, or is designed close to, a real scene.
  • the target social object can initiate an interactive operation on the first virtual social scene, for example: clicking a virtual prop in the first virtual social scene, or chatting with an existing avatar in the first virtual social scene, and so on.
  • the target social object can be added to the first virtual social scene through the interactive operation; then, the computer device 10 displays the target avatar corresponding to the target social object in the first virtual social scene; after that, the target avatar represents the target social object to conduct social interaction in the first virtual social scene.
  • the real-scene room in the social service page displayed by the computer device 10 supports switching.
  • the target social object can switch the real-scene room in the social service page through a room switching operation (for example, performing an up-and-down sliding operation on the social service page).
  • for example, the real-scene room can be automatically switched according to the display duration of the real-scene room being displayed on the social service page; for another example, when the real-scene room being displayed on the social service page has had no social interaction operation for a long time, the real-scene room can be automatically switched or closed.
  • in this way, the social service page can dynamically and promptly adjust the recommended content, which not only gives the target social object an interactive content recommendation experience in the form of a video stream, but also guides the target social object to participate in social interaction in more virtual social scenes, thereby increasing the social interaction rate.
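  • a minimal sketch of the automatic switching behaviour described above is given below; the duration and idle thresholds, class name, and timing mechanism are illustrative assumptions rather than details from the publication.

```python
import time

DISPLAY_LIMIT_S = 60.0   # assumed maximum display duration for a browsed real-scene room
IDLE_LIMIT_S = 120.0     # assumed inactivity window before switching or closing the room

class RoomFeed:
    """Tracks the currently displayed real-scene room and decides when to switch."""

    def __init__(self, room_ids):
        self.room_ids = list(room_ids)
        self.index = 0
        self.shown_at = time.monotonic()
        self.last_interaction = time.monotonic()

    def on_interaction(self):
        # Any social interaction operation on the current room resets the idle timer.
        self.last_interaction = time.monotonic()

    def current_room(self):
        now = time.monotonic()
        displayed_too_long = now - self.shown_at > DISPLAY_LIMIT_S
        idle_too_long = now - self.last_interaction > IDLE_LIMIT_S
        if displayed_too_long or idle_too_long:
            # Switch to the next recommended real-scene room, like advancing a video feed.
            self.index = (self.index + 1) % len(self.room_ids)
            self.shown_at = now
            self.last_interaction = now
        return self.room_ids[self.index]

feed = RoomFeed(["room-1", "room-2", "room-3"])
print(feed.current_room())   # room-1 until one of the thresholds is exceeded
```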
  • the first real-scene room can be displayed on the social service page, and the first real-scene room displays the first virtual social scene.
  • the target avatar corresponding to the target social object may be displayed in the first real scene room.
  • the target social object can enter the first real-scene room to participate in social interaction through the target avatar, so that the target social object has more intuitive visual feedback, which enhances the immersion of the target social object and improves the sense of realism and the fun of social interaction.
  • the first real-scene room on the social service page also supports flexible switching.
  • Such a flexible switching method enables the target social object to browse more virtual social scenes conveniently and gives the target social object a content recommendation experience in the form of a video stream; at the same time, it can guide the target social object to participate in social interaction in more virtual social scenes, thereby increasing the social interaction rate.
  • FIG. 2 is a schematic flowchart of a social interaction method provided by an embodiment of the present application, and the method may be executed by a computer device.
  • the method may include, but is not limited to, the following steps:
  • a first real-scene room is displayed on the social service page, and the first real-scene room corresponds to a first virtual social scene, which includes but not limited to: a group social conversation scene, a game competition scene, a social operation activity scene, and the like.
  • the first virtual social scene may include one or more avatars for social interaction, where each avatar is used to represent a social object for social interaction in the first virtual social scene. It can be understood that whether the first virtual social scene contains avatars, and the number of avatars included, depends on the needs of the scene or the actual situation of the scene.
  • for example, the first virtual social scene may include the avatars corresponding to some or all of the social objects in the group, may only include the avatars corresponding to the online social objects in the group, or may only include the avatars respectively corresponding to the social objects in the group who have sent social conversation messages within a preset time period.
  • Each social object can create a real room, which is a virtual 3D room created by social objects, and other social objects or friend objects can enter.
  • the real room can be considered as a 3D real room
  • social objects can chat together, watch videos together, listen to music together, play games together, etc. through avatars.
  • the creator of the real-scene room can also encrypt the room, and other social objects can enter the real-scene room only after entering the correct password.
  • the creator can also set the real-scene room to be closed for a certain period of time; during that period, the real-scene room becomes a private space.
  • when a new social object requests to join, the creator will receive a room-joining reminder request.
  • the room-joining reminder request includes information such as the identity of the newly added social object, and the creator can choose whether to admit the user into the room as required.
  • in addition, if a target social object who wants to join the closed real-scene room completes part or all of the required operations, the joining operation of the target social object can also be responded to automatically, and the target social object is admitted into the real-scene room that is currently a private space.
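  • the admission rules sketched in the preceding paragraphs might be modelled roughly as follows; the room fields, return strings, and the auto-admission check are assumptions for illustration, not details taken from the publication.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RealSceneRoom:
    creator_id: str
    password: Optional[str] = None       # set when the creator encrypts the room
    closed: bool = False                 # True while the room is a private space
    pending_requests: list = field(default_factory=list)

    def request_join(self, object_id: str, password: Optional[str] = None,
                     completed_required_ops: bool = False) -> str:
        # Encrypted room: the correct password is required to enter.
        if self.password is not None and password != self.password:
            return "rejected: wrong password"
        if self.closed:
            # A closed (private) room may auto-admit objects that completed the
            # required operations; otherwise the creator receives a join reminder.
            if completed_required_ops:
                return "admitted automatically"
            self.pending_requests.append(object_id)
            return "join reminder sent to creator"
        return "admitted"

room = RealSceneRoom(creator_id="creator-1", password="1234", closed=True)
print(room.request_join("guest-1", password="1234", completed_required_ops=True))
print(room.request_join("guest-2", password="1234"))
print(room.pending_requests)
```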
  • both the first real-scene room and the virtual image can be displayed in 3D.
  • FIG. 3 is a schematic diagram of a social service page provided by an embodiment of the present application.
  • the social service page 300 displays a first real room
  • the first real room displays a first virtual social scene.
  • in the first virtual social scene, there are three avatars (avatar 301, avatar 302, and avatar 303) for social interaction, and the three avatars respectively represent three social objects.
  • each real-scene room can be presented in the form of an information flow, and the target social object can slide up and down on the information flow (the operation is not limited to this; for example, when the display screen is large, left-and-right sliding is also possible) to browse the various real-scene rooms on the screen.
  • when the target social object browses to the first real-scene room by sliding the screen, the social service page will display the 3D scene in the first real-scene room, the existing avatars, and the actions that the existing avatars are performing or have performed. For example,
  • the 3D scene (the first virtual social scene) presented in the first real-scene room on the social service page includes: two avatars watching TV together while talking, and one of the avatars may also be performing a cleaning action at the same time.
  • during browsing, the target social object can not only view images and videos but also hear audio being played. For example, in the above-mentioned first virtual social scene of watching TV, the two avatars watch TV on behalf of their social objects, the TV is presented in the first real-scene room in the form of a virtual TV prop, and the target social object can also hear the audio content played by the virtual TV prop.
  • the target social object can stay and watch the video content played by touching the virtual TV prop.
  • if the target social object wants to adjust the viewing angle, or wants to enlarge the video content played by the virtual TV prop, the target social object can enter the first real-scene room by touching the virtual TV prop or another virtual prop in the first real-scene room, by touching a certain avatar in the room, by inputting voice or text in the message input field of the first real-scene room, or by adjusting the virtual control on the social service page to adjust the position and orientation of the target avatar corresponding to the target social object in the first real-scene room.
  • when the target social object merely browses the first real-scene room, the target avatar corresponding to the target social object is not displayed in the first real-scene room; only when the target social object initiates an interactive operation on an avatar or a virtual prop does it enter the first real-scene room, that is, the target avatar corresponding to the target social object is displayed in the first real-scene room.
  • when the target social object browses the 3D scene of the first real-scene room, only part of the interactive effects in the first real-scene room can be experienced, for example, watching all or part of the images and all or part of the videos in the 3D scene of the first real-scene room; all of the interactive effects in the first real-scene room can be experienced only after the target avatar of the target social object enters the first real-scene room.
  • S202 In response to receiving an interactive operation on the first virtual social scene, display a target avatar corresponding to the target social object in the first virtual social scene.
  • the target social object is an object controlled by the terminal that displays the social service page.
  • when the target social object actively triggers an interactive operation for the first virtual social scene, it means that the target social object is interested in the first virtual social scene and wants to join it for social interaction; in this case, the target avatar corresponding to the target social object is displayed in the first real-scene room.
  • if the target social object does not trigger an interactive operation on the first virtual social scene, the target social object acts as a bystander and views the conversation messages input by multiple parties, for example, the social conversation flow 310 shown in FIG. 3; at the same time, the target social object can also view the avatars of one or more social objects displayed in the page area 3051.
  • the interactive operation of the target social object on the first virtual social scene may be an operation of controlling a virtual control widget.
  • a virtual control widget is displayed in the first real-scene room, and in response to receiving a trigger operation on the virtual control widget in the first real-scene room, the target avatar corresponding to the target social object is displayed in the first virtual social scene.
  • for example, a virtual control widget 3011 is displayed in the first real-scene room.
  • the virtual control widget 3011 can be displayed at any position in the first real-scene room, and can also be displayed at any position on the social service page.
  • the display form of the virtual control widget 3011 can be varied, which is not limited here.
  • the displacement movement in the room refers to the virtual movement of the avatar on the page or in the first virtual social scene.
  • the computer device receives the interactive operation of the target social object on the first virtual social scene, so that the target avatar corresponding to the target social object can be displayed in the first real scene room.
  • the trigger operation for the virtual control widget can be, for example, a long-press operation on the virtual control widget: the longer the duration of the long-press operation received on the virtual control widget, the longer the distance of the displacement movement performed by the target avatar after entering the first real-scene room.
  • the target social object can control the avatar to move to a designated position by triggering the virtual control widget.
  • by default, the moving mode of the target avatar in the first real-scene room can be displayed as walking or running. It can be understood that the present application does not limit the moving manner of the target avatar in the first real-scene room. It can be understood that the process of controlling the movement of the avatar can also be implemented through other operations, such as gesture operations, voice operations, and so on.
  • for example, the purpose of controlling the movement of the avatar can also be achieved by sliding a joystick control in multiple directions, such as activating the joystick control to control the orientation of the avatar, sliding left, right, up, and down to control the avatar to move continuously in the corresponding direction, and stopping the movement when the touch on the joystick control is released.
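  • a rough sketch of the two control schemes just described follows: a long-press duration mapped to a displacement distance, and a joystick direction mapped to a per-tick movement step; the scaling constants and helper names are illustrative assumptions.

```python
DISTANCE_PER_SECOND = 0.8   # assumed displacement gained per second of long press
STEP = 0.1                  # assumed per-tick joystick step size

def displacement_from_long_press(press_duration_s: float) -> float:
    """The longer the long press on the virtual control widget, the longer the move."""
    return press_duration_s * DISTANCE_PER_SECOND

def move_with_joystick(position, direction):
    """Shift the avatar one step in the joystick direction (left/right/up/down)."""
    dx, dy = {"left": (-STEP, 0.0), "right": (STEP, 0.0),
              "up": (0.0, STEP), "down": (0.0, -STEP)}[direction]
    return (position[0] + dx, position[1] + dy)

print(displacement_from_long_press(2.5))   # 2.0 units after a 2.5 s press
pos = (0.0, 0.0)
for direction in ["right", "right", "up"]:  # joystick held through three ticks
    pos = move_with_joystick(pos, direction)
print(pos)                                  # movement stops once the touch is released
```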
  • the interaction operation of the target social object with respect to the first virtual social scene may be an operation of sending a social conversation message with respect to the first virtual social scene.
  • the computer device receives the interactive operation of the target social object for the first virtual social scene, thereby displaying the target avatar corresponding to the target social object in the first real room .
  • the social conversation flow of the first virtual social scene can be displayed on the social service page, and the social conversation flow can include one or more social conversation messages generated when multiple avatars interact socially.
  • the conversation stream includes social conversation messages generated by social objects in the first virtual social scene.
  • the social conversation message may be any one or a combination of text, emoticon, voice, video, image, link and the like.
  • the social service page may include a message input column, which is used to input a social conversation message; when the social conversation message is sent, it may be displayed in the social conversation flow. Optionally, the sent social conversation message may also be displayed around the avatar corresponding to the social object that sent it, that is, when a social conversation message sent by the target social object in the first virtual social scene is received, the social conversation message is displayed around the target avatar in the first real-scene room.
  • the surroundings of the target avatar may refer to, for example, that the distance between the area where the social conversation messages are displayed and the area where the target avatar is located is within a preset distance range, and the distance between the area where the social conversation messages are displayed and the area where the target avatar is located It may refer to the distance between the center points of the two regions, or may refer to the distance between the closest edges between the two regions, and so on.
  • the social service page 300 includes a social conversation flow 310 and a message input column 320.
  • the target social object can input a social conversation message in the message input column 320.
  • when the social conversation message is sent, the target avatar corresponding to the target social object will be displayed in the first real-scene room; additionally, the social conversation message will be displayed in the social conversation flow 310.
  • the social conversation message sent by the target social object can also be displayed at a preset position around the target avatar corresponding to the target social object, as shown in FIG. 3.
  • the social conversation message sent by the target social object (for example, Xiao Li), namely the three words "I'm here", is synchronously displayed around the target avatar 3012 corresponding to Xiao Li in the first virtual social scene (a text box with the content "I'm here" is shown above the target avatar 3012).
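  • the "around the target avatar" placement could be checked with something like the sketch below; the preset distance, the fixed bubble offset, and the centre-to-centre distance metric are assumptions made for illustration.

```python
import math

PRESET_DISTANCE = 1.5   # assumed maximum centre-to-centre distance for "around the avatar"

def bubble_position(avatar_center, offset=(0.0, 0.6)):
    """Place the message text box at a fixed offset above the avatar's centre."""
    return (avatar_center[0] + offset[0], avatar_center[1] + offset[1])

def is_around_avatar(bubble_center, avatar_center) -> bool:
    """The text box counts as 'around' the avatar if the centre distance is in range."""
    return math.dist(bubble_center, avatar_center) <= PRESET_DISTANCE

avatar = (2.0, 1.0)
bubble = bubble_position(avatar)
print(bubble, is_around_avatar(bubble, avatar))   # (2.0, 1.6) True
```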
  • a keyboard operation panel can also be displayed on the social service page, and the target social object can edit text, emoticons, voice and other social conversation messages through the keyboard operation panel.
  • the keyboard operation panel can be unfolded or folded. When the keyboard operation panel is unfolded, it is convenient for the target social object to edit social conversation messages; when the keyboard operation panel is collapsed, the displayable area of the first virtual social scene is larger, which is convenient for the target social object to perform social interaction in the first virtual social scene.
  • for example, the keyboard operation panel 305 shown in (3) in FIG. 3 can also be displayed in the social service page 300, and the target social object can edit text, emoticons, voice, and other social conversation messages through the keyboard operation panel 305.
  • the keyboard operation panel 305 can be displayed unfolded or folded, and (4) in FIG. 3 shows a schematic diagram of a folded display of the keyboard operation panel.
  • the social service page may also include a message navigation indicator. When the message navigation indicator is selected, a conversation message list may be displayed, which shows the social conversation messages sent by other social objects to the target social object.
  • the social service page 300 also includes a message navigation indicator 330 , and when the message navigation indicator 330 is selected, the conversation message list 304 shown in (2) in FIG. 3 can be entered.
  • the target avatar can be controlled to execute a set of object actions corresponding to the target interaction content.
  • the target interaction content can be, for example, a preset interaction content, or it can refer to some specified types interactive content.
  • the preset interactive content refers to a preset instruction tag, or the content contained in the preset semantic library.
  • the instruction tag is a symbol or command word set by the target social object to simplify the instruction content.
  • for example, the target social object can set the instruction tag "%" to indicate the command "jump in place"; if the target social object sends the social conversation message "%", the command "jump in place" is recognized, and the target avatar is then controlled to perform the "jump in place" object action.
  • the preset semantic library contains avatar animations or expressions corresponding to semantics. If the social conversation message sent by the target social object hits the semantic library, the target avatar can be controlled to execute the corresponding object action; for example, if the target social object sends the social conversation message "kiss", and "kiss" is content contained in the preset semantic library, the target avatar can be controlled to blow a kiss.
  • in addition, if multiple social objects send the same social conversation message, the avatars corresponding to the multiple social objects that sent the social conversation message can jointly trigger a corresponding collective object action.
  • for example, if four social objects all send a cheering message, the avatars corresponding to the four social objects will be controlled to jointly execute the object action corresponding to the cheering animation.
  • the social interaction modes of the target avatar in the first real-scene room are enriched, which is conducive to enhancing the enthusiasm of social objects to participate in the interaction.
  • in some embodiments, the first virtual social scene already contains M avatars for social interaction; in this case, the received interactive operation on the first virtual social scene may be an interactive operation initiated on a reference avatar among the M avatars in the first virtual social scene.
  • the reference avatar can be any one of the M avatars, or a special avatar; for example, the reference avatar can be the avatar corresponding to the social object that created the first real-scene room.
  • in response to the interactive operation initiated by the target social object on the reference avatar among the M avatars, a target avatar corresponding to the target social object is displayed in the first real-scene room.
  • that is, the computer device receives the interactive operation of the target social object on the first virtual social scene, so that it can display the target avatar corresponding to the target social object in the first real-scene room.
  • the target avatar may be an avatar customized by the target social object.
  • when an interactive operation is initiated on the reference avatar, an interaction panel can be triggered to be displayed, and the interaction panel can contain multiple interaction options, such as selecting an action or viewing information; when the target social object selects any one of the interaction options, the target avatar corresponding to the target social object can be displayed in the first real-scene room. Please refer to FIG. 4.
  • FIG. 4 is a schematic diagram of a target social object initiating an interactive operation on an avatar provided by an embodiment of the present application.
  • the interaction panel 307 corresponding to the avatar 306 is displayed on the social service page.
  • the interaction panel 307 includes three interaction options: viewing information, saying hello, and giving gifts.
  • when the target social object selects the interaction option "saying hello", the avatar 306 executes the object action corresponding to "saying hello".
  • the avatar 308 corresponding to the target social object is displayed in the first real scene room.
  • the interactive operation of the target social object on the first virtual social scene may be a trigger operation on a virtual prop; that is, when the target social object interacts with a virtual prop, the computer device receives the interactive operation of the target social object on the first virtual social scene and thereby displays the target avatar corresponding to the target social object in the first real-scene room. The trigger operation of the target social object on the virtual prop includes any one of a click trigger operation, a voice trigger operation, and a gesture trigger operation.
  • the first virtual social scene includes a virtual prop such as a TV, and a target social object clicks on the TV to initiate an interactive operation on the first virtual social scene.
  • the virtual props in the first virtual social scene can be classified according to the interaction attributes.
  • the virtual props refer to virtual items with specific interaction capabilities in the real-scene room, and each virtual prop has its own interaction attributes.
  • the virtual props in the first virtual social scene can be virtual props of the first type, and the interaction attributes of the first type of virtual props can be used to indicate scene form change information of the first virtual social scene; if a first-type virtual prop is triggered, the scene form of the first virtual social scene will be updated according to the interaction attributes of that virtual prop.
  • scene form change information includes, but is not limited to, changes in the video playback ratio in the scene, changes in the audio volume, changes in the lighting display, and the like.
  • for example, the first type of virtual prop can be a virtual TV prop in the first virtual social scene.
  • when the virtual TV prop is triggered, it can trigger full-screen playback of the video in the first virtual social scene.
  • the first type of virtual prop can also be a virtual audio prop in the first virtual social scene; if the virtual audio prop is triggered, the volume of the audio in the first virtual social scene can be adjusted.
  • the first type of virtual prop can also be a virtual lamp prop in the first virtual social scene; if the virtual lamp prop is triggered, the light source in the first virtual social scene can be turned on or off.
  • the virtual props in the first virtual social scene can also be virtual props of the second type, and the interaction attributes of the second type of virtual props can be used to indicate the object actions that one or more avatars in the first virtual social scene should perform.
  • if the second type of virtual prop is triggered, one or more avatars in the first virtual social scene can be controlled to perform corresponding object actions according to the interaction attributes of the second type of virtual prop.
  • for example, the second type of virtual prop may be a virtual seesaw prop in the first virtual social scene; the virtual seesaw prop needs to be triggered by two avatars, and after any two avatars are designated in the first virtual social scene to trigger the virtual seesaw prop, the designated two avatars can be controlled to perform the object action of playing on the virtual seesaw.
  • the virtual props in the first virtual social scene can also be virtual props of the third type; the interaction attribute of the third type of virtual prop is used to indicate the feedback operation supported by the triggered virtual prop, and the third type of virtual prop can refer to a prop that is not bound to an interactive function or a specific object action.
  • when such a prop is triggered, the owner of the prop can receive a feedback message of the corresponding operation; for example, the owner of the prop can be the creator of the first real-scene room.
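  • the three interaction-attribute categories could be modelled roughly as in the sketch below; the class names, the "kind" field, and the handler behaviour are illustrative assumptions rather than details from the publication.

```python
from dataclasses import dataclass

@dataclass
class VirtualProp:
    name: str
    kind: str        # assumed: "scene_change", "object_action", or "feedback"
    attribute: dict  # the prop's interaction attribute

def trigger_prop(prop: VirtualProp, scene: dict, avatars: list, notify_owner) -> None:
    if prop.kind == "scene_change":
        # First type: update the scene form (playback ratio, volume, lighting, ...).
        scene.update(prop.attribute)
    elif prop.kind == "object_action":
        # Second type: make the indicated avatars perform the bound object action.
        for avatar in prop.attribute.get("avatars", avatars):
            print(f"{avatar} performs {prop.attribute['action']}")
    else:
        # Third type: no bound function or object action; just feed an operation
        # message back to the prop's owner (e.g. the creator of the real-scene room).
        notify_owner(f"{prop.name} was touched")

scene_state = {"volume": 3}
tv = VirtualProp("virtual TV", "scene_change", {"video_fullscreen": True})
seesaw = VirtualProp("virtual seesaw", "object_action",
                     {"avatars": ["avatar-A", "avatar-B"], "action": "play_on_seesaw"})
painting = VirtualProp("wall painting", "feedback", {})

trigger_prop(tv, scene_state, [], print)
trigger_prop(seesaw, scene_state, [], print)
trigger_prop(painting, scene_state, [], print)
print(scene_state)   # volume kept, video_fullscreen added by the first-type prop
```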
  • in some embodiments, in response to receiving an interactive operation for the first virtual social scene, the display attribute of the target avatar may also be obtained first. The display attribute of the target avatar may be a hidden attribute or an explicit attribute: if the display attribute of the target avatar is the hidden attribute, the target avatar is displayed in a transparent state in the first real-scene room; if the display attribute of the target avatar is the explicit attribute, the target avatar is displayed in an opaque state in the first real-scene room.
  • the display attribute of the target avatar can be set by the target social object, and the setting parameters of the display attribute can be configured with a display duration; if the display duration corresponding to the currently set display attribute of the target avatar reaches a preset threshold, the display attribute can be switched to the other attribute.
  • for example, the display attribute of the target avatar can be switched from the hidden attribute to the explicit attribute after 1 minute, that is, the target avatar is switched from the transparent display state to the non-transparent display state in the first real-scene room.
  • the display attribute of the target avatar can be bound to the virtual prop.
  • for example, the hidden attribute is bound to a virtual hiding prop: if the target avatar carries the virtual hiding prop, its display attribute is the hidden attribute and the target avatar is displayed in a transparent state in the first real-scene room; if the target avatar does not carry the virtual hiding prop, the target avatar is displayed in the explicit state by default in the first real-scene room.
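  • a small sketch combining the duration-based switch and the hidden-prop binding follows; the one-minute threshold comes from the example above, while the class name, field names, and timing mechanism are assumptions.

```python
import time

SWITCH_AFTER_S = 60.0   # switch to the other display attribute after 1 minute (example above)

class TargetAvatar:
    def __init__(self, carries_hiding_prop: bool = False):
        # The hidden attribute can be bound to carrying a virtual hiding prop.
        self.display_attribute = "hidden" if carries_hiding_prop else "explicit"
        self.attribute_set_at = time.monotonic()

    def maybe_switch_attribute(self) -> None:
        # Once the configured display duration is reached, flip to the other attribute.
        if time.monotonic() - self.attribute_set_at >= SWITCH_AFTER_S:
            self.display_attribute = (
                "explicit" if self.display_attribute == "hidden" else "hidden")
            self.attribute_set_at = time.monotonic()

    def render_state(self) -> str:
        # Hidden attribute -> transparent display; explicit attribute -> opaque display.
        return "transparent" if self.display_attribute == "hidden" else "opaque"

avatar = TargetAvatar(carries_hiding_prop=True)
print(avatar.display_attribute, avatar.render_state())   # hidden transparent
```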
  • the first real-scene room can be displayed on the social service page, and the first real-scene room displays the first virtual social scene.
  • the target avatar corresponding to the target social object may be displayed in the first real scene room.
  • the target social object can enter the first real-scene room to participate in social interaction through the target avatar, so that the target social object has more intuitive visual feedback, which enhances the immersion of the target social object and improves the sense of realism and the fun of social interaction.
  • the method provided in this embodiment displays the social conversation messages generated by the social objects in the first virtual social scene by displaying the social conversation flow on the social service page, without requiring other viewing operations to be triggered to view the social conversation flow, which improves the efficiency of human-computer interaction.
  • the method provided in this embodiment triggers the target avatar to enter the first virtual social scene through the virtual control widget, and directly controls the target avatar to enter the first virtual social scene when the target social object interacts with the first real-scene room, which improves the efficiency of social interaction.
  • in the method provided in this embodiment, when the target social object initiates an interactive operation on a social object in the first real-scene room, the target avatar is controlled to enter the first virtual social scene, which avoids the need for additional operations to control the target avatar to enter the first virtual social scene and improves the efficiency of social interaction.
  • the method provided in this embodiment controls the target avatar to enter the first virtual social scene when a trigger operation on a virtual prop in the first virtual social scene is received, which avoids the need for additional operations to control the target avatar to enter the first virtual social scene and improves the efficiency of social interaction.
  • in the method provided in this embodiment, when the target social object sends a social conversation message in the first virtual social scene, the target avatar is controlled to enter the first virtual social scene, which, on the one hand, enriches the interaction between the target social object and the first real-scene room and improves the efficiency of human-computer interaction in social interaction;
  • on the other hand, the social conversation message is displayed around the target avatar, so that other avatars in the first virtual social scene can quickly learn the social message sent by the target social object, which improves the effectiveness and efficiency of information transmission.
  • judging whether the social conversation message contains target interaction content may be achieved through the steps shown in FIG. 5:
  • the target social object can input a social conversation message on the social service page, and the input social conversation message can be a text message or a voice message, wherein the voice message can be a real-time voice or a recording.
  • the social conversation message input by the target social object is converted into character semantics.
  • other social conversation messages appearing in the social conversation flow may also be converted at the same time.
  • these social conversation messages will be uniformly converted into character semantics.
  • subsequent judgments on a social conversation message are made by judging its character semantics; for example, whether the social conversation message contains an instruction tag is judged by whether the character semantics contain the instruction tag, whether the social conversation message hits the semantic library is judged by whether the character semantics hit the semantic library, and whether the social conversation message is a multi-object conversation is judged by whether the character semantics constitute a multi-object conversation.
  • the instruction tag is a symbol or command word set by the target social object to simplify the instruction content, and the content of the social conversation message input by the target social object may contain a preset instruction tag.
  • the props library can contain the mapping relationship between instruction tags and object actions. If the social conversation message input by the target social object contains an instruction tag, it can be further judged whether the object action corresponding to the instruction tag is bound in the props library: when the judgment result of S34 is yes, the following S35 is executed, and when the judgment result of S34 is no, the following S36 is executed.
  • the target avatar can execute the corresponding object action.
• a prompt message can be returned to remind the target social object that no object action corresponding to the instruction mark was recognized, so as to guide the target social object to re-enter the message.
• the social conversation message is then processed, and it is judged whether its character semantics hit the social-behavior semantic library, that is, whether the semantic library contains an avatar motion or expression corresponding to the semantics.
  • the semantic library can contain semantically corresponding object actions, such as avatar dynamics or expressions. If the social conversation message does not contain instruction tags, it can be judged whether the social conversation message hits the semantic library.
• the step of judging whether the social conversation message hits the semantic library and the step of judging whether it contains an instruction mark can be executed in either order; that is, it is also possible to first judge whether the message hits the semantic library and then judge whether it contains an instruction mark.
  • Multi-object conversation refers to the associated conversation messages generated by the communication of multiple social objects. For example, a social conversation message in which multiple social objects enter the same content can be judged as a multi-object conversation.
• conversation messages generated by multiple social objects replying to the same topic can also be judged as a multi-object conversation.
• if the social conversation message is judged to be a multi-object conversation message, it is further judged (S39) whether the number of social objects sending the message reaches the threshold for a multi-object action.
• when the judgment result of S39 is yes, the following S40 is executed; when it is no, processing follows normal object actions, for example S41 is executed, or no object action is performed.
  • the avatars corresponding to the social objects participating in the multi-object conversation can jointly trigger collective object actions. For example, if more than four social objects send "haha", the avatars corresponding to the four social objects participating in the multi-object conversation can jointly trigger the object action corresponding to "haha" in the semantic database.
• if the social conversation message hits the semantic library and is not a multi-object conversation message, the target avatar can be directly controlled to perform the object action corresponding to the semantics.
• if the social conversation message does not hit the semantic library, the message is displayed in the social conversation stream and the target avatar no longer performs an object action.
  • the content of the social conversation message can be displayed through the object action of the target avatar, which not only enriches the presentation result of the social conversation message, but also enriches the display effect of the avatar.
• the method provided in this embodiment controls the target avatar to perform the corresponding object action when the conversation message sent by the target social object hits a keyword in the semantic library, which expresses the semantics of the message more intuitively and improves the efficiency and effectiveness of information delivery.
  • the method provided in this embodiment controls multiple target avatars to trigger collective object actions when conversation messages sent by multiple social objects hit keywords in the semantic library, thereby improving social diversity.
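To make the FIG. 5 flow concrete, the following is a minimal TypeScript sketch of the message-dispatch decision (instruction mark checked against the prop library, semantic-library hit, multi-object threshold). All names here (SemanticEngine, dispatchMessage, the threshold of 4) are illustrative assumptions, not part of the patent.

```typescript
type ObjectAction = string;

interface SemanticEngine {
  knownMarks: Set<string>;                          // instruction marks the object defined (e.g. "%")
  boundActions: Map<string, ObjectAction>;          // prop library: mark -> bound object action
  semanticLibrary: Map<string, ObjectAction>;       // keyword -> avatar motion/expression
}

type DispatchResult =
  | { kind: "action" | "collective-action"; action: ObjectAction }
  | { kind: "unrecognized" | "plain-message" };

function dispatchMessage(
  text: string,
  engine: SemanticEngine,
  sendersByKeyword: Map<string, Set<string>>,       // who recently sent each keyword (multi-object check)
  senderId: string,
  collectiveThreshold = 4
): DispatchResult {
  const semantics = text.trim();                    // S32: voice input would be transcribed first
  const mark = [...engine.knownMarks].find(m => semantics.includes(m));
  if (mark) {                                       // S33: the message carries an instruction mark
    const action = engine.boundActions.get(mark);   // S34: is an operation bound in the prop library?
    return action ? { kind: "action", action }      // S35: perform the bound object action
                  : { kind: "unrecognized" };       // S36: prompt the object to re-enter
  }
  const action = engine.semanticLibrary.get(semantics); // S37: does the message hit the semantic library?
  if (!action) return { kind: "plain-message" };        // S42: sink into the conversation stream
  const senders = sendersByKeyword.get(semantics) ?? new Set<string>();
  senders.add(senderId);
  sendersByKeyword.set(semantics, senders);
  return senders.size >= collectiveThreshold             // S38/S39: enough participants for a group action?
    ? { kind: "collective-action", action }               // S40: avatars act together
    : { kind: "action", action };                         // S41: a single avatar acts
}
```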
  • the judgment of the prop attribute of the virtual prop and the interactive content triggered by the prop attribute can be realized through the steps shown in FIG. 6 :
  • the target social object triggers the virtual prop by clicking on the virtual prop.
  • the computer device needs to judge the attributes of the virtual props, so as to trigger the operation entry and link corresponding to the virtual props.
• the first type of virtual props are scene-interactive props. After a first-type virtual prop is triggered, the scene-form change information of the virtual social scene can be changed.
• first-type virtual props can be bound to specific interactive behaviors; clicking a first-type virtual prop can trigger its corresponding behavior link to change the scene-form change information of the virtual social scene.
• the second type of virtual props refers to props that can only be triggered with the intervention of an avatar, and they can be divided into single-object interactive props and multi-object interactive props.
• by triggering a second-type virtual prop, one or more avatars in the virtual social scene can be controlled to perform the corresponding object actions. If the judgment result of S54 is that the prop is a multi-object interactive prop, S55 is executed; if it is not a multi-object interactive prop, it is a single-object interactive prop and the following S56 is executed.
• if the virtual prop is a multi-object interactive prop of the second type, multiple avatars can be designated to trigger it.
• any avatar in the virtual social scene can be designated to trigger the prop.
• the third type of virtual props can refer to props that are not bound to specific interactive behaviors; triggering a third-type virtual prop displays the feedback operation options it supports, such as like and comment options.
  • the owner of the third type of virtual prop can receive corresponding operation message feedback, and the object model and rendering effect of the third type of virtual prop can be changed or not affected in any way.
• S58: Trigger a feedback operation supported by the third-type virtual prop; the prop owner receives a feedback message for the corresponding operation.
• the scene-form change of the first virtual social scene is displayed, which increases the interaction between the target avatar and the virtual social scene.
• the method provided in this embodiment shows the process of one or more avatars performing object actions, which improves the interactivity between virtual objects and the virtual social scene.
• the feedback action on the triggered third-type virtual prop is displayed, which improves the diversity of interaction between avatars and virtual props in the virtual social scene.
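The three prop categories of FIG. 6 can be pictured as a simple dispatch on the prop's interactive attribute. The sketch below is an illustrative TypeScript outline under assumed names (VirtualProp, Scene, Feedback); it is not the patent's implementation.

```typescript
type PropKind = "scene" | "avatar" | "feedback";           // first / second / third type of virtual prop

interface VirtualProp {
  id: string;
  kind: PropKind;
  sceneChange?: (scene: Scene) => void;                    // first type: mutates scene form (lights, volume...)
  requiredAvatars?: number;                                 // second type: 1 = single-object, >1 = multi-object
  feedbackOptions?: string[];                               // third type: e.g. ["like", "comment"]
}

interface Scene {
  applyChange(change: (s: Scene) => void): void;
  playAction(avatarIds: string[], propId: string): void;
}
interface Feedback { notifyOwner(propId: string, op: string): void; }

function onPropTriggered(prop: VirtualProp, scene: Scene, feedback: Feedback,
                         avatarIds: string[], op?: string): void {
  switch (prop.kind) {
    case "scene":                                           // S52/S53: update the scene form
      if (prop.sceneChange) scene.applyChange(prop.sceneChange);
      break;
    case "avatar": {                                        // S54-S56: one or several avatars act together
      const n = prop.requiredAvatars ?? 1;
      scene.playAction(avatarIds.slice(0, n), prop.id);
      break;
    }
    case "feedback":                                        // S57/S58: like/comment, owner gets a message
      if (op && prop.feedbackOptions?.includes(op)) feedback.notifyOwner(prop.id, op);
      break;
  }
}
```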
  • displaying the target avatar corresponding to the target social object in the first real scene room can be realized through the steps shown in FIG. 7:
• a model container corresponding to the target avatar is generated at the default initial generation point.
  • the computer device can generate a model container corresponding to the target avatar at the default initial point in the first real scene room, for example, a Meshbox can be used as the container of the avatar, and the Meshbox is a container for placing the model in the 3D engineering scene.
• the target avatar may not be rendered for display at first; only the position of the container is reserved, that is, the target avatar is displayed transparently.
  • the model containers corresponding to the avatars can be generated at the same default initial generation point.
• a virtual control widget can be displayed in the first real-scene room and can be used to control the displacement movement of the avatar within the room. If the computer device detects that the target social object triggers the virtual control widget, it can be regarded as the target social object actively triggering the display of the target avatar in the first real-scene room.
• the target social object can initiate an interactive operation on any avatar by clicking the corresponding avatar, or by triggering an object action on it, such as selecting the avatar to greet it or give it a gift. If the computer device detects that the target social object initiates an interactive operation on any avatar, it can be regarded as the target social object actively triggering the display of the target avatar in the first real-scene room.
• the social service page may contain a message input field; the target social object edits a social conversation message in the message input field and then sends it. If the computer device detects that the target social object sends a social conversation message, it can be regarded as the target social object actively triggering the display of the target avatar in the first real-scene room.
• the first virtual social scene can contain a variety of virtual props, and the target social object can trigger a virtual prop by clicking it. If the computer device detects that the target social object triggers a virtual prop, it can be regarded as the target social object actively triggering the display of the target avatar in the first real-scene room.
  • the virtual hidden props can be set with corresponding time parameters, and when the preset time is reached, the virtual hidden props can be regarded as invalid.
  • S76 Start rendering and displaying the target virtual image in the model container corresponding to the target virtual image.
• the target avatar can then be rendered and displayed in the model container corresponding to the target avatar, that is, the target avatar is displayed as opaque.
• if display is not actively triggered, the target avatar continues to be displayed transparently; in this way, the object information of the target social object can be effectively protected.
• FIG. 8 is a schematic diagram of transparent display and non-transparent display of a target avatar provided by an embodiment of this application, where (1) in FIG. 8 is a schematic diagram of the transparent display of the target avatar and (2) in FIG. 8 is a schematic diagram of the non-transparent display of the target avatar.
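The reserve-then-render behaviour of FIG. 7 and FIG. 8 might be organised as below. This is a hedged TypeScript sketch; the container API (AvatarContainer, RealSceneRoom) is invented for illustration and stands in for whatever the 3D engine provides as a model container such as a Meshbox.

```typescript
interface Vec3 { x: number; y: number; z: number; }

class AvatarContainer {
  visible = false;                                          // transparent placeholder until first interaction
  constructor(public avatarId: string, public position: Vec3) {}
  render(): void { this.visible = true; }                   // S76: start rendering inside the reserved container
}

class RealSceneRoom {
  private containers = new Map<string, AvatarContainer>();
  constructor(private spawnPoint: Vec3) {}

  // S71: create the container at the default generation point; the avatar stays transparent.
  admit(avatarId: string): AvatarContainer {
    const c = new AvatarContainer(avatarId, { ...this.spawnPoint });
    this.containers.set(avatarId, c);
    return c;
  }

  // S72-S75: any active trigger (control widget, avatar interaction, message, prop) reveals the avatar.
  onActiveTrigger(avatarId: string): void {
    this.containers.get(avatarId)?.render();
  }
}
```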
  • the target avatar can be displayed based on a default virtual viewing angle in the first real-scene room.
• the scene content of the first virtual social scene can be updated according to changes in the virtual viewing angle.
• updating the scene content of the virtual social scene in the first real-scene room occurs in any of the following situations:
• the focal length of the virtual perspective changes; here, the change may be caused by the target social object corresponding to the target avatar performing a focus adjustment operation. For example, the target social object may perform a gesture zoom operation on the social service page displaying the first virtual social scene to adjust the focal length of the virtual perspective, and following the zoom operation the page content is enlarged or reduced.
• the focus adjustment operation for the virtual perspective can be a gesture zoom operation or a trigger operation on a focus adjustment control; there is no limitation on this.
  • the angle of the virtual viewing angle changes; the so-called change in the angle of the virtual viewing angle can be generated after controlling the virtual image to perform a rotation action within the angle range.
• a social service page can include an angle adjustment control. By triggering the angle adjustment control, the avatar can be controlled to perform a rotation action through 360 degrees, and correspondingly the scene content in the virtual scene can be updated and displayed following the change of the avatar's rotation.
• the position of the target avatar changes; when the avatar performs displacement movement in the first virtual scene, the scene content of the virtual social scene in the first real-scene room can also be updated and displayed following the avatar's moving position. For example, when the target avatar moves to an activity area, the scene content of the virtual social scene corresponding to the activity area can be updated and displayed.
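The three viewing-angle changes could map onto camera-state updates as in the following illustrative sketch; the field names (fov, yaw, position) are assumptions, and a real client would drive its 3D engine's camera rather than this plain data structure.

```typescript
interface Camera { fov: number; yaw: number; position: { x: number; y: number; z: number }; }

type ViewChange =
  | { type: "zoom"; scale: number }       // pinch gesture changes the focal length
  | { type: "rotate"; degrees: number }   // angle-adjustment control rotates the view
  | { type: "move"; dx: number; dz: number }; // avatar displacement drags the view along

function applyViewChange(cam: Camera, change: ViewChange): Camera {
  switch (change.type) {
    case "zoom":
      return { ...cam, fov: cam.fov / change.scale };
    case "rotate":
      return { ...cam, yaw: (cam.yaw + change.degrees) % 360 };
    case "move":
      return { ...cam, position: { ...cam.position, x: cam.position.x + change.dx, z: cam.position.z + change.dz } };
  }
}
// After each change the client would re-query which scene content falls inside the camera view and redraw it.
```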
  • the first real scene room can be a theme customized room, and the theme can be, for example, a music theme, a game theme, a sports theme, etc.
• after the target avatar corresponding to the target social object is displayed in the first real-scene room, options related to the customized theme can be output: for example, an option to choose songs to play can be output for a music theme, an option to select game characters for a game theme, and an option to select sports teams for a sports theme. The options related to the customized theme can be displayed on the social service page. Different themes may differ in background (including colors, decorations, etc.), virtual prop layout, and other aspects.
• the display form of the target avatar may also be updated to a form matching the selected option.
• the display form of the target avatar may refer to the attire of the target avatar in the first virtual social scene; after an option related to the customized theme is triggered, the attire of the target avatar may be updated to match the theme.
• for example, if the first real-scene room is a sports-themed room, an option to select a sports team can be output, and when the target social object selects a sports team, the attire of the target avatar can be replaced with that of the selected team. In this way, the display form of the target avatar is enriched, and the experience of the target social object is enhanced.
  • FIG. 9 is a schematic diagram of a theme customized room provided by the embodiment of the present application.
  • the first real scene room may be a sports themed room.
• for the target avatar 309, an option 3110 for selecting a sports team can be output; when the target social object selects a sports team, the target avatar can put on the corresponding team uniform. As shown in (2) in FIG. 9, the target avatar 313 is shown wearing the uniform of the selected sports team.
  • the first real scene room also contains virtual TV props 311 and virtual sofa props 312.
• when the target social object triggers the virtual TV prop 311, the screen content in the virtual TV prop 311 can be zoomed in and viewed, and the target avatar can enter the viewing area in the first real-scene room to watch and communicate with other avatars.
• when the target social object triggers the virtual sofa prop, the target avatar can be controlled to move to the virtual sofa prop to rest.
• after the target avatar corresponding to the target social object is displayed in the first real-scene room, the target avatar can be controlled to perform displacement movement.
  • the first virtual social scene can include a social activity area.
• when the target avatar moves into the social activity area, the activity page of the social activity can be displayed, so that the target avatar can join the social activity through the activity page.
  • the activity page of the social activity may include an activity operation control corresponding to the social activity, and by triggering the activity operation control, the target avatar may be controlled to perform social activities on the activity page.
  • the social activity area can be a shooting area.
  • an activity page of the shooting area can be displayed, and the activity page of the shooting area can contain an activity operation control for controlling the target avatar to perform a shooting action.
• by triggering the activity operation control, the target avatar can be controlled to shoot a basketball.
  • the social activity area can also display avatars corresponding to other social objects participating in social activities, and the target social object can compete with other social objects. In this way, the fun of interaction between social objects can be further enhanced.
  • the social activity area can be, for example, the viewing area 401 in FIG. 10(1).
• the first virtual scene may further include a social activity entrance, which may be an activity link corresponding to a social activity page; when the social activity entrance is triggered, the activity page of the social activity can also be displayed.
  • the entrance of the social activity may be displayed in the social conversation flow of the first virtual scene, and may also be bound to a specific virtual prop.
• for example, the entrance corresponding to a shooting activity can be bound to a virtual basketball prop, and by triggering the virtual basketball prop, the activity page of the shooting activity can be displayed.
  • FIG. 10 is a schematic diagram of an activity page of a social activity provided by the embodiment of the present application.
• the content played in the viewing area can be the screen content in the virtual TV prop 311 displayed after the virtual TV prop 311 is triggered, as shown in (1) in FIG. 10.
• the entrance 314 of the social activity can be an activity link shared in the social conversation stream; by triggering the entrance 314, the activity page 315 of the social activity shown in (2) in FIG. 10 can be displayed.
  • the activity page 315 of the social activity may also be displayed after the target avatar moves to the social activity area in the first real scene room.
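Both entry paths into a social activity (walking into the activity area, or tapping a bound entrance prop) can be expressed as in this illustrative sketch; the rectangle geometry, prop-to-entrance map, and page-opening callback are assumptions and not specified by the patent.

```typescript
interface Rect { x: number; z: number; w: number; d: number; }
interface ActivityArea { id: string; bounds: Rect; pageUrl: string; }

function contains(r: Rect, x: number, z: number): boolean {
  return x >= r.x && x <= r.x + r.w && z >= r.z && z <= r.z + r.d;
}

// Path 1: the avatar's displacement brings it inside a social activity area.
function onAvatarMoved(areas: ActivityArea[], x: number, z: number,
                       openPage: (url: string) => void): void {
  const hit = areas.find(a => contains(a.bounds, x, z));
  if (hit) openPage(hit.pageUrl);
}

// Path 2: a social activity entrance bound to a prop (e.g. a basketball prop) is triggered.
function onEntranceTriggered(entranceByProp: Map<string, string>, propId: string,
                             openPage: (url: string) => void): void {
  const url = entranceByProp.get(propId);
  if (url) openPage(url);
}
```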
• on the one hand, the immersion and seamless experience of social objects in the AIO can be enhanced, meeting the complex needs of social objects for chatting and emotional expression during social interaction, while effectively protecting the object information of social objects; on the other hand, with the advantage of the 3D visuals of avatars and real-scene rooms, social interaction between social objects has more intuitive visual feedback.
• by controlling the avatar to perform different social interaction operations, the display effect of the avatar is enriched and the fun of social interaction is enhanced.
• FIG. 11 is a schematic flowchart of another social interaction method provided by an embodiment of this application. The method can be executed by a computer device, which may include a personal computer, a notebook computer, a smartphone, a tablet computer, a smart watch, a smart voice interaction device, a smart home appliance, an on-board computer device, a smart wearable device, an aircraft, or another device with a display function.
  • the method may include, but is not limited to, the following steps:
  • step S401 for the specific implementation manner of step S401, reference may be made to the introduction of the corresponding embodiment in FIG. 2 , and details are not repeated here.
  • S402 Switch the first real scene room to the second real scene room on the social service page.
• the social service page corresponds to a room display list, which contains a plurality of real-scene rooms to be displayed in order, and each real-scene room corresponds to a virtual social scene; on the social service page, the first real-scene room can be switched to the second real-scene room.
• the sorting method of the room display list can be any of the following: (1) sorting according to the matching degree between the virtual social scenes corresponding to the multiple real-scene rooms and the object tags of the target social object, for example from high to low matching degree, where the object tags can include, for example, the interaction preference content set by the target social object or content describing the characteristics of the target social object; (2) sorting according to the attention degree of the corresponding virtual social scenes, where the attention degree of a social scene may refer to the number of social objects participating in social interaction in that scene.
• the first real-scene room displayed on the social service page can be any room in the room display list, and in response to a switching operation on the first real-scene room, the first real-scene room can be switched to the second real-scene room in the room display list.
  • the second real-scene room can be a real-scene room whose sorting position is before the first real-scene room in the room display list, or a real-scene room whose sorting position is after the first real-scene room in the room display list, or a room Show the real rooms randomly obtained from the list.
• switching the first real-scene room to the second real-scene room on the social service page may include any of the following situations: 1. In response to receiving a room switching operation during the display of the first virtual social scene, the first virtual social scene on the social service page is switched to the second virtual social scene; the room switching operation may include but is not limited to sliding operations, gesture operations, control operations, floating gesture operations, voice operations, and so on. 2. When the display duration of the first virtual social scene reaches a preset duration, the first virtual social scene on the social service page is switched to the second virtual social scene; the display duration of the first real-scene room can be a fixed value set by the target social object, and when the display time reaches the set value the real-scene room can be switched automatically. 3. If no social interaction operation on the first virtual social scene is received within the preset duration, the first virtual social scene on the social service page is switched to the second virtual social scene, since the target social object may not be interested in the social interaction in the first virtual scene. The real-scene room can also be switched automatically in other situations; the specific switching behavior can be determined by the settings of the target social object, and there is no limitation on this.
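One possible shape for the room display list ordering and the switching triggers described above is sketched below in TypeScript; scoring by tag overlap, the attention metric, and the "next room" choice are all assumptions used only for illustration.

```typescript
interface Room { id: string; tags: string[]; activeObjects: number; }

// Order either by overlap with the target social object's tags, or by attention (participants).
function orderRooms(rooms: Room[], userTags: string[], by: "match" | "attention"): Room[] {
  const match = (r: Room) => r.tags.filter(t => userTags.includes(t)).length;
  return [...rooms].sort((a, b) =>
    by === "match" ? match(b) - match(a) : b.activeObjects - a.activeObjects);
}

// Move on to the next room on an explicit swipe, or automatically once the current room
// has been displayed (or left without interaction) longer than the configured duration.
function nextRoomIndex(current: number, total: number,
                       idleMs: number, maxIdleMs: number, swiped: boolean): number {
  if (swiped || idleMs >= maxIdleMs) return (current + 1) % total;
  return current;
}
```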
• when the first real-scene room is switched to the second real-scene room on the social service page, the current network environment may be detected, and the display mode of the second real-scene room may be determined based on the current network environment. Specifically, if the current network environment is the first network environment, such as a smooth network environment, the second real-scene room can be loaded directly and the first real-scene room is switched to the second real-scene room on the social service page. If the current network environment is the second network environment, such as a weak network environment, a snapshot page associated with the social service page can be displayed; the snapshot page is obtained by recording the social service page at preset time intervals during the historical display of the second real-scene room on the social service page.
• in a weak network environment, weak-network prompt information can also be displayed, so that the target social object can adjust the network environment in time; when the network environment recovers, the second real-scene room can be loaded. If there is currently no network environment, a no-network prompt message can be displayed, and a placeholder image is shown on the social service page.
• the first network environment and the second network environment are determined according to the network bandwidth or network speed: when the network bandwidth is lower than a certain bandwidth threshold or the network speed is lower than a certain speed threshold, it is the second network environment; otherwise, it is the first network environment.
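A hedged sketch of this network-environment classification and the resulting display choice follows; the 2 Mbps threshold is an arbitrary example, since the text only states that bandwidth or speed thresholds are used.

```typescript
type NetworkEnv = "smooth" | "weak" | "offline";

// Classify the current environment from a measured bandwidth (null means no connectivity).
function classifyNetwork(bandwidthMbps: number | null, weakThresholdMbps = 2): NetworkEnv {
  if (bandwidthMbps === null || bandwidthMbps <= 0) return "offline";
  return bandwidthMbps < weakThresholdMbps ? "weak" : "smooth";
}

// Decide what to show for the second real-scene room based on the environment.
function chooseRoomView(env: NetworkEnv): "live-room" | "snapshot-page" | "no-network-notice" {
  if (env === "smooth") return "live-room";
  if (env === "weak") return "snapshot-page";   // pre-recorded snapshot plus a weak-network prompt
  return "no-network-notice";                   // placeholder image plus a no-network prompt
}
```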
• the method provided in this embodiment sorts the N real-scene rooms according to their matching degree with the object tags, so as to preferentially display better-matching real-scene rooms to the target social object, which improves the display effectiveness of real-scene rooms.
• the method provided in this embodiment sorts the N real-scene rooms according to their attention degree, thereby preferentially displaying high-attention real-scene rooms to the target social object, which improves the display accuracy of real-scene rooms.
• the method provided in this embodiment switches from the first real-scene room to the second real-scene room when a room switching operation is received, which improves the switching efficiency between real-scene rooms.
• the method provided in this embodiment automatically switches from the first real-scene room to the second real-scene room if no room switching operation is received within the preset time range, which improves the switching efficiency between real-scene rooms.
  • switching the first real-scene room to the second real-scene room in the room presentation list can be implemented through the steps shown in FIG. 12 :
• S92: Determine whether the network environment is smooth. In the process of switching the first real-scene room to the second real-scene room, when loading of the second real-scene room starts, it can be judged whether the current network environment is a smooth network environment, so that different room loading logic is invoked for different network environments. When the judgment result of S92 is that the network environment is smooth, the following S93 is executed. If it is not a smooth network environment, different processes are executed according to the actual network conditions: the following S99 is executed in the case of no network, and the following S95 is executed in the case of a weak network. Whether the environment is smooth, weak, or without network can be determined from the network bandwidth or network speed, judged against different thresholds.
  • the layered loading logic can be started, and the layered loading logic can include the following two aspects:
• the target avatar can be generated at the default initial generation point, based on the default shooting angle of view of the virtual lens.
• when the second real-scene room is initially loaded, it can be displayed based on the default shooting angle of view, starting from the near area, where the near area can be an area within a fixed distance from the target avatar.
• the display range is then gradually expanded, that is, the depth and field of view of the virtual lens keep growing; loading proceeds progressively in this way until the scene content of the virtual social scene can be displayed globally.
• FIG. 13 is a schematic diagram of layered loading provided by an embodiment of this application; as shown in (2) in FIG. 13, the depth and field of view of the virtual camera 1310 expand as the loading progresses.
• while gradually expanding the display range, light-and-shadow mode layered rendering can also be applied, which preferentially loads the 3D scene assets near the target avatar, while areas outside the display range can be shown in a fog state.
• the light-and-shadow mode also performs layer-by-layer cache rendering.
• the scene content can first be placed in a white model (that is, uncoloured) and then rendered with pure textures without light and shadow, such as advertising banners and avatar decoration, which belong to pure texture rendering; the global lighting rendering effect is then gradually superimposed as the depth and field of view of the virtual lens keep expanding.
  • FIG. 14 is a schematic diagram of light and shadow mode rendering provided by the embodiment of the present application.
• (1) in FIG. 14 is a schematic diagram of gradually superimposing light-and-shadow mode rendering according to the depth of the virtual lens as the loading process progresses; (3) in FIG. 14 is a schematic diagram of globally superimposed light-and-shadow mode rendering; (4) in FIG. 14 is a schematic diagram of white-model rendering; (5) in FIG. 14 is a schematic diagram of pure texture rendering; and (6) in FIG. 14 is a schematic diagram of an illuminated scene.
• the layered rendering of the light-and-shadow mode can be performed in parallel with the expansion of the virtual lens depth.
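The layered-loading idea (near area first, then a widening virtual lens, with white-model → texture → lit passes) could be orchestrated roughly as below; renderPass and the radius steps are placeholders rather than real engine calls.

```typescript
interface Asset { id: string; distance: number; }   // distance from the default generation point

// Load assets in growing radii; each batch goes through white-model, texture, then lit passes.
async function layeredLoad(
  assets: Asset[],
  steps: number[],                                   // growing view radii, e.g. [5, 15, 40, Infinity]
  renderPass: (batch: Asset[], pass: "white-model" | "texture" | "lit") => Promise<void>
): Promise<void> {
  const loaded = new Set<string>();
  for (const radius of steps) {
    const batch = assets.filter(a => a.distance <= radius && !loaded.has(a.id));
    batch.forEach(a => loaded.add(a.id));
    await renderPass(batch, "white-model");          // uncoloured geometry first
    await renderPass(batch, "texture");              // pure textures (banners, avatar decoration)
    await renderPass(batch, "lit");                  // finally superimpose the global lighting
  }
}
```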
  • the room snapshot logic can specifically include the following three aspects:
• a snapshot video of a preset duration can be recorded at the default generation point of the avatar in the second real-scene room; the snapshot video can be stored locally so that it can be recalled and played directly when needed, and the snapshot video corresponding to the current time node can overwrite the snapshot video corresponding to the previous time node, ensuring that the snapshot content reflects the latest social interaction in the second real-scene room.
• a room display list can be generated, which contains a plurality of real-scene rooms to be displayed in order; for example, the display order of the room list can be sorted by the matching degree between the object tags of the target social object and the rooms, for example from high to low.
  • snapshot videos corresponding to multiple real-view rooms can be pre-downloaded and stored locally for invocation, for example, snapshot videos corresponding to the top three real-view rooms in the sequence can be downloaded.
  • the above-mentioned room snapshot logic can also be used for loading processing when the number of social objects participating in social interaction in the room exceeds a threshold and the social objects need to be queued to enter the room.
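The room-snapshot logic (the latest clip overwrites the previous one, and clips for the top-ranked rooms are pre-downloaded) might look like this illustrative cache; the fetch callback and the topN value are assumptions.

```typescript
interface Snapshot { roomId: string; recordedAt: number; videoUrl: string; }

class SnapshotCache {
  private latest = new Map<string, Snapshot>();

  // A newer snapshot of the same room overwrites the previous one.
  store(snap: Snapshot): void {
    const prev = this.latest.get(snap.roomId);
    if (!prev || snap.recordedAt > prev.recordedAt) this.latest.set(snap.roomId, snap);
  }

  get(roomId: string): Snapshot | undefined {
    return this.latest.get(roomId);
  }

  // Pre-fetch snapshots for the first few rooms in the ordered display list.
  async prefetch(orderedRoomIds: string[],
                 fetchSnap: (id: string) => Promise<Snapshot>, topN = 3): Promise<void> {
    for (const id of orderedRoomIds.slice(0, topN)) this.store(await fetchSnap(id));
  }
}
```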
  • S96 Play snapshot video. If the current network environment is judged to be a weak network environment, the snapshot video generated based on the above-mentioned room snapshot logic can be played, so that social objects can quickly understand the social interaction in the second real room.
  • S97 Determine whether a smooth network environment has been restored. If the smooth network environment is restored, the execution of the above S93 may be triggered to start the above layered loading logic to load and display the second real scene room. If the smooth network environment has not been restored, the following S98 may be executed.
• the avatar can be combined with a real-scene room that is close to the real world, so that social objects can use avatars to obtain more intuitive interactive feedback in the interactive virtual social scene, which enhances the immersion of social objects during social interaction.
  • the layered loading logic and room snapshot logic can be used to ensure the smoothness of switching, effectively improving the operational efficiency when switching between real-view rooms, and further improving the experience of social objects.
  • FIG. 15 is a schematic structural diagram of a social interaction device provided by an embodiment of the present application.
• the above social interaction apparatus may be a computer program (including program code) running on a computer device; for example, the social interaction apparatus is application software, and it may be used to execute the corresponding steps in the methods provided by the embodiments of this application.
  • the social interaction device 500 may include: a display module 501 and a processing module 502 .
  • a display module 501 configured to display social service pages
  • the processing module 502 is configured to display a target avatar corresponding to the target social object in the first virtual social scene in response to receiving an interactive operation on the first virtual social scene.
• the social service page corresponds to a room display list. The room display list includes N real-scene rooms to be displayed in order, where N is a positive integer; each real-scene room is used to display a virtual social scene, and the first real-scene room is any room in the room display list. The ordering of the N real-scene rooms in the room display list includes any of the following: sorting according to the matching degree between the virtual social scenes corresponding to the N real-scene rooms and the object tags of the target social object, or sorting according to the attention degree of the virtual social scenes corresponding to the N real-scene rooms.
  • the room display list also includes a second real-scene room, and the second real-scene room is used to display the second virtual social scene; the processing module 502 is specifically configured to: switch the first virtual social scene on the social service page is the second virtual social scene.
  • the processing module 502 is further configured to: during the display process of the first virtual social scene, in response to receiving a room switching operation, change the first virtual social scene in the social service page to switch to the second virtual social scene; or, when the display duration of the first virtual social scene reaches a preset duration, switch the first virtual social scene in the social service page to the second Virtual social scene.
• the processing module 502 is further configured to: if no social interaction operation on the first virtual social scene is received within the preset duration, switch the first virtual social scene on the social service page to the second virtual social scene; the room switching operation includes any one of a sliding operation, a gesture operation, a control operation, a floating gesture operation, and a voice operation.
  • the display module 501 is further configured to display the social conversation flow of the first virtual social scene on the social service page during the display process of the first virtual social scene, the social conversation flow includes social conversation messages generated by social objects in the first virtual social scene.
• the first real-scene room contains M avatars performing social interactions in the first virtual social scene, where M is a positive integer; the social conversation flow of the first virtual social scene includes one or more social conversation messages generated when the M avatars perform social interactions, and a social conversation message includes any one or more of text, emoticon, voice, video, image, and link.
• a virtual control widget is displayed in the first real-scene room; the processing module 502 is specifically configured to: in response to receiving a trigger operation on the virtual control widget in the first real-scene room, display the target avatar corresponding to the target social object in the first virtual social scene.
  • the first real scene room contains M avatars that perform social interaction in the first virtual social scene, and M is a positive integer; the processing module 502 is specifically configured to: respond to receiving the M avatars The interactive operation initiated by the reference avatar in the avatar displays the target avatar corresponding to the target social object in the first virtual social scene.
• the first virtual scene contains virtual props; the processing module 502 is specifically configured to: display the target avatar corresponding to the target social object in the first virtual social scene in response to receiving a trigger operation on a virtual prop; the trigger operation of the target social object on the virtual prop includes any one of a click trigger operation, a voice trigger operation, and a gesture trigger operation.
  • the processing module 502 is further configured to, when receiving a sending operation for a social session message in the first virtual social scene, display the message corresponding to the target social object in the first virtual social scene target avatar.
  • the processing module 502 is further configured to control the target avatar to perform social interaction in the first virtual social scene.
• the processing module 502 is further configured to, when receiving the social conversation message sent by the target social object in the first virtual social scene, display the social conversation message around the target avatar in the first real-scene room.
  • the processing module 502 is further configured to control the target avatar to execute a set of object actions corresponding to the target interaction content when the social conversation message sent by the target social object contains the target interaction content.
  • the processing module 502 is further configured to control the target avatar to perform a group of object actions corresponding to the selected action option when the target avatar selects an action option in the first virtual social scene.
  • the first virtual scene contains virtual props, and the virtual props have interactive properties; the processing module 502 is also used to control the target avatar to trigger the virtual props, so as to trigger the interactive content matching the interactive properties of the virtual props .
• the virtual props include a first type of virtual props, and the interactive attribute of the first-type virtual props is used to indicate the scene-form change information of the first virtual social scene; the processing module 502 is specifically configured to: update the scene form of the first virtual social scene according to the interactive attribute of the first-type virtual prop.
  • the virtual props include a second type of virtual props, and the interaction attributes of the second type of virtual props are used to indicate the object actions that one or more avatars in the first virtual social scene should perform; the processing module 502 specifically The method is used for: controlling one or more avatars in the first virtual social scene to perform corresponding object actions according to the interactive attributes of the second type of virtual props.
• the virtual props include a third type of virtual props, and the interactive attribute of the third-type virtual props is used to indicate the feedback operations supported by the triggered virtual prop; the processing module 502 is specifically configured to: receive, according to the interactive attribute of the third-type virtual props, the feedback operation performed on the triggered third-type virtual prop.
• the first virtual social scene contains a social activity area, or, the first virtual social scene contains a social activity entrance; the processing module 502 is specifically configured to: in response to the first virtual social scene containing a social activity area, display the activity page of the social activity when the target avatar enters the social activity area, so that the target avatar can join the social activity through the activity page; or, in response to the first virtual social scene containing a social activity entrance, display the activity page of the social activity when the target avatar triggers the social activity entrance, so that the target avatar can join the social activity through the activity page.
• the first real-scene room is a theme-customized room; the processing module 502 is specifically configured to: output options related to the customized theme; and, in response to an option being triggered, update the display form of the target avatar to a form matching the triggered option.
  • the processing module 502 is further configured to: update the scene content of the first virtual social scene when the virtual viewing angle of the target avatar changes in the first real-scene room; wherein, the virtual viewing angle changes include: virtual Any one of changes in the focal length of the viewing angle, changes in the angle of the virtual viewing angle, and changes in the position of the target avatar.
• the processing module 502 is further configured to: obtain the display attribute of the target avatar in response to receiving an interactive operation on the first virtual social scene; if the display attribute of the target avatar is a hidden attribute, display the target avatar in a transparent state in the first real-scene room; if the display attribute of the target avatar is an explicit attribute, display the target avatar in a non-transparent state in the first real-scene room.
  • the processing module 502 is further configured to: detect the network environment in response to an instruction to display the social service page; if the network environment is the first network environment, display the social service page; if the network environment is the second network environment, the snapshot page associated with the social service page will be displayed, and when the network environment changes to the first network environment, the snapshot page will be replaced with the social service page; if there is no network environment, a no-network prompt message will be displayed; among them, the snapshot page is During the historical display process of the social service page, it is obtained by recording the social service page according to a preset time interval.
  • FIG. 16 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the computer device may include: a network interface 601, a memory 602, and a processor 603.
  • the network interface 601, the memory 602, and the processor 603 are connected through one or more communication buses, and the communication buses are used to implement connection and communication between these components.
  • the network interface 601 may include a standard wired interface and a wireless interface (such as a WIFI interface).
  • the memory 602 can include a volatile memory (volatile memory), such as a random-access memory (random-access memory, RAM); the memory 602 can also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory).
  • the processor 603 may be a central processing unit (central processing unit, CPU).
  • the processor 603 may further include a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (application-specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD), and the like.
  • the above-mentioned PLD may be a field-programmable gate array (field-programmable gate array, FPGA), a general array logic (generic array logic, GAL) and the like.
  • the memory 602 is also used to store program instructions, and the processor 603 can also invoke the program instructions to implement related methods and steps in this application.
  • the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the methods provided in the foregoing embodiments are implemented.
  • the embodiment of the present application also provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the methods provided in the foregoing embodiments.
  • Units in the device in the embodiment of the present application may be combined, divided and deleted according to actual needs.
  • the program can be stored in a computer-readable storage medium.
• when the program is executed, it may include the processes of the embodiments of the above-mentioned methods.
  • the above-mentioned storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.

Abstract

A social interaction method, apparatus, device, storage medium, and program product. The social interaction method includes: displaying a social service page (S201); and, in response to an interactive operation by a target social object on a first virtual social scene, displaying a target avatar corresponding to the target social object in a first real-scene room (S202).

Description

Social interaction method, apparatus, device, storage medium, and program product
This application claims priority to Chinese Patent Application No. 202210112298.X, entitled "Social interaction method, apparatus, device, storage medium, and program product", filed on January 29, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a social interaction method, apparatus, device, storage medium, and program product.
Background
With the development of Internet technology, more and more applications are developed and run on computer devices, such as payment applications, social applications, and shopping applications. Applications carry a wide variety of services; for example, the social interaction functions of an application can be used to convey information and thereby enable online communication. Taking social applications as an example, they offer many kinds of services, such as chat and post publishing; through a social application a user can not only contact others conveniently and promptly, but also experience other interesting services.
However, the service form of social applications is dominated by jumps between flat user interfaces, and different kinds of social interaction require switching back and forth among multiple interfaces, resulting in low human-computer interaction efficiency.
Summary
Embodiments of this application provide a social interaction method, apparatus, device, storage medium, and program product, which can improve the human-computer interaction efficiency of social applications.
In one aspect, an embodiment of this application provides a social interaction method, executed by a terminal, including:
displaying a social service page, where a first real-scene room is displayed on the social service page and corresponds to a first virtual social scene;
in response to receiving an interactive operation on the first virtual social scene, displaying, in the first virtual social scene, a target avatar corresponding to a target social object, the target social object being the object controlled by the terminal.
In one aspect, an embodiment of this application provides a social interaction apparatus, including:
a display module, configured to display a social service page, where a first real-scene room is displayed on the social service page and corresponds to a first virtual social scene;
a processing module, configured to display, in response to receiving an interactive operation on the first virtual social scene, a target avatar corresponding to a target social object in the first virtual social scene, the target social object being the object controlled by the terminal that displays the social service page.
Correspondingly, an embodiment of this application provides a computer device, including a memory, a processor, and a network interface, where the processor is connected to the memory and the network interface, the network interface provides network communication functions, the memory stores program code, and the processor invokes the program code to perform the method in the embodiments of this application.
Correspondingly, an embodiment of this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method in the embodiments of this application.
Correspondingly, an embodiment of this application provides a computer program product or computer program including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method in the embodiments of this application.
By implementing the embodiments of this application, a first real-scene room presenting a first virtual social scene can be displayed on the social service page, and in response to an interactive operation by the target social object on the first virtual social scene, the target avatar corresponding to the target social object can be displayed in the first real-scene room. In this way, the target social object can enter the first real-scene room through the target avatar to participate in social interaction, gaining more intuitive visual feedback and stronger immersion, while the realism and fun of social interaction are improved.
In addition, in the embodiments of this application, real-scene rooms and their corresponding virtual social scenes are provided in the social application, so that interactive operations can be received and acted on within the virtual social scene. Different interactions are integrated into the same real-scene room, which avoids switching back and forth among multiple social interfaces and improves the human-computer interaction efficiency of the social application.
Brief Description of the Drawings
FIG. 1 is an architecture diagram of a social interaction system provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a social interaction method provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a social service page provided by an embodiment of this application;
FIG. 4 is a schematic flowchart of judging whether a social conversation message contains target interactive content provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a target social object initiating an interactive operation on an avatar provided by an embodiment of this application;
FIG. 6 is a schematic flowchart of judging the prop attribute of a virtual prop provided by an embodiment of this application;
FIG. 7 is a schematic flowchart of displaying a target avatar provided by an embodiment of this application;
FIG. 8 is a schematic diagram of display states of a target avatar provided by an embodiment of this application;
FIG. 9 is a schematic diagram of a theme-customized room provided by an embodiment of this application;
FIG. 10 is a schematic diagram of an activity page of a social activity provided by an embodiment of this application;
FIG. 11 is a schematic flowchart of another social interaction method provided by an embodiment of this application;
FIG. 12 is a schematic flowchart of switching real-scene rooms provided by an embodiment of this application;
FIG. 13 is a schematic diagram of layered loading provided by an embodiment of this application;
FIG. 14 is a schematic diagram of light-and-shadow mode rendering provided by an embodiment of this application;
FIG. 15 is a schematic structural diagram of a social interaction apparatus provided by an embodiment of this application;
FIG. 16 is a schematic structural diagram of a computer device provided by an embodiment of this application.
Detailed Description
First, terms that may be involved in the embodiments of this application are introduced.
(1) Social client: a social APP (application) that corresponds to a server and provides local services to users, which may include but is not limited to an instant messaging APP, a map-based social APP, a content interaction APP, a game social APP, or an installation-free APP (an application that can be used without downloading and installing, opened by scanning or searching, such as a mini program); a social client may also be a website with social conversation functions, such as a social networking site or a forum, that corresponds to a server and provides local services to users.
(2) AIO: short for All In One, the interactive scene/window in which a user chats with friends. In an APP with social conversation functions, for example some instant messaging applications, a user participates in many different types of conversations such as friend, group, and official-account conversations.
(3) Social service page: a service page provided based on social functions; the content presented on a social service page can be two-dimensional or three-dimensional. For example, the service form of some social applications is dominated by flat UI (User Interface) page jumps, where both information content and the externally displayed states of host and guest social objects are carried by the UI. In the embodiments of this application, the social service page includes a three-dimensional real-scene room and two-dimensional or three-dimensional UI function controls. When a UI function control is two-dimensional, such as a message input field, input operations in the message input field are received; when a UI function control is three-dimensional, such as a prop control in the three-dimensional real-scene room, control operations on an avatar in the three-dimensional real-scene room are received, so as to control the avatar to touch that prop control.
(4) Real-scene room: a room is a virtual logical space with certain functions; it can be a flat space presented on a page or a three-dimensional space presented on a page. A room can also be called a chamber, channel, community, circle, hall, and so on; for example, a chat room may also be called a chatroom, and a live-streaming room may also be called a live channel. In the embodiments of this application, a real-scene room refers to a three-dimensional simulated building that carries the corresponding service functions of the application and occupies a certain amount of three-dimensional space in a three-dimensional exploration space; it is a three-dimensional virtual logical space with certain functions. For example, a virtual logical space created in the social application by the account of at least one social object is a real-scene room. In the embodiments of this application, unless otherwise specified, "real-scene room" and "room" are equivalent.
(5) Virtual social scene: a scene for social communication based on the Internet, which may include but is not limited to a one-on-one social conversation scene, a group social conversation scene, a game competition scene, a social operation activity scene, and so on. A one-on-one social conversation scene is a scene in which two social objects hold a social conversation. A group social conversation scene is a scene in which more than two social objects jointly participate in a social conversation. A game competition scene is a scene in which more than two social objects are divided into at least two camps that compete in a game. A social operation activity scene is a scene of an operation activity initiated by the server backend, or by any one or more social objects, inviting other social objects to participate.
(6) Avatar: usually a virtual character used for Internet social interaction, whose expressions, manner, and attire can all be customized. In this embodiment, an avatar refers to the virtual character that represents a social object in social interaction within a virtual social scene. Note that although the above examples describe the avatar as a virtual human character, in some embodiments the avatar may also be a virtual animal, a virtual cartoon character, and so on, which is not limited in this embodiment.
(7) Meshbox: a container for placing models in a three-dimensional (3D) engineering scene, where models may include characters, buildings, objects, and so on.
(8) 3D scene assets: the materials that need to be loaded and rendered to build a scene in a 3D project.
The architecture of the social interaction system provided by the embodiments of this application is introduced below with reference to the drawings.
Referring to FIG. 1, FIG. 1 is a schematic architecture diagram of a social interaction system provided by an embodiment of this application. As shown in FIG. 1, the architecture includes a computer device 10 and a server 12, which communicate over a network. The computer device 10 can establish a communication connection with the server 12 in a wired or wireless manner and exchange data with the server 12.
The computer device is the device used by a social object participating in social interaction, and may include but is not limited to a smartphone, tablet computer, smart wearable device, smart voice interaction device, smart home appliance, personal computer, vehicle-mounted terminal, and so on; this application does not limit the type or number of computer devices. The server 12 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms, but is not limited thereto; the number of servers 12 is also not limited. Unless otherwise specified, in the subsequent embodiments of this application the computer device 10 refers to the device used by the target social object, and the target social object may be any social object participating in social interaction.
In a feasible implementation of this application, a social client is installed and running on the computer device 10, and social objects can interact socially with other social objects through the social clients running on their respective computer devices; for example, social object A can interact with social object B through the social client running on computer device 10, and social objects A, B, and C can all interact through the social clients on their respective devices. The social client can provide a social service page used to present one or more real-scene rooms, each of which presents a virtual social scene. The target social object can flexibly switch among real-scene rooms to browse different virtual social scenes, and can be in multiple real-scene rooms at the same time; the computer device can simultaneously display the virtual social scenes of multiple real-scene rooms, and for devices with smaller screens the virtual social scenes of multiple rooms can be displayed via foreground and background modes. Further, the target social object can also enter the virtual social scene corresponding to any real-scene room and interact socially with the other social objects in that scene.
In one implementation, the server 12 may be the backend server corresponding to the social client, used to manage the social client and provide service support, which may include but is not limited to: recommending various real-scene rooms to social clients participating in social interaction, thereby recommending the corresponding virtual social scenes; forwarding conversation messages for the participating social clients; and synchronizing the position information of avatars in the virtual social scene for each social client, and so on.
The social interaction method involved in the system shown in FIG. 1 is briefly introduced below:
(1) The computer device 10 displays a social service page in the social client. The social service page contains a first real-scene room, and the first real-scene room corresponds to a first virtual social scene; that is, the first real-scene room is used to present the first virtual social scene. The first virtual social scene includes one or more virtual props, such as desk props, bench props, high-rise-building props, or traffic-flow props. Different virtual social scenes have different virtual prop layouts, the layouts can be adjusted according to the scene, and different virtual props can be bound to different interactive functions. Here, the first real-scene room can be any real-scene room in the social client, and the first virtual social scene can be any virtual social scene recommended by the server 12 to the target social object, which may include but is not limited to a group social conversation scene, a game competition scene, a social operation activity scene, and so on. It can be understood that "real-scene" as used in the embodiments of this application describes a virtual room designed to simulate or approximate a real scene.
(2) If the target social object is interested in the first virtual social scene displayed on the social service page of the computer device 10, it can initiate an interactive operation on the first virtual social scene, for example clicking a virtual prop in the first virtual social scene, or chatting with an existing avatar in the first virtual social scene. Through this interactive operation the target social object joins the first virtual social scene, and the computer device 10 then displays the target avatar corresponding to the target social object in the first virtual social scene; afterwards, the target avatar represents the target social object in social interaction within the first virtual social scene.
(3) The real-scene room displayed on the social service page of the computer device 10 supports switching. For example, the target social object can switch the real-scene room on the social service page via a room switching operation (such as sliding up and down on the social service page); the real-scene room can also be switched automatically according to how long the currently displayed real-scene room has been shown, or switched or closed automatically when the currently displayed real-scene room has had no social interaction operation for a long time. Through this switching mechanism, the social service page can dynamically and promptly adjust the recommended content, giving the target social object a video-feed-style content recommendation experience while also guiding the target social object to participate in more virtual social scenes, thereby increasing the social interaction rate.
By implementing the embodiments of this application, a first real-scene room presenting a first virtual social scene can be displayed on the social service page, and in response to the target social object's interactive operation on the first virtual social scene, the target avatar corresponding to the target social object can be displayed in the first real-scene room. In this way, the target social object can enter the first real-scene room through the target avatar to participate in social interaction, with more intuitive visual feedback, stronger immersion, and a more realistic and engaging social experience. In addition, the first real-scene room on the social service page supports flexible switching, which allows the target social object to conveniently browse more virtual social scenes, provides a video-feed-style content recommendation experience, and guides the target social object to participate in more virtual social scenes, thereby increasing the social interaction rate.
The social interaction method proposed by the embodiments of this application is described below with reference to the drawings. Unless otherwise specified, the social interaction method mentioned in the subsequent embodiments of this application can be executed by the computer device 10 in the system shown in FIG. 1, which is the device used by the target social object and on which a social client is running.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a social interaction method provided by an embodiment of this application; the method can be executed by a computer device and may include but is not limited to the following steps:
S201: Display a social service page.
A first real-scene room is displayed on the social service page, and the first real-scene room corresponds to a first virtual social scene, which includes but is not limited to a group social conversation scene, a game competition scene, a social operation activity scene, and so on. The first virtual social scene may contain one or more avatars engaged in social interaction, each representing a social object. Whether the first virtual social scene contains avatars, and how many, depends on the needs or actual situation of the scene. For example, if the first virtual social scene is a group social conversation scene, it may contain avatars for some or all of the social objects in the group, only the avatars of the online social objects in the group, or only the avatars of the social objects in the group that have sent a social conversation message within a preset period.
Each social object can create a real-scene room, which is a virtual 3D room created by the social object; other social objects or friend objects can enter it. When multiple social objects have entered, the real-scene room can be regarded as a social group with 3D real-scene characteristics, in which social objects can use avatars to chat together, watch videos together, listen to music together, play games together, and so on.
The creator of a real-scene room can also encrypt the room, so that other social objects can enter only after entering the correct password. The creator can also set the room to be closed during a certain period, during which the real-scene room becomes a private space: after another social object performs some or all of the operations for entering the room (which may be predefined by the creator), the creator receives a room-join reminder request that includes information such as the identifier of the new social object, and the creator can choose whether to admit the user into the room. Of course, after a target social object wishing to join the closed real-scene room completes some or all of the required operations, its join operation may also be responded to automatically, admitting the target social object into the real-scene room that is currently a private space.
Both the first real-scene room and the avatars can be displayed in 3D. Referring to FIG. 3, which is a schematic diagram of a social service page provided by an embodiment of this application, a first real-scene room is displayed on the social service page 300; the first real-scene room presents a first virtual social scene, in which there are three avatars engaged in social interaction (avatar 301, avatar 302, and avatar 303), representing three social objects respectively.
In the embodiments of this application, each real-scene room can be presented in the form of an information feed, and the target social object can browse the real-scene rooms by sliding the feed up and down on the screen (the operation is not limited to this; for example, on a larger display it can also be sliding left and right). When the target social object browses to the first real-scene room by sliding, the social service page displays the 3D scene in the first real-scene room, the existing avatars, and the actions the existing avatars are performing or have performed. For example, the 3D scene (first virtual social scene) presented in the first real-scene room may include two avatars watching TV together while talking, with one of them also performing a cleaning action at the same time.
In the 3D scene of the first real-scene room, while browsing, the target social object can not only view images and videos but also play audio, so the target social object can hear sound. For example, in the TV-watching scene described above, two avatars can represent social objects watching TV; the TV can be presented as a virtual TV prop in the first real-scene room, and the target social object can hear the audio content played by the virtual TV prop. On hearing content of interest, the target social object can touch the virtual TV prop to stay and watch the video content being played. At that point, if the target social object wants to adjust the viewing angle or enlarge the video played in the virtual TV prop, it can enter the first real-scene room by touching the virtual TV prop or another virtual prop, by touching an avatar in the room, by entering voice or text in the message input field of the first real-scene room, or by adjusting the virtual control widget on the social service page to adjust the position and orientation of the target avatar corresponding to the target social object in the first real-scene room.
In other words, while the target social object is merely browsing the first real-scene room, the target avatar corresponding to the target social object is not displayed in the first real-scene room; once the target social object initiates an interactive operation with the first real-scene room or with an avatar or virtual prop in it, the target social object enters the first real-scene room, that is, the target avatar corresponding to the target social object is displayed in the first real-scene room.
Or, in some other embodiments, while browsing the 3D scene of the first real-scene room the target social object can experience only part of the interactive effects in the first real-scene room, for example viewing all or part of the images, or all or part of the videos, in the 3D scene; after the target avatar of the target social object enters the first real-scene room, all of the interactive effects in the first real-scene room can be experienced.
S202:响应于接收到针对第一虚拟社交场景的互动操作,在第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象。
目标社交对象是终端主控的对象,其中,目标社交对象是显示社交服务页面的终端主控的对象。
当目标社交对象针对第一虚拟社交场景主动触发互动操作时,表示目标社交对象对该第一虚拟社交场景感兴趣,想要加入至该第一虚拟社交场景中进行社交互动,则在第一实景房间内显示目标社交对象对应的目标虚拟形象。
在一些实施例中,若目标社交对象未对第一虚拟社交场景触发互动操作,则目标社交对象作为一个旁观者,并查看多方输入的会话消息,并可查看如图3所示的社交会话流310,同时,该目标社交对象也可以观看到在页面区域3051中显示的一个或者多个社交对象的虚拟形象。
在一种实施方式中,目标社交对象针对第一虚拟社交场景的互动操作可以是控制虚拟控制控件的操作。在此实施方式中,第一实景房间中显示有虚拟控制控件,响应于接收到对第一实景房间中的虚拟控制控件的触发操作,在第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象。例如图3所示,第一实景房间中显示有虚拟控制控件3011。该虚拟控制控件3011可以显示于第一实景房间中的任意位置,也可以显示于社交服务页面的任意位置,该虚拟控制控件3011的展示形式可以是多种,在此不做限制。通过控制该虚拟控制控件3011控制虚拟形象在房间内进行位移运动,可以理解的是,在房间内进行位移运动是指虚拟形象在页面或者在第一虚拟社交场景中的虚拟移动,可选地,控制虚拟形象移动的方向、移动的速度、移动到第一虚拟社交场景中的指定位置等。当该虚拟控制控件3011被目标社交对象触发时,计算机设备则接收到目标社交对象针对第一虚拟社交场景的互动操作,从而可以在第一实景房间内显示目标社交对象对应的目标虚拟形象。针对该虚拟控制控件的触发操作例如可以是针对该虚拟控制控件的长按操作,若虚拟控制控件上接收到的长按操作的时间越久,则目标虚拟形象进入第一实景房间后的进行位移运动的距离就越长。
当目标虚拟形象显示在第一虚拟社交场景之后,目标社交对象可以通过触发该虚拟控制控件控制虚拟形象移动到指定位置,可选地,目标虚拟形象在第一实景房间中的移动方式可以默认展示为步行或跑步。可以理解的是,本申请并不对目标虚拟形象在第一实景房间中的移动方式进行限定。可以理解的是,针对控制虚拟形象进行移动的处理,也可以通过其他操作来实现,比如一些手势操作、语音操作等等。在一个实施例中,也可以通过对摇杆控件在多方位上的滑动操作,来达到控制虚拟形象移动的目的,例如在摇杆控件上转动,控制虚拟形象的方位和朝向,通过左右上下滑动,控制对应的虚拟形象在左右上下方向上持续移动,当松开对该摇杆控件的触摸时,则停止移动。
在另一种实施方式中,目标社交对象针对第一虚拟社交场景的互动操作可以是针对第一虚拟社交场景发送社交会话消息的操作。当目标社交对象针对第一虚拟社交场景发送社交会话消息时,计算机设备则接收到目标社交对象针对第一虚拟社交场景的互动操作,从而在第一实景房间内显示目标社交对象对应的目标虚拟形象。
需要说明的是,社交服务页面中可以显示第一虚拟社交场景的社交会话流,该社交会话流可以包含多个虚拟形象进行社交互动时产生的一个或多个社交会话消息,可选地,社交会话流中包括在第一虚拟社交场景中的社交对象产生的社交会话消息。该社交会话消息可以是文本、表情、语音、视频、图像、链接等等中的任意一种或多种的组合。相应地,社交服务页面可以包含消息输入栏,该消息输入栏用于输入社交会话消息,当社交会话消息被发送时,在社交会话流中可以显示社交会话消息,可选地,发送社交会话消息的社交对象对应的虚拟形象的周围也可以显示该社交会话消息,即当接收到目标社交对象在第一虚拟社交场景中发送的社交会话消息时,在第一实景房间中的目标虚拟形象的周围显示社交会话消息,目标虚拟形象的周围例如可以是指显示社交会话消息的区域与目标虚拟形象所在区域的距离在预设的距离范围内,显示社交会话消息的区域与目标虚拟形象所在区域的距离可以是指该两个区域的中心点之间的距离,也可以是指该两个区域之间的最接近的边缘之间的距离等等。
如图3所示,社交服务页面300中包含社交会话流310及消息输入栏320,目标社交对 象可以在消息输入栏320中输入社交会话消息,当输入的社交会话消息被发送时,第一实景房间内则会显示目标社交对象对应的目标虚拟形象。另外,该社交会话消息将被显示于社交会话流310中。在一种实施方式中,目标社交对象所发送的社交会话消息还可以显示于目标社交对象对应的目标虚拟形象周侧预设位置,如图3所示,社交会话流310中显示的目标社交对象(如小李)发送的社交会话消息(社交消息为“我来了”三个字),在第一虚拟社交场景中被同步显示在小李对应的目标虚拟形象3012的周围(在目标虚拟形象3012上方显示有内容为“我来了”的文本框)。
在一个实施例中,当目标社交对象输入社交会话消息时,社交服务页面中还可以显示键盘操作面板,目标社交对象可以通过键盘操作面板编辑文本、表情、语音等社交会话消息。可选地,该键盘操作面板可以展开显示,也可以折叠显示,当键盘操作面板展开显示时,可以方便目标社交对象编辑社交会话消息,当键盘操作面板折叠显示时,可以显示的第一虚拟社交场景的区域更大,方便目标社交对象在第一虚拟社交场景中进行社交互动。如图3所示,当目标社交对象在输入社交会话消息时,社交服务页面30中还可以显示图3中的(3)所示的键盘操作面板305,目标社交对象可以通过键盘操作面板305编辑文本、表情、语音等社交会话消息。可选地,该键盘操作面板305可以展开显示,也可以折叠显示,图3中的(4)示出了键盘操作面板折叠显示的示意图。在一个实施例中,社交服务页面中还可以包含消息导航标识,当该消息导航标识被选择时,可以显示会话消息列表,该会话消息列表中展示了其他社交对象向目标社交对象发送的社交会话消息。如图3所示,社交服务页面300中还包含了消息导航标识330,当该消息导航标识330被选择时,可以进入图3中的(2)所示的会话消息列表304。
若目标社交对象发送的社交会话消息包含目标交互内容,可以控制目标虚拟形象执行目标交互内容对应的一组对象动作,该目标交互内容例如可以是一预设交互内容,也可以是指一些指定类型的交互内容。在一个实施例中,预设交互内容是指预设的指令标记,或者是预设语义库中所包含的内容,具体的,指令标记是目标社交对象设置的用于简化指令内容的符号或是命令词,例如目标社交对象可以设置指令标记“%”表示指令“原地跳跃”,目标社交对象若发送社交会话消息“%”,则可以识别指令“原地跳跃”,进而控制目标虚拟形象执行“原地跳跃”对象动作;预设语义库中包含了与语义对应的虚拟形象的动态或表情,若目标社交对象发送的社交会话消息命中了语义库,则可以控制目标虚拟形象执行相应的对象动作,例如目标社交对象若发送社交会话消息“亲亲”,“亲亲”是预设语义库中包含的内容,则可以控制目标虚拟形象做出飞吻动作。进一步地,若当前目标社交对象发送的社交会话消息与其他社交对象发送的社交会话消息命中预设语义库中相同的内容,则发送社交会话消息的多个社交对象对应的虚拟形象可以共同触发相应的对象动作,例如四个社交对象都发送“加油”,将可以控制四个社交对象对应的虚拟形象将共同执行加油动态对应的对象动作。通过这种方式,使目标虚拟形象在第一实景房间中的社交互动方式更加丰富,有利于提升社交对象参与互动的积极性。
在又一种实施方式中,第一虚拟社交场景中已包含进行社交互动的M个虚拟形象;那么,接收到针对第一虚拟社交场景的互动操作可以是针对第一虚拟社交场景中的M个虚拟形象中的参考虚拟形象发起的互动操作,该参考虚拟形象可以是M个虚拟形象中的任一个虚拟形象,也可以是某个特殊的虚拟形象,例如该参考虚拟形象可以是创建该第一实景房间的社交对象所对应的虚拟形象,在产生了对参考虚拟形象发起的互动操作之后,响应于目标社交对象对M个虚拟形象中的参考虚拟形象发起的互动操作,在第一实景房间内显示目标社交对象对应的目标虚拟形象。当目标社交对象针对第一虚拟社交场景中任一个虚拟形象发起互动操作时,计算机设备接收到目标社交对象针对第一虚拟社交场景的互动操作,从而可以在第一实景房间内显示目标社交对象对应的目标虚拟形象,目标虚拟形象可以是目标社交对象自定义的虚拟形象。可选地,当目标社交对象点击第一虚拟社交场景中的M个虚拟形象中的任一个虚拟形象时,可以触发显示互动面板,该互动面板中可以包含多个互动选项,例如选择动作或是 查看资料,当目标社交对象选择其中任意一个互动选项时,便可以在第一实景房间内显示目标社交对象对应的目标虚拟形象。请参见图4,是本申请实施例提供的一种目标社交对象对虚拟形象发起互动操作的示意图,如图4所示,在社交服务页面中显示了虚拟形象306对应的互动面板307,在该互动面板307中包含查看资料、打招呼以及送礼三个互动选项,当目标社交对象选择互动选项“打招呼”时,虚拟形象306执行了“打招呼”对应的对象动作,由于目标社交对象对虚拟形象306发起了互动操作,在第一实景房间内显示了目标社交对象对应的虚拟形象308。
在又一种实施方式中,假设第一虚拟社交场景中包含虚拟道具,那么,目标社交对象针对第一虚拟社交场景的互动操作可以是针对虚拟道具的触发操作,即当目标社交对象对虚拟道具执行触发操作时,计算机设备接收到目标社交对象针对第一虚拟社交场景的互动操作,从而在第一实景房间内显示目标社交对象对应的目标虚拟形象;其中,目标社交对象对虚拟道具的触发操作包括:点击触发操作、语音触发操作、手势触发操作中的任一种。例如:第一虚拟社交场景中包含电视机这个虚拟道具,目标社交对象点击该电视机,就发起针对第一虚拟社交场景的互动操作。
在一个实施例中,第一虚拟社交场景中的虚拟道具可按照互动属性进行分类,此处,虚拟道具是指在实景房间中具有特定交互能力的虚拟物品,每种虚拟道具都具备各自的互动属性。具体实现中,第一虚拟社交场景中的虚拟道具可以是第一类虚拟道具,第一类虚拟道具的互动属性可以用于指示第一虚拟社交场景的场景形态变化信息,若第一类虚拟道具被触发,则第一虚拟社交场景的场景形态将按照第一类虚拟道具的互动属性更新。场景形态变化信息包括但不限于场景中视频播放比例的变化、音频音量的变化、灯光显示的变化等等,例如第一类虚拟道具可以是第一虚拟社交场景中的虚拟电视机道具,若虚拟电视机道具被触发,则可以在第一虚拟社交场景中触发视频全屏播放,又如第一类虚拟道具也可以是第一虚拟社交场景中的虚拟音响道具,若虚拟音响道具被触发,则可以在第一虚拟社交场景中调整音频的音量,再如第一类虚拟道具还可以是第一虚拟社交场景中虚拟灯具道具,若虚拟灯具道具被触发,则可以在第一虚拟社交场景中开启或关闭光源。此外,第一虚拟社交场景中的虚拟道具也可以是第二类虚拟道具,第二类虚拟道具的互动属性可以用于指示第一虚拟社交场景中的一个或多个虚拟形象应当执行的对象动作,若第二类虚拟道具被触发,则可以按照第二类虚拟道具的互动属性,控制第一虚拟社交场景中的一个或多个虚拟形象执行相应的对象动作。例如,第二类虚拟道具可以是第一虚拟社交场景中的虚拟跷跷板道具,该虚拟跷跷板道具需要两个虚拟形象共同触发,在第一虚拟社交场景中指定任意两个虚拟形象触发该虚拟跷跷板道具后,便可以控制被指定的两个虚拟形象执行玩耍虚拟跷跷板的对象动作。可选地,第一虚拟社交场景中的虚拟道具还可以是第三类虚拟道具,第三类虚拟道具的互动属性用于指示针对被触发的虚拟道具支持的反馈操作,该第三类虚拟道具可以是指未绑定交互功能或是特定对象动作的道具,触发该类虚拟道具,可以查看该类虚拟道具的介绍详情,也可以收藏或购买该类虚拟道具,或是对该类虚拟道具进行点赞和评论,相应地,该类道具的拥有者可以接收到对应操作的反馈消息,该类道具的拥有者例如可以是第一实景房间的创建者。
在一个实施例中,响应于接收到针对第一虚拟社交场景的互动操作,还可以先获取目标虚拟形象的显示属性,目标虚拟形象的显示属性可以是隐藏属性,也可以是外显属性,若目标虚拟形象的显示属性为隐藏属性,则可以在第一实景房间中将目标虚拟形象显示为透明状态,若目标虚拟形象的显示属性为外显属性,则可以在第一实景房间内将目标虚拟形象显示为非透明状态。目标虚拟形象的显示属性可以由目标社交对象进行设置,针对显示属性的设置参数可以配置有显示时长,若目标虚拟形象当前被设置的显示属性对应的显示时长达到预设阈值时,可以切换为另一种显示属性,例如,若当前设置的目标虚拟形象的显示属性为隐藏属性,对应的显示时长为1分钟,则在1分钟后可以将目标虚拟形象的显示属性由隐藏属性切换为外显属性,即在第一实景房间内将目标虚拟形象由透明显示状态切换为非透明显示状态。在一种可能实现的方式中,目标虚拟形象的显示属性可以与虚拟道具绑定,例如,将 隐藏属性与虚拟隐藏道具绑定,若目标虚拟形象携带虚拟隐藏道具,即对应显示属性为隐藏属性,则在第一实景房间内将目标虚拟形象显示为透明状态,若目标虚拟形象未携带虚拟隐藏道具,则在第一实景房间内默认将目标虚拟形象显示为外显状态。
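作为显示属性切换的一个示意,下面给出一段假设性的 TypeScript 代码草图(其中的接口与时长参数均为示例假设,并非本申请的实际实现):

```typescript
// 示意:目标虚拟形象显示属性(隐藏/外显)的应用与到期切换(假设性实现)
type DisplayAttr = "hidden" | "visible";

interface AvatarView {
  setTransparent(transparent: boolean): void;   // 透明显示或非透明显示
}

function applyDisplayAttr(view: AvatarView, attr: DisplayAttr, durationMs?: number): void {
  view.setTransparent(attr === "hidden");
  // 若为当前显示属性配置了显示时长,则到期后切换为另一种显示属性
  if (durationMs !== undefined) {
    setTimeout(() => {
      const next: DisplayAttr = attr === "hidden" ? "visible" : "hidden";
      view.setTransparent(next === "hidden");
    }, durationMs);
  }
}

// 用法示例:携带虚拟隐藏道具时按隐藏属性透明显示 1 分钟,到期后切换为外显
// applyDisplayAttr(view, hasHiddenProp ? "hidden" : "visible", hasHiddenProp ? 60_000 : undefined);
```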
通过实施本申请实施例,可以在社交服务页面中显示第一实景房间,该第一实景房间中展示有第一虚拟社交场景,在响应于目标社交对象针对该第一虚拟社交场景的互动操作时,可以将目标社交对象对应的目标虚拟形象显示于第一实景房间内。通过这种方式,目标社交对象可以通过目标虚拟形象进入第一实景房间中参与社交互动,使目标社交对象具有更加直观的视觉反馈,增强了目标社交对象的沉浸感,同时提升了社交互动的真实感和趣味性。
本实施例提供的方法,通过在社交服务页面中显示社交会话流,对第一虚拟社交场景中的社交对象产生的社交会话消息进行展示,无需触发其他查看操作对社交会话流进行查看,提高了人机交互效率。
本实施例提供的方法,通过虚拟控制控件触发目标虚拟形象进入第一虚拟社交场景,在目标虚拟形象与第一实景房间发生互动时直接控制目标虚拟形象进入第一虚拟社交场景,提高了社交互动效率。
本实施例提供的方法,当目标社交对象发起与第一实景房间中的社交对象之间的互动操作,即控制目标虚拟形象进入第一虚拟社交场景,避免需要通过额外的操作控制目标虚拟形象进入第一虚拟社交场景,提高了社交互动效率。
本实施例提供的方法,当接收到针对第一虚拟社交场景中的虚拟道具的触发操作,控制目标虚拟形象进入第一虚拟社交场景,避免需要通过额外的操作控制目标虚拟形象进入第一虚拟社交场景,提高了社交互动效率。
本实施例提供的方法,当目标社交对象在第一虚拟社交场景中发送社交会话消息,则控制目标虚拟形象进入第一虚拟社交场景,一方面提高了目标社交对象与第一实景房间之间的互动多样性,另一方面提高了社交互动的人机交互效率。
本实施例提供的方法,在目标虚拟形象通过发送社交会话消息的方式进入第一虚拟社交场景时,在目标虚拟形象的周侧显示社交会话信息,使第一虚拟社交场景中的其他虚拟形象能够快速获知目标社交对象所发送的社交消息,提高了信息传达的有效性和效率。
在一个实施例中,当目标社交对象针对第一虚拟社交场景发送社交会话消息时,判断社交会话消息是否包含目标交互内容(该目标交互内容可以是一预设交互内容)可以通过图5中所示的步骤实现:
S31:输入社交会话消息。
目标社交对象可以在社交服务页面输入社交会话消息,输入的社交会话消息可以是文本消息,也可以是语音消息,其中语音消息可以是实时语音,也可以是录音。
S32:社交会话消息转换为字符语义。
将目标社交对象输入的社交会话消息转换为字符语义,这里,也可以同时转换社交会话流中出现的其他社交会话消息。当用户输入实时语音、录音或文字消息后,对于这些社交会话消息将统一转换为字符语义。后续通过对字符语义的判断,来实现对社交会话消息的判断,比如通过判断字符语义中是否包含指令标记,来判断社交会话消息是否包含指令标记,通过判断字符语义是否命中语义库,来判断社交会话消息是否命中语义库,通过判断字符语义是否为多对象会话,来判断社交会话消息是否为多对象会话。当然,也可以不用转换得到对应的字符语义,直接进行社交会话消息的判断,例如使用语音识别来判断语音类型的社交会话消息中是否存在指令标记、是否命中语义库,是否为多对象会话等等。
S33:判断社交会话消息是否包含指令标记。
指令标记是目标社交对象设置的用于简化指令内容的符号或是命令词,目标社交对象输入的社交会话消息内容中可以包含预设的指令标记。当S33的判断结果为是时,执行下述的S34,当S33的判断结果为否时,执行下述的S37。
S34:判断道具库中是否绑定对应操作。
道具库中可以包含指令标记与对象动作之间的映射关系,若目标社交对象输入的社交会话消息中包含指令标记,则可以进一步判断道具库中是否绑定了与该指令标记对应的对象操作,并在S34的判断结果为是时,执行下述的S35,在S34的判断结果为否时执行下述的S36。
S35:虚拟形象执行对应的对象动作。
若社交会话消息中包含的指令标记在道具库中绑定了对应的对象动作,则目标虚拟形象可以执行对应的对象动作。
S36:返回未识别行为,提示重新输入。
若社交会话消息中包含的指令标记在道具库中没有查找到对应的对象动作,则可以返回提示信息,用于提示目标社交对象未识别到该指令标记对应的对象动作,以引导目标社交对象重新输入。
S37:判断社交会话消息是否命中语义库。
若转换得到的字符语义中无法识别到指令标记,则进行社交会话消息处理,继而判断其字符语义是否命中社交行为语义库,即语义库中有与语义对应的虚拟人物动态或表情。语义库中可以包含语义对应的对象动作,对象动作例如可以是虚拟形象动态或表情,若社交会话消息中未包含指令标记,则可以判断社交会话消息是否命中语义库。可选的,判断社交会话消息是否命中语义库的步骤,与判断社交会话消息是否包含指令标记的步骤可以交换顺序执行,即也可以先判断社交会话消息是否命中语义库,再判断社交会话消息是否包含指令标记。在S37的判断结果为是时,执行下述的S38,在S37的判断结果为否时,执行下述的S42。
S38:判断社交会话消息是否为多对象会话消息。
若社交会话消息命中语义库,则可以判断该会话消息是否为多对象会话消息。多对象会话是指多个社交对象交流产生的相关联的会话消息,例如多个社交对象输入相同内容的社交会话消息可以判断为多对象会话,多个社交对象针对同一话题内容的社交会话消息进行回复产生的会话消息也可以判断为多对象会话。在S38的判断结果为是时,执行下述的S39,在S38的判断结果为否时,执行下述的S41。
S39:判断社交会话消息是否达到多对象动作的数量阈值。
若判断社交会话消息为多对象会话消息,则可以继续判断发送多对象会话消息的社交对象的数量是否已达到阈值。在S39的判断结果为是时,执行下述的S40,在S39的判断结果为否时,则按照正常的对象动作进行处理,例如在S39判断为否时,执行S41,或者是不执行对象动作。
S40:参与会话的虚拟形象可共同触发集体性的对象动作。
若参与多对象会话的社交对象达到数量阈值,则参与多对象会话的社交对象对应的虚拟形象可以共同触发集体性的对象动作。例如,超过四个社交对象发送“哈哈”,则参与多对象会话的四个社交对象对应的虚拟形象可以共同触发语义库中“哈哈”对应的对象动作。
S41:虚拟形象执行对应语义库中的对象动作。
若社交会话消息命中语义库,且社交会话消息非多对象会话消息,则可以直接控制目标虚拟形象执行语义中对应的对象动作。
S42:作为普通社交会话消息沉淀社交会话流。
若社交会话消息未命中语义库,则将社交会话消息显示于社交会话流中,目标虚拟形象将不再执行对象动作。
通过这种方式,可以将社交会话消息的内容通过目标虚拟形象的对象动作进行展示,既丰富了社交会话消息的呈现结果,又丰富了虚拟形象的显示效果。
本实施例提供的方法,在目标社交对象发送的会话消息中命中语义库中的关键词时,控制目标虚拟形象执行对应的对象动作,从而更直观的表达了目标社交对象所发送的会话消息的语义,提高了信息传递效率和有效性。
本实施例提供的方法,在多个社交对象发送的会话消息中命中语义库中的关键词时,控制多个目标虚拟形象触发集体性对象动作,提高了社交多样性。
在一个实施例中,当第一虚拟社交场景中的虚拟道具被触发时,虚拟道具的道具属性及道具属性对应触发的互动内容判断可以通过图6所示的步骤实现:
S51:触发虚拟道具。
可选地,目标社交对象通过点击虚拟道具的方式触发虚拟道具。
S52:判断道具属性。
由于虚拟道具可以具有不同的道具属性,当虚拟道具被触发时,计算机设备需要判断虚拟道具的属性,从而触发虚拟道具对应的操作入口和链路。
S53:若被触发的虚拟道具属于第一类虚拟道具,触发单一虚拟道具对应行为链路。
可选地,第一类虚拟道具是场景互动道具,第一类虚拟道具触发后,可以改变虚拟社交场景的场景形态变化信息,第一类虚拟道具可以绑定特定交互行为,点击第一类虚拟道具,可以触发其对应的行为链路,以改变虚拟社交场景的场景形态变化信息。
S54:若被触发的虚拟道具属于第二类虚拟道具,判断是否为多对象互动道具。
可选地,第二类虚拟道具是指需要虚拟形象介入才能触发的道具,可以分为单对象互动道具和多对象互动道具。通过触发第二类虚拟道具,可以控制虚拟社交场景中的一个或多个虚拟形象执行对应的对象动作。在S54判断结果为属于多对象互动道具时执行S55,在S54的判断结果为不属于多对象互动道具时,则为单对象互动道具,执行下述的S56。
S55:指定多个虚拟形象触发道具。
若虚拟道具为第二类虚拟道具中的多对象互动道具,则可以指定多个虚拟形象进行触发。
S56:指定单一虚拟形象触发道具。
若虚拟道具为第二类虚拟道具中的单对象互动道具,则可以指定虚拟社交场景中的任意一个虚拟形象触发道具。
S57:若为第三类虚拟道具,显示第三类虚拟道具支持的反馈操作选项。
第三类虚拟道具可以是指未绑定特定交互行为的道具,触发第三类虚拟道具,显示第三类虚拟道具支持的反馈操作选项,反馈操作选项例如可以是查看简介(或详情)、点赞、评论等操作选项。第三类虚拟道具拥有者可收到对应操作消息反馈,该第三类虚拟道具的物体模型及渲染效果可以改变也可以不受任何影响。
S58:触发第三类虚拟道具支持的反馈操作,道具拥有者接收对应操作反馈消息。
例如对第三类虚拟道具进行点赞或评论,道具拥有者将收到对应操作的反馈消息。
通过这种方式,将原本为页面控件的操作入口做成了虚拟道具的形态,进一步增强了虚拟社交场景的场景可互动性,增加了目标社交对象的体验真实感。
本实施例提供的方法,针对虚拟道具属性的不同,在虚拟道具被触发时,表现与互动属性对应的社交互动,提高了社交互动的多样性。
本实施例提供的方法,针对第一类虚拟道具,在第一类虚拟道具被触发时,表现该第一虚拟社交场景的场景形态变化,增加了目标虚拟角色与虚拟社交场景之间的互动性。
本实施例提供的方法,针对第二类虚拟道具,在第二类虚拟道具被触发时,表现一个或者多个虚拟形象执行对象动作的过程,提高了虚拟对象与虚拟社交场景之间的互动多样性。
本实施例提供的方法,针对第三类虚拟道具,在第三类虚拟道具被触发时,表现目标虚拟形象对被触发的第三类虚拟道具的反馈动作,提高了虚拟社交场景中虚拟形象与虚拟道具之间的互动多样性。
在一个实施例中,在第一实景房间内显示目标社交对象对应的目标虚拟形象可以通过图7所示的步骤实现:
S71:启动第一实景房间加载。
S72:默认初始生成点生成目标虚拟形象对应的模型容器。
计算机设备可以在第一实景房间中的默认初始生成点生成目标虚拟形象对应的模型容器,例如可以是将Meshbox作为虚拟形象的容器,Meshbox是3D工程场景中放置模型的容器,此时目标虚拟形象可以不渲染外显,仅保留容器位置,即目标虚拟形象透明显示。当有多个虚拟形象进入第一实景房间时,均可以在同一默认初始生成点生成虚拟形象对应的模型容器。
S73:检测判断目标社交对象是否有以下任意一种触发操作:
(1)是否触发虚拟控制控件。第一实景房间中可以显示有虚拟控制控件,该虚拟控制控件可以用于控制虚拟形象在房间内进行位移运动,若计算机设备检测到目标社交对象触发该虚拟控制控件,则可以视为目标社交对象主动触发目标虚拟形象在第一实景房间中的显示。
(2)是否对任一个虚拟形象发起互动操作。目标社交对象对任一个虚拟形象发起互动操作的方式可以是点击对应的虚拟形象,也可以是对任一个虚拟形象触发对象动作,例如选择虚拟形象执行打招呼、送礼物操作,若计算机设备检测到目标社交对象对任一个虚拟形象发起互动操作,则可以视为目标社交对象主动触发目标虚拟形象在第一实景房间中的显示。
(3)是否发送社交会话消息。社交服务页面中可以包含消息输入栏,目标社交对象在消息输入栏编辑社交会话消息后发送,若计算机设备检测到目标社交对象发送社交会话消息,则可以视为目标社交对象主动触发目标虚拟形象在第一实景房间中的显示。
(4)是否触发虚拟道具。第一虚拟社交场景中可以包含多种虚拟道具,目标社交对象可以通过点击虚拟道具的方式触发虚拟道具,若计算机设备检测到目标社交对象触发虚拟道具,则可以视为目标社交对象主动触发目标虚拟形象在第一实景房间中的显示。
若触发了任意一种操作,则执行下述的S74,否则,执行下述的S77。
S74:判断社交对象是否使用虚拟隐藏道具。
若检测到目标社交对象有上述任意一种触发操作,则进一步判断社交对象是否使用虚拟隐藏道具,虚拟隐藏道具可以使目标社交对象对应的目标虚拟形象持续透明显示,以满足目标社交对象不想暴露其虚拟形象的需求。在S74的判断结果为是时,执行下述的S75,否则,执行下述的S76。
S75:持续透明显示直至虚拟隐藏道具限时结束。
虚拟隐藏道具可以设置有对应的时间参数,当达到预设时间时,则可以视为虚拟隐藏道具失效。
S76:目标虚拟形象对应的模型容器内将目标虚拟形象启动渲染外显。
若检测到目标社交对象有上述任意一种触发操作,且未使用虚拟隐藏道具或者虚拟隐藏道具失效,则可以在目标虚拟形象对应的模型容器内将目标虚拟形象启动渲染外显,即将目标虚拟形象显示为非透明状态。
S77:持续透明显示直至触发非透明显示条件。
若未检测到目标社交对象有上述任意一种触发操作,则将目标虚拟形象持续透明显示,如此,可以保证目标社交对象的对象信息得到有效保护。
请参见图8,是本申请实施例提供的一种目标虚拟形象透明显示和非透明显示的示意图,其中图8中的(1)为目标虚拟形象透明显示的示意图,图8中的(2)为目标虚拟形象非透明显示的示意图。
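上述由透明显示到渲染外显的流程,可以用如下示意性的 TypeScript 代码草图理解(其中的容器接口与类名均为本说明所作的假设,并非本申请的实际实现):

```typescript
// 示意:目标虚拟形象在实景房间中从透明显示到渲染外显的流程(假设性实现)
type TriggerKind = "control" | "avatarInteraction" | "sendMessage" | "propTrigger";

interface ModelContainer {            // 类似 3D 工程场景中放置模型的容器(如 Meshbox)
  renderAvatar(visible: boolean): void;
}

class AvatarPresence {
  private container: ModelContainer;

  constructor(spawn: ModelContainer) {
    // 在默认初始生成点生成模型容器,先不渲染外显(透明显示)
    this.container = spawn;
    this.container.renderAvatar(false);
  }

  // 检测到任意一种主动触发操作(虚拟控制控件、对虚拟形象互动、发送消息、触发道具)时调用
  onTrigger(kind: TriggerKind, hiddenPropRemainingMs: number): void {
    if (hiddenPropRemainingMs > 0) {
      // 使用虚拟隐藏道具:持续透明显示,直至隐藏道具限时结束后再渲染外显
      setTimeout(() => this.container.renderAvatar(true), hiddenPropRemainingMs);
    } else {
      // 未使用隐藏道具或隐藏道具已失效:在容器内启动渲染外显
      this.container.renderAvatar(true);
    }
  }
}
```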
在一个实施例中,目标虚拟形象在第一实景房间中可以是基于默认虚拟视角显示的,当目标虚拟形象在第一实景房间中的虚拟视角发生变化时,可以根据虚拟视角的变化更新第一实景房间中的虚拟社交场景的场景内容。其中,虚拟视角发生变化包括以下任意一种情况:
(1)虚拟视角的焦距发生变化;此处,虚拟视角的焦距发生变化可以是由于目标虚拟形象对应的目标社交对象执行焦距调整操作所产生的变化,例如目标社交对象可以在显示第一虚拟社交场景的社交服务页面中执行手势缩放操作,调整虚拟视角的焦距,跟随缩放操作,页面内容会被放大显示或是缩小显示,针对虚拟视角的焦距调整操作可以是手势缩放操作,也可以是针对焦距调整控件的触发操作,在此不做限制。
(2)虚拟视角的角度发生变化;所谓虚拟视角的角度发生变化可以是控制虚拟形象执行角度范围内的旋转动作后产生的,例如社交服务页面可以包含角度调整控件,通过触发角度调整控件,可360度控制虚拟形象执行旋转动作,相应地,虚拟场景中的场景内容也可跟随虚拟形象的旋转动作变化更新显示。
(3)目标虚拟形象的位置发生变化;当虚拟形象在第一虚拟场景中进行位移运动时,也可以跟随虚拟形象的移动位置更新显示第一实景房间中的虚拟社交场景的场景内容,例如目标虚拟形象移动至活动区域时,将可更新显示活动区域中对应的虚拟社交场景的场景内容。
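针对上述三种虚拟视角变化情况,下面给出一段示意性的 TypeScript 代码草图(其中的视角状态结构与渲染接口均为示例假设,并非本申请的实际实现),演示在任一种变化发生时更新场景内容:

```typescript
// 示意:跟随虚拟视角变化更新虚拟社交场景的场景内容(假设性实现)
interface ViewState {
  focalLength: number;                          // 虚拟视角的焦距
  angle: number;                                // 虚拟视角的角度
  position: { x: number; y: number; z: number };// 目标虚拟形象的位置
}

interface SceneRenderer { renderVisibleContent(view: ViewState): void; }

function onViewChanged(prev: ViewState, next: ViewState, renderer: SceneRenderer): void {
  const focalChanged = prev.focalLength !== next.focalLength;  // 手势缩放等焦距调整
  const angleChanged = prev.angle !== next.angle;              // 角度调整控件触发的旋转
  const moved = prev.position.x !== next.position.x
             || prev.position.y !== next.position.y
             || prev.position.z !== next.position.z;           // 虚拟形象位移运动
  if (focalChanged || angleChanged || moved) {
    renderer.renderVisibleContent(next);                       // 任一变化均触发场景内容更新
  }
}
```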
在一个实施例中,第一实景房间可以是主题定制化房间,主题例如可以是音乐主题、游戏主题、运动主题等等,当在第一实景房间内显示目标社交对象对应的目标虚拟形象时,可以输出与定制化主题相关的选项,例如针对音乐主题可以输出选择播放歌曲的选项,针对游戏主题可以输出选择游戏角色的选项,针对运动主题可以输出选择运动战队的选项,输出的与定制化主题相关的选项可以显示于社交服务页面中。不同的主题可以有不同的背景(包括颜色、装饰等)、虚拟道具布局等不同之处。进一步地,响应对定制化主题相关的选项的选择操作,还可以将目标虚拟形象的显示形态更新为与选中的选项相匹配的形态。其中,目标虚拟形象的显示形态可以是指目标虚拟形象在第一虚拟社交场景中的装扮,在定制化主题相关的选项被触发之后,可以将目标虚拟形象的装扮更新为与主题相匹配的装扮。例如,假设第一实景房间为运动主题房间,在第一实景房间内显示目标虚拟形象后,可以输出选择运动战队的选项,当目标社交对象选择运动战队之后,可以为目标虚拟形象替换上被选中的运动战队对应的队服,通过这种方式,丰富了目标虚拟形象的显示形态,同时提升了目标社交对象的体验感。
请参见图9,是本申请实施例提供的一种主题定制化房间的示意图,如图9中(1)所示,该第一实景房间可以是运动主题房间,当在第一实景房间内显示目标虚拟形象309时,可以输出与选择运动战队的选项3110,当目标社交对象选择运动战队之后,目标虚拟形象可以替换运动战队对应的队服,如图9中的(2)所示,目标虚拟形象313替换上了被选中的运动战队对应的队服,此外,在该第一实景房间内还包含虚拟电视机道具311和虚拟沙发道具312,在一种可能的实现方式中,目标社交对象触发虚拟电视机道具311,可以放大查看虚拟电视机道具311中的屏幕内容,并进入第一实景房间内的观影区域,与其他虚拟形象一起观看和交流,目标社交对象触发虚拟沙发道具,可以控制目标虚拟形象移动至虚拟沙发道具处休息。
在一个实施例中,在第一实景房间内显示目标社交对象对应的目标虚拟形象之后,可以控制目标虚拟形象进行位移运动,第一虚拟社交场景中可以包含社交活动区域,当目标虚拟形象移动进入至社交活动区域中时,可以显示社交活动的活动页面,使目标虚拟形象通过活动页面加入到社交活动中。在社交活动的活动页面中可以包含社交活动对应的活动操作控件,通过触发活动操作控件,可以控制目标虚拟形象在活动页面中进行社交活动。例如,社交活动区域可以是投篮区域,当目标虚拟形象进入至投篮区域时,可以显示投篮区域的活动页面,投篮区域的活动页面中可以包含控制目标虚拟形象执行投篮动作的活动操作控件,通过触发该活动操作控件,可以控制目标虚拟形象投篮。进一步地,该社交活动区域中还可以显示有其他参与社交活动的社交对象对应的虚拟形象,目标社交对象可以与其他社交对象进行活动比赛,通过这种方式可以进一步增强社交对象间互动的趣味性。社交活动区域例如可以是图10(1)中的观影区域401,当在观影区域401所在区域接收到社交对象的交互操作,例如点击、手势等操作,则进入社交活动的活动页面315。
在一个实施例中,第一虚拟场景中还可以包含社交活动的入口,该入口可以是社交活动页面对应的活动链接,当该社交活动的入口被触发时,也可以显示社交活动的活动页面。可选的,社交活动的入口可以显示在第一虚拟场景的社交会话流中,也可以与特定的虚拟道具绑定,作为举例,假设第一虚拟场景中包含虚拟篮球道具,则投篮活动对应的入口即可与虚拟篮球道具绑定,通过触发虚拟篮球道具,即可以显示投篮活动的对应活动页面。
请参见图10,是本申请实施例提供的一种社交活动的活动页的示意图,如图10中的(1)所示,在社交服务页面400中,包含观影区域401,观影区域401中播放的内容可以是通过 触发图9中的(1)所示虚拟电视机道具311后,放大显示的虚拟电视机道具311中的屏幕内容,社交活动的入口314可以是其他社交对象在社交会话流中分享的活动链接,通过触发社交活动的入口314可以显示如图10中的(2)所示的社交活动的活动页面315,社交活动的活动页面315中包含控制虚拟形象进行位移运动的虚拟控制控件316,和控制虚拟形象执行投篮动作的活动操作控件317。其中,该社交活动的活动页面315也可以是目标虚拟形象移动至第一实景房间中的社交活动区域后显示的。
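上述通过社交活动区域或社交活动入口进入活动页面的两种方式,可以用如下示意性的 TypeScript 代码草图理解(其中的区域与页面接口均为本说明所作的假设,并非本申请的实际实现):

```typescript
// 示意:通过社交活动区域或活动入口进入社交活动页面(假设性实现)
interface Point { x: number; y: number; }
interface ActivityRegion { contains(p: Point): boolean; activityUrl: string; }

function onAvatarMoved(avatarPos: Point, regions: ActivityRegion[],
                       showActivityPage: (url: string) => void): void {
  // 目标虚拟形象移动进入社交活动区域时,显示对应的活动页面
  const region = regions.find(r => r.contains(avatarPos));
  if (region) showActivityPage(region.activityUrl);
}

function onActivityEntryTriggered(activityUrl: string,
                                  showActivityPage: (url: string) => void): void {
  // 活动入口(如社交会话流中分享的活动链接、绑定了入口的虚拟道具)被触发时,同样显示活动页面
  showActivityPage(activityUrl);
}
```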
通过实施本申请实施例,一方面,可以增强社交对象在AIO中的沉浸感和无缝体验,满足社交对象在进行社交互动时聊天和情感表达的复合需求,同时也有效保护了社交对象的对象信息;另一方面,借助虚拟形象和实景房间3D视觉的优势,使社交对象之间的社交互动具有更直观的视觉反馈,通过控制虚拟形象执行不同的社交互动操作的同时,既丰富了虚拟形象的显示效果,又提升了社交互动的趣味性。
请参见图11,图11是本申请实施例提供的另一种社交互动方法的流程示意图,该方法可以由计算机设备执行,计算机设备可以包括个人计算机、笔记本电脑、智能手机、平板电脑、智能手表、智能语音交互设备、智能家电、车载计算机设备、智能可穿戴设备和飞行器等具备显示功能的设备。该方法可以包括但不限于如下步骤:
S401:显示社交服务页面。
在本申请实施例中,步骤S401的具体实现方式可以参见图2对应实施例的介绍,在此不做赘述。
S402:在社交服务页面中将第一实景房间切换为第二实景房间。
在本申请实施例中,社交服务页面对应有房间展示列表,该房间展示列表中包括多个有序待显示的实景房间,每个实景房间分别对应一个虚拟社交场景,在社交服务页面中可以将第一实景房间切换为第二实景房间。
在一个实施例中,房间展示列表中的排序方式可以是以下任意一种:(1)按照多个实景房间分别对应的虚拟社交场景与目标社交对象的对象标签之间的匹配度进行排序,例如匹配度由高至低的顺序进行排序,这里,对象标签例如可以包括目标社交对象设置的互动偏好内容,也可以包括描述目标社交对象的特征的内容;(2)根据多个实景房间分别对应的社交场景的关注热度进行排序,其中,社交场景的关注热度可以是指社交场景中参与社交互动的社交对象的数量。
在一个实施例中,社交服务页面中显示的第一实景房间可以是房间列表中的任意一个,响应针对第一实景房间的切换操作,可以将该第一实景房间切换为房间展示列表中的第二实景房间,该第二实景房间可以是房间展示列表中排序位置位于第一实景房间之前的实景房间,也可以是房间展示列表中排序位置位于第一实景房间之后的实景房间,还可以是房间展示列表中随机获取的实景房间。具体的,在社交服务页面中将第一实景房间切换为第二实景房间可以包括以下任意一种情况:1、在第一虚拟社交场景的显示过程中,响应于接收到房间切换操作,则将社交服务页面中的第一虚拟社交场景切换为第二虚拟社交场景,房间切换操作可以包括但不限于滑动操作、手势操作、控件操作、悬浮手势操作、语音操作等等;2、当第一虚拟社交场景的显示时长达到预设时长时,将社交服务页面中的第一虚拟社交场景切换为第二虚拟社交场景,第一实景房间的显示时长可以是目标社交对象设定的一个固定值,当显示时长达到设定值时,可以自动切换实景房间;3、若在预设时长范围内未接收到针对第一虚拟社交场景的社交互动操作,则将社交服务页面中的第一虚拟社交场景切换为第二虚拟社交场景,第一虚拟社交场景中在预设时间范围内未存在社交互动操作,即代表目标社交对象对该第一虚拟场景中的社交互动可能不感兴趣,此时也可以自动切换实景房间,具体的实景房间切换情况可以由目标社交对象的设置决定,对此不做限制。
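上述三种房间切换情况可以用如下示意性的 TypeScript 代码草图理解(其中的房间列表接口、事件注册方式与时长参数均为示例假设,并非本申请的实际实现):

```typescript
// 示意:三种触发实景房间切换的方式(假设性实现)
interface RoomList { current(): string; next(): string; switchTo(roomId: string): void; }

function setupRoomSwitching(rooms: RoomList, displayLimitMs: number, idleLimitMs: number,
                            onSwitchGesture: (handler: () => void) => void): () => void {
  const doSwitch = () => rooms.switchTo(rooms.next());

  // 1. 显示过程中接收到房间切换操作(滑动、手势、控件、悬浮手势、语音等)即切换
  onSwitchGesture(doSwitch);

  // 2. 第一虚拟社交场景的显示时长达到预设时长时自动切换
  setTimeout(doSwitch, displayLimitMs);

  // 3. 预设时长内未接收到社交互动操作时自动切换;返回的回调用于在每次互动时重置计时
  let idleTimer = setTimeout(doSwitch, idleLimitMs);
  return () => { clearTimeout(idleTimer); idleTimer = setTimeout(doSwitch, idleLimitMs); };
}
```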
在一个实施例中,在社交服务页面中将第一实景房间切换为第二实景房间时,可以检测当前网络环境,基于当前网络环境可以确定第二实景房间的显示方式。具体的,若当前网络 环境为第一网络环境,例如顺畅网络环境,则可以直接加载第二实景房间,并在社交服务页面中将第一实景房间切换为第二实景房间;若当前网络环境为第二网络环境,例如弱网络环境,则可以显示社交服务页面相关联的快照页面,该快照页面可以是第二实景房间在社交服务页面的历史显示过程中,按照预设时间间隔对社交服务页面进行录制得到的,如此可以使目标社交对象快速了解第二实景房间中的基本主题以及社交互动情况,此时可以显示弱网络环境提示信息,以使目标社交对象能及时调整网络环境,待网络环境恢复为顺畅网络环境时,便可以加载第二实景房间;若当前无网络环境,则可以显示无网络提示信息,并在社交服务页面显示兜底图,兜底图是指预先设置用于无法加载情况下显示的默认图片。第一网络环境和第二网络环境是根据网络带宽或者网络速度来确定的,当网络带宽低于某个带宽阈值或者网络速度低于某个速度阈值时,为第二网络环境,反之则为第一网络环境。
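上述基于网络环境决定第二实景房间显示方式的逻辑,可以用如下示意性的 TypeScript 代码草图理解(其中的带宽与网速阈值均为示例假设,并非本申请的实际实现):

```typescript
// 示意:按当前网络环境决定第二实景房间的显示方式(假设性实现,阈值均为示例)
type NetworkEnv = "smooth" | "weak" | "offline";

function classifyNetwork(bandwidthMbps: number | null, speedMbps: number | null): NetworkEnv {
  if (bandwidthMbps === null && speedMbps === null) return "offline";
  const BANDWIDTH_THRESHOLD = 2;   // 低于该带宽视为弱网络环境(示例阈值)
  const SPEED_THRESHOLD = 1;       // 低于该网速视为弱网络环境(示例阈值)
  if ((bandwidthMbps ?? 0) < BANDWIDTH_THRESHOLD || (speedMbps ?? 0) < SPEED_THRESHOLD) return "weak";
  return "smooth";
}

function showSecondRoom(env: NetworkEnv, ui: {
  loadRoom(): void;            // 顺畅网络:直接加载第二实景房间
  playSnapshot(): void;        // 弱网络:播放快照页面并提示弱网络状态
  showFallbackImage(): void;   // 无网络:显示无网络提示信息与兜底图
}): void {
  if (env === "smooth") ui.loadRoom();
  else if (env === "weak") ui.playSnapshot();
  else ui.showFallbackImage();
}
```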
本实施例提供的方法,将N个实景房间根据与对象标签之间的匹配度进行排序,从而优先向目标社交对象展示更匹配的实景房间,提高了实景房间的展示有效性。
本实施例提供的方法,将N个实景房间根据关注热度进行排序,从而优先向目标社交对象展示热度高的实景房间,提高了实景房间的展示准确率。
本实施例提供的方法,在接收到房间切换操作时在第一实景房间和第二实景房间之间进行切换,提高了实景房间之间的切换效率。
本实施例提供的方法,若预设时长范围内未接收到房间切换操作,则自动进行第一实景房间和第二实景房间之间的切换,提高了实景房间之间的切换效率。
在一个实施例中,将第一实景房间切换为房间展示列表中的第二实景房间可以通过图12中所示的步骤实现:
S91:启动第二实景房间加载。
S92:判断是否为顺畅网络环境。将第一实景房间切换为第二实景房间的过程中,当启动第二实景房间加载时,可以判断当前的网络环境是否为顺畅网络环境,以根据不同的网络环境调用不同的房间加载逻辑。当S92的判断结果为是顺畅网络环境,则执行下述的S93。若不是顺畅网络环境,则根据实际网络环境的具体情况执行不同处理,在无网络环境的情况下执行下述的S99,在弱网络环境的情况下执行下述的S95。可以根据网络带宽或者网速来确定是否为顺畅网络环境,并根据网络带宽或者网速来确定是否为无网络环境或者弱网络环境,可以根据不同的阈值来进行分析判断。
S93:启动分层加载逻辑。
若当前的网络环境为顺畅网络环境,可以启动分层加载逻辑,分层加载逻辑可以包括如下两方面的内容:
(1)在第二实景房间中,可以在默认初始生成点生成目标虚拟形象,该目标虚拟形象是基于虚拟镜头的默认拍摄视角生成的,第二实景房间初始加载时,可以基于默认拍摄视角显示目标虚拟形象就近区域的虚拟社交场景的场景内容,就近区域可以是与目标虚拟形象的距离在固定范围内的区域,随着加载进程推进,再逐步扩大显示范围,即虚拟镜头的深度及视野不断扩大,如此渐进加载,直至虚拟社交场景的场景内容全局可显示。
请参见图13,是本申请实施例提供的一种分层加载的示意图,如图13中的(1)所示,为基于虚拟镜头1310的默认拍摄视角在默认初始生成点生成目标虚拟形象的示意图,如图13中的(2)所示,为随着加载进程推进,虚拟镜头1310的深度及视野不断扩大的示意图。
(2)在上述(1)所描述的随着加载进程推进、逐步扩大显示范围的同时,还可以搭配光影模式分层渲染,可以优先加载目标虚拟形象就近区域的3D场景资产,其他未显示范围区域可以以迷雾状态显示。在扩大显示范围的同时,光影模式也进行逐层缓存渲染,例如可以首先将场景内容进行白模(即未经上色的)置入,再进行无光影的纯贴图渲染,如广告横幅、虚拟形象装扮等都属于纯贴图渲染,然后按照虚拟镜头的深度及视野的不断扩大,逐步叠加全局光渲染效果。
请参见图14,是本申请实施例提供的一种光影模式渲染的示意图,如图14中的(1)所示,为光影模式渲染目标虚拟形象就近区域的示意图;如图14中的(2)所示,为随着加载进程推进,根据虚拟镜头的深度逐步叠加光影模式渲染的示意图;如图14中的(3)所示,为全局叠加光影模式渲染的示意图;如图14中的(4)所示,为白模渲染的示意图;如图14中的(5)所示,为纯贴图渲染的示意图;如图14中的(6)所示,为光照场景的示意图。
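上述分层加载逻辑可以用如下示意性的 TypeScript 代码草图理解(其中的加载接口、渲染层级与半径步长均为示例假设,并非本申请的实际实现):

```typescript
// 示意:分层加载,随加载进程逐步扩大镜头范围并逐层叠加渲染(假设性实现)
type RenderPass = "whiteModel" | "textureOnly" | "globalLighting";   // 白模 -> 纯贴图 -> 全局光

interface SceneLoader {
  loadAssetsWithinRadius(radius: number): Promise<void>;  // 优先加载虚拟形象就近区域的 3D 场景资产
  applyRenderPass(pass: RenderPass, radius: number): void;
  showFogBeyond(radius: number): void;                    // 未显示范围区域以迷雾状态显示
}

async function layeredLoad(loader: SceneLoader, maxRadius: number, step = 10): Promise<void> {
  const passes: RenderPass[] = ["whiteModel", "textureOnly", "globalLighting"];
  for (let radius = step; radius <= maxRadius; radius += step) {
    await loader.loadAssetsWithinRadius(radius);           // 镜头深度及视野不断扩大,渐进加载
    for (const pass of passes) loader.applyRenderPass(pass, radius);
    loader.showFogBeyond(radius);
  }
}
```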
S94:并行加载,直至加载成功。
在第二实景房间加载过程中,可以在虚拟镜头的深度并行处理的同时,也进行光影模式分层渲染。
S95:启动房间快照逻辑。
若当前的网络环境判断为弱网络环境,则可以启动房间快照逻辑,房间快照逻辑具体可以包括如下三方面内容:
(1)基于预设时间间隔可以在第二实景房间中的虚拟形象默认生成点录制预设时长的快照视频,并将快照视频存储至本地,以在需要的时候可以直接调取播放,且当前时间节点对应的快照视频可以覆盖前一时间节点对应的快照视频,如此可以保证快照视频的内容为第二实景房间中最新的社交互动情况。
(2)在目标社交对象切换实景房间的过程中,可以生成房间展示列表,房间展示列表中包括多个有序待显示的实景房间,例如,房间列表的展示顺序可以是基于目标社交对象的对象标签与实景房间之间的匹配度进行排序的,例如匹配度由高至低的顺序进行排序的。
(3)根据房间列表的展示顺序,可以预先下载多个实景房间对应的快照视频预存本地等待调用,例如可以下载序列前三位的实景房间对应的快照视频。
在一个实施例中,上述房间快照逻辑也可以用于房间中参与社交互动的社交对象数量超过阈值,社交对象需要排队进入房间时的加载处理。
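上述房间快照逻辑可以用如下示意性的 TypeScript 代码草图理解(其中的存储接口、录制函数与预下载数量均为示例假设,并非本申请的实际实现):

```typescript
// 示意:房间快照逻辑,定期录制并预下载排序靠前的实景房间快照视频(假设性实现)
interface SnapshotStore {
  save(roomId: string, video: Blob): void;      // 当前时间节点的快照覆盖前一时间节点的快照
  download(roomId: string): Promise<Blob>;      // 将快照视频预存本地等待调用
}

function scheduleSnapshotRecording(roomId: string, record: (durationMs: number) => Promise<Blob>,
                                   store: SnapshotStore, intervalMs: number, durationMs: number): void {
  setInterval(async () => {
    const video = await record(durationMs);     // 在虚拟形象默认生成点录制预设时长的快照视频
    store.save(roomId, video);
  }, intervalMs);
}

async function preloadTopSnapshots(sortedRoomIds: string[], store: SnapshotStore, topN = 3): Promise<void> {
  // 根据房间展示列表的排序(如与对象标签的匹配度),预先下载序列前 N 位房间的快照视频
  await Promise.all(sortedRoomIds.slice(0, topN).map(id => store.download(id)));
}
```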
S96:播放快照视频。若当前的网络环境判断为弱网络环境,可以播放基于上述房间快照逻辑生成的快照视频,使社交对象快速了解第二实景房间中的社交互动情况。
S97:判断是否已恢复顺畅网络环境。若恢复顺畅网络环境,则可以触发执行上述的S93,启动上述分层加载逻辑,加载显示第二实景房间。若未恢复顺畅网络环境,则可以执行下述的S98。
S98:循环播放本地快照视频,并提示弱网络状态直至恢复。
S99:显示无网络提示信息。
通过实施本申请实施例,一方面,可以将虚拟形象与贴近真实世界的实景房间相结合,让社交对象可以利用虚拟形象,在可互动的虚拟社交场景中具有更直观的互动反馈,增强了社交对象互动时的沉浸感。
另一方面,在切换不同实景房间时,可以通过分层加载逻辑和房间快照逻辑保证切换的顺滑度,有效提高了切换实景房间时的操作效率,进一步提升了社交对象的体验感。
请参见图15,图15是本申请实施例提供的一种社交互动装置的结构示意图。上述社交互动装置可以是运行于计算机设备中的一个计算机程序(包括程序代码),例如该社交互动装置为一个应用软件,该社交互动装置可以用于执行本申请实施例提供的方法中的相应步骤。如图15所示,该社交互动装置500可以包括:显示模块501和处理模块502。
显示模块501,用于显示社交服务页面;
处理模块502,用于响应于接收到针对第一虚拟社交场景的互动操作,在第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象。
在一个实施例中,社交服务页面对应有房间展示列表,房间展示列表中包括N个有序待显示的实景房间,N为正整数;每个实景房间分别用于显示一个虚拟社交场景;第一实景房间是房间展示列表中的任一个;其中,N个实景房间在房间展示列表中的排序方式包括以下任一种:按照N个实景房间分别对应的虚拟社交场景与目标社交对象的对象标签之间的匹配 度进行排序;或者,根据N个实景房间分别对应的虚拟社交场景的关注热度进行排序。
在一个实施例中,房间展示列表中还包括第二实景房间,第二实景房间用于展示第二虚拟社交场景;处理模块502,具体用于:在社交服务页面中将第一虚拟社交场景切换为第二虚拟社交场景。
在一个实施例中,处理模块502还用于:在所述第一虚拟社交场景的显示过程中,响应于接收到房间切换操作,则将所述社交服务页面中的所述第一虚拟社交场景切换为所述第二虚拟社交场景;或者,当所述第一虚拟社交场景的显示时长达到预设时长时,将所述社交服务页面中的所述第一虚拟社交场景切换为所述第二虚拟社交场景。
在一个实施例中,处理模块502还用于:若在所述预设时长的范围内未接收到针对所述第一虚拟社交场景的社交互动操作,将所述社交服务页面中的所述第一虚拟社交场景切换为所述第二虚拟社交场景;其中,房间切换操作包括:滑动操作、手势操作、控件操作、悬浮手势操作、语音操作中的任一项。
在一个实施例中,显示模块501还用于在所述第一虚拟社交场景的显示过程中,在所述社交服务页面中显示所述第一虚拟社交场景的社交会话流,所述社交会话流中包括在所述第一虚拟社交场景中的社交对象产生的社交会话消息。
在一个实施例中,第一实景房间中包含在第一虚拟社交场景中进行社交互动的M个虚拟形象,M为正整数;第一虚拟社交场景的社交会话流包含M个虚拟形象进行社交互动时所产生的一个或多个社交会话消息;社交会话消息中包括:文本、表情、语音、视频、图像、链接中的任一种或者多种。
在一个实施例中,第一实景房间中显示有虚拟控制控件;处理模块502,具体用于:响应于接收到对所述第一实景房间中的虚拟控制控件的触发操作,在所述第一虚拟社交场景内显示所述目标社交对象对应的目标虚拟形象。
在一个实施例中,第一实景房间中包含在第一虚拟社交场景中进行社交互动的M个虚拟形象,M为正整数;处理模块502,具体用于:响应于接收到对所述M个虚拟形象中的参考虚拟形象发起的互动操作,在所述第一虚拟社交场景内显示所述目标社交对象对应的目标虚拟形象。
在一个实施例中,第一虚拟场景中包含虚拟道具,处理模块502,具体用于:响应于接收到对虚拟道具的触发操作,在第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象;其中,目标社交对象对虚拟道具的触发操作包括:点击触发操作、语音触发操作、手势触发操作中的任一种。
在一个实施例中,处理模块502,还用于当接收到针对所述第一虚拟社交场景中社交会话消息的发送操作时,在所述第一虚拟社交场景内显示所述目标社交对象对应的目标虚拟形象。
在一个实施例中,处理模块502,还用于控制目标虚拟形象在第一虚拟社交场景中进行社交互动。
在一个实施例中,处理模块502,还用于当接收到目标社交对象在第一虚拟社交场景中发送的社交会话消息时,在第一实景房间中的目标虚拟形象的周侧预设位置显示社交会话消息。
在一个实施例中,处理模块502,还用于当目标社交对象发送的社交会话消息包含目标交互内容时,控制目标虚拟形象执行目标交互内容对应的一组对象动作。
在一个实施例中,处理模块502,还用于当目标虚拟形象在第一虚拟社交场景中选择动作选项时,控制目标虚拟形象执行被选择的动作选项对应的一组对象动作。
在一个实施例中,第一虚拟场景中包含虚拟道具,且虚拟道具具备互动属性;处理模块502,还用于控制目标虚拟形象触发虚拟道具,以触发与虚拟道具的互动属性相匹配的互动内容。
在一个实施例中,虚拟道具包括第一类虚拟道具,第一类虚拟道具的互动属性用于指示第一虚拟社交场景的场景形态的变化信息;处理模块502,具体用于:按照第一类虚拟道具的互动属性更新第一虚拟社交场景的场景形态。
在一个实施例中,虚拟道具包括第二类虚拟道具,第二类虚拟道具的互动属性用于指示第一虚拟社交场景中的一个或多个虚拟形象应当执行的对象动作;处理模块502,具体用于:按照第二类虚拟道具的互动属性,控制第一虚拟社交场景中的一个或多个虚拟形象执行相应的对象动作。
在一个实施例中,虚拟道具包括第三类虚拟道具,第三类虚拟道具的互动属性用于指示针对被触发的虚拟道具支持的反馈操作;处理模块502,具体用于:按照第三类虚拟道具的互动属性,接收对被触发的第三类虚拟道具执行的反馈操作。
在一个实施例中,第一虚拟社交场景中包含社交活动区域,或者,第一虚拟社交场景中包含社交活动的入口;处理模块502,具体用于:响应于所述第一虚拟社交场景中包含所述社交活动区域,当所述目标虚拟形象进入至所述社交活动区域中时,显示社交活动的活动页面,使所述目标虚拟形象通过所述活动页面加入至所述社交活动中;或者,响应于所述第一虚拟社交场景中包含社交活动的入口,当所述目标虚拟形象触发所述社交活动的入口时,显示所述社交活动的活动页面,使所述目标虚拟形象通过所述活动页面加入至所述社交活动中。
在一个实施例中,第一实景房间为主题定制化房间;处理模块502,具体用于:输出与定制化主题相关的选项;响应于选项的触发,将目标虚拟形象的显示形态更新为与被触发的选项相匹配的形态。
在一个实施例中,处理模块502,还用于:当目标虚拟形象在第一实景房间中的虚拟视角发生变化时,更新第一虚拟社交场景的场景内容;其中,虚拟视角发生变化包括:虚拟视角的焦距发生变化、虚拟视角的角度发生变化、目标虚拟形象的位置发生变化中的任意一种。
在一个实施例中,处理模块502,还用于:响应于接收到针对第一虚拟社交场景的互动操作,获取目标虚拟形象的显示属性;若目标虚拟形象的显示属性为隐藏属性,则在第一实景房间内将目标虚拟形象显示为透明状态;若目标虚拟形象的显示属性为外显属性,则在第一实景房间内将目标虚拟形象显示为非透明状态。
在一个实施例中,处理模块502,还用于:响应于对社交服务页面的显示指令,检测网络环境;若网络环境为第一网络环境,则显示社交服务页面;若网络环境为第二网络环境,则显示社交服务页面相关联的快照页面,待网络环境变更为第一网络环境时,将快照页面替换为社交服务页面;若无网络环境,则显示无网络提示信息;其中,快照页面是在社交服务页面的历史显示过程中,按照预设时间间隔对社交服务页面进行录制得到的。
请参见图16,图16是本申请实施例提供的一种计算机设备的结构示意图。该计算机设备可以包括:网络接口601、存储器602和处理器603,网络接口601、存储器602和处理器603通过一条或多条通信总线连接,通信总线用于实现这些组件之间的连接通信。网络接口601可以包括标准的有线接口、无线接口(如WIFI接口)。存储器602可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器602也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),固态硬盘(solid-state drive,SSD)等;存储器602还可以包括上述种类的存储器的组合。处理器603可以是中央处理器(central processing unit,CPU)。处理器603还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)等。上述PLD可以是现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)等。
存储器602还用于存储程序指令,处理器603还可调用该程序指令,以实现本申请中相关方法及步骤。
此外,本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时,实现前述实施例提供的方法。
本申请实施例还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行前述实施例提供的方法。
本申请实施例方法中的步骤可以根据实际需要进行顺序调整、合并和删减。
本申请实施例装置中的单元可以根据实际需要进行合并、划分和删减。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,上述存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。

Claims (26)

  1. 一种社交互动方法,由终端执行,所述方法包括:
    显示社交服务页面,所述社交服务页面中显示有第一实景房间,所述第一实景房间对应第一虚拟社交场景;
    响应于接收到针对所述第一虚拟社交场景的互动操作,在所述第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象,所述目标社交对象是所述终端主控的对象。
  2. 根据权利要求1所述的方法,其中,所述社交服务页面中包括房间展示列表,所述房间展示列表中包括N个有序待显示的实景房间,N为正整数;每个实景房间分别用于显示一个虚拟社交场景;所述第一实景房间是所述房间展示列表中的任一个;
    其中,所述N个实景房间在所述房间展示列表中的排序方式包括以下任一种:
    按照所述N个实景房间分别对应的虚拟社交场景与目标社交对象的对象标签之间的匹配度进行排序;或者,根据所述N个实景房间分别对应的虚拟社交场景的关注热度进行排序。
  3. 根据权利要求2所述的方法,其中,所述房间展示列表中还包括第二实景房间,所述第二实景房间用于展示第二虚拟社交场景;所述方法还包括:
    在所述第一虚拟社交场景的显示过程中,响应于接收到房间切换操作,则将所述社交服务页面中的所述第一虚拟社交场景切换为所述第二虚拟社交场景;或者,
    当所述第一虚拟社交场景的显示时长达到预设时长时,将所述社交服务页面中的所述第一虚拟社交场景切换为所述第二虚拟社交场景。
  4. 根据权利要求3所述的方法,其中,所述当所述第一虚拟社交场景的显示时长达到预设时长时,将所述社交服务页面中的所述第一虚拟社交场景切换为所述第二虚拟社交场景,包括:
    若在所述预设时长的范围内未接收到针对所述第一虚拟社交场景的社交互动操作,将所述社交服务页面中的所述第一虚拟社交场景切换为所述第二虚拟社交场景。
  5. 根据权利要求1至4任一所述的方法,其中,所述方法还包括:
    在所述第一虚拟社交场景的显示过程中,在所述社交服务页面中显示所述第一虚拟社交场景的社交会话流,所述社交会话流中包括在所述第一虚拟社交场景中的社交对象产生的社交会话消息。
  6. 根据权利要求5所述的方法,其中,所述第一实景房间中包含在所述第一虚拟社交场景中进行社交互动的M个虚拟形象,M为正整数;所述第一虚拟社交场景的社交会话流包含所述M个虚拟形象进行社交互动时所产生的一个或多个社交会话消息;
    所述社交会话消息中包括:文本、表情、语音、视频、图像、链接中的任一种或者多种。
  7. 根据权利要求1至4任一所述的方法,其中,所述响应于接收到针对所述第一虚拟社交场景的互动操作,在所述第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象,包括:
    响应于接收到对所述第一实景房间中的虚拟控制控件的触发操作,在所述第一虚拟社交场景内显示所述目标社交对象对应的目标虚拟形象。
  8. 根据权利要求1至4任一所述的方法,其中,所述第一实景房间中包含在所述第一虚拟社交场景中进行社交互动的M个虚拟形象,M为正整数;所述响应于接收到针对所述第一虚拟社交场景的互动操作,在所述第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象,包括:
    响应于接收到对所述M个虚拟形象中的参考虚拟形象发起的互动操作,在所述第一虚拟社交场景内显示所述目标社交对象对应的目标虚拟形象。
  9. 根据权利要求1至4任一所述的方法,其中,所述第一虚拟场景中包含虚拟道具;所述响应于接收到针对所述第一虚拟社交场景的互动操作,在所述第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象,包括:
    响应于接收到对所述虚拟道具的触发操作,在所述第一虚拟社交场景内显示所述目标社交对象对应的目标虚拟形象;
    其中,所述目标社交对象对所述虚拟道具的触发操作包括:点击触发操作、语音触发操作、手势触发操作中的任一种。
  10. 根据权利要求1至4任一所述的方法,其中,所述响应于接收到针对所述第一虚拟社交场景的互动操作,在所述第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象,包括:
    当接收到针对所述第一虚拟社交场景中社交会话消息的发送操作时,在所述第一虚拟社交场景内显示所述目标社交对象对应的目标虚拟形象。
  11. 根据权利要求1至4任一所述的方法,其中,所述方法还包括:
    控制所述目标虚拟形象在所述第一虚拟社交场景中进行社交互动。
  12. 根据权利要求11所述的方法,其中,所述控制所述目标虚拟形象在所述第一虚拟社交场景中进行社交互动,包括:
    当接收到目标社交对象在所述第一虚拟社交场景中发送的社交会话消息时,在所述第一实景房间中的目标虚拟形象的周侧预设位置显示所述社交会话消息。
  13. 根据权利要求11所述的方法,其中,所述控制所述目标虚拟形象在所述第一虚拟社交场景中进行社交互动,包括:
    当所述目标社交对象发送的社交会话消息包含目标交互内容时,控制所述目标虚拟形象执行所述目标交互内容对应的一组对象动作。
  14. 根据权利要求11所述的方法,其中,所述控制所述目标虚拟形象在所述第一虚拟社交场景中进行社交互动,包括:
    当所述目标虚拟形象在所述第一虚拟社交场景中选择动作选项时,控制所述目标虚拟形象执行被选择的动作选项对应的一组对象动作。
  15. 根据权利要求11所述的方法,其中,所述第一虚拟场景中包含虚拟道具,且所述虚拟道具具备互动属性;所述控制所述目标虚拟形象在所述第一虚拟社交场景中进行社交互动,包括:
    控制所述目标虚拟形象触发所述虚拟道具,触发与所述虚拟道具的互动属性相匹配的互动内容。
  16. 根据权利要求15所述的方法,其中,所述虚拟道具包括第一类虚拟道具,所述第一类虚拟道具的互动属性用于指示所述第一虚拟社交场景的场景形态的变化信息;
    所述触发与所述虚拟道具的互动属性相匹配的互动内容,包括:
    按照所述第一类虚拟道具的互动属性,更新所述第一虚拟社交场景的场景形态。
  17. 根据权利要求15所述的方法,其中,所述虚拟道具包括第二类虚拟道具,所述第二类虚拟道具的互动属性用于指示所述第一虚拟社交场景中的一个或多个虚拟形象执行对象动作;
    所述触发与所述虚拟道具的互动属性相匹配的互动内容,包括:
    按照所述第二类虚拟道具的互动属性,控制所述第一虚拟社交场景中的一个或多个虚拟形象执行相应的对象动作。
  18. 根据权利要求15所述的方法,其中,所述虚拟道具包括第三类虚拟道具,所述第三类虚拟道具的互动属性用于指示针对被触发的虚拟道具支持的反馈操作;
    所述触发与所述虚拟道具的互动属性相匹配的互动内容,包括:
    按照所述第三类虚拟道具的互动属性,接收对所述被触发的第三类虚拟道具执行的反馈操作。
  19. 根据权利要求1至4任一所述的方法,其中,第一虚拟社交场景中包含社交活动区域,或者,所述第一虚拟社交场景中包含社交活动的入口;所述方法还包括:
    响应于所述第一虚拟社交场景中包含所述社交活动区域,当所述目标虚拟形象进入至所述社交活动区域中时,显示社交活动的活动页面,使所述目标虚拟形象通过所述活动页面加入至所述社交活动中;或者,
    响应于所述第一虚拟社交场景中包含社交活动的入口,当所述目标虚拟形象触发所述社交活动的入口时,显示所述社交活动的活动页面,使所述目标虚拟形象通过所述活动页面加入至所述社交活动中。
  20. 根据权利要求1至4任一所述的方法,其中,所述方法还包括:
    当所述目标虚拟形象在所述第一实景房间中的虚拟视角发生变化时,更新所述第一虚拟社交场景的场景内容;
    其中,所述虚拟视角发生变化包括:所述虚拟视角的焦距发生变化、所述虚拟视角的角度发生变化、所述目标虚拟形象的位置发生变化中的任意一种。
  21. 根据权利要求1至4任一所述的方法,其中,响应于接收到针对所述第一虚拟社交场景的互动操作,在所述第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象,包括:
    响应于接收到针对所述第一虚拟社交场景的互动操作,获取所述目标虚拟形象的显示属性;
    若所述目标虚拟形象的显示属性为隐藏属性,在所述第一实景房间内将所述目标虚拟形象显示为透明状态;
    若所述目标虚拟形象的显示属性为外显属性,在所述第一实景房间内将所述目标虚拟形象显示为非透明状态。
  22. 根据权利要求1至4任一所述的方法,其中,所述显示所述社交服务页面之前,所述方法还包括:
    响应于对所述社交服务页面的显示指令,检测网络环境;
    若网络环境为第一网络环境,则显示所述社交服务页面;
    若网络环境为第二网络环境,则显示所述社交服务页面相关联的快照页面,待所述网络环境变更为第一网络环境时,将所述快照页面替换为所述社交服务页面;
    若无网络环境,则显示无网络提示信息;
    其中,所述快照页面是在所述社交服务页面的历史显示过程中,按照预设时间间隔对所述社交服务页面进行录制得到的。
  23. 一种社交互动装置,包括:
    显示模块,用于显示社交服务页面,所述社交服务页面中显示有第一实景房间,所述第一实景房间对应第一虚拟社交场景;
    处理模块,用于响应于接收到针对所述第一虚拟社交场景的互动操作,在所述第一虚拟社交场景内显示目标社交对象对应的目标虚拟形象,所述目标社交对象是显示所述社交服务页面的终端主控的对象。
  24. 一种计算机设备,包括存储器、处理器以及网络接口,所述处理器与所述存储器、所述网络接口相连,其中,所述网络接口用于提供网络通信功能,所述存储器用于存储程序代码,所述处理器用于调用所述程序代码,执行权利要求1-22任一项所述的方法。
  25. 一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,该计算机程序被处理器执行时,实现权利要求1-22任一项所述的方法。
  26. 一种计算机程序产品,所述计算机程序产品包括计算机程序或计算机指令,所述计算机程序或计算机指令被处理器执行时实现权利要求1-22中任一项所述的方法。
PCT/CN2022/109448 2022-01-29 2022-08-01 社交互动方法、装置、设备及存储介质、程序产品 WO2023142415A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/324,593 US20230298290A1 (en) 2022-01-29 2023-05-26 Social interaction method and apparatus, device, storage medium, and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210112298.XA CN116561439A (zh) 2022-01-29 2022-01-29 一种社交互动方法、装置、设备及存储介质、程序产品
CN202210112298.X 2022-01-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/324,593 Continuation US20230298290A1 (en) 2022-01-29 2023-05-26 Social interaction method and apparatus, device, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2023142415A1 (zh)

Family

ID=87470300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/109448 WO2023142415A1 (zh) 2022-01-29 2022-08-01 社交互动方法、装置、设备及存储介质、程序产品

Country Status (3)

Country Link
US (1) US20230298290A1 (zh)
CN (1) CN116561439A (zh)
WO (1) WO2023142415A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130031475A1 (en) * 2010-10-18 2013-01-31 Scene 53 Inc. Social network based virtual assembly places
CN110020881A (zh) * 2018-01-05 2019-07-16 金德奎 一种基于游戏的社交方法、广告及信息传播方法
CN109445577A (zh) * 2018-10-11 2019-03-08 腾讯科技(深圳)有限公司 虚拟房间切换方法、装置、电子设备及存储介质
CN112073742A (zh) * 2020-09-01 2020-12-11 腾讯科技(深圳)有限公司 基于直播间的互动方法、装置、存储介质及计算机设备
CN113965812A (zh) * 2021-12-21 2022-01-21 广州虎牙信息科技有限公司 直播方法、系统及直播设备

Also Published As

Publication number Publication date
US20230298290A1 (en) 2023-09-21
CN116561439A (zh) 2023-08-08

Similar Documents

Publication Publication Date Title
US11595339B2 (en) System and method of embedding rich media into text messages
US10511833B2 (en) Controls and interfaces for user interactions in virtual spaces
KR102096799B1 (ko) 채팅 대화들에서 임베디드 애플리케이션들과 함께 사용하기 위한 제안된 아이템들
US20180096507A1 (en) Controls and Interfaces for User Interactions in Virtual Spaces
EP3306444A1 (en) Controls and interfaces for user interactions in virtual spaces using gaze tracking
US20180095636A1 (en) Controls and Interfaces for User Interactions in Virtual Spaces
US20180367483A1 (en) Embedded programs and interfaces for chat conversations
US7707520B2 (en) Method and apparatus for providing flash-based avatars
US20110244954A1 (en) Online social media game
US20110225498A1 (en) Personalized avatars in a virtual social venue
US20220291808A1 (en) Integrating Artificial Reality and Other Computing Devices
WO2022001552A1 (zh) 消息发送方法、消息接收方法、装置、设备及介质
WO2023142425A1 (zh) 社交互动方法、装置、设备、存储介质及程序产品
WO2023142415A1 (zh) 社交互动方法、装置、设备及存储介质、程序产品
WO2024037001A1 (zh) 互动数据处理方法、装置、电子设备、计算机可读存储介质及计算机程序产品
WO2024041270A1 (zh) 虚拟场景中的交互方法、装置、设备及存储介质
WO2024060888A1 (zh) 虚拟场景的交互处理方法、装置、电子设备、计算机可读存储介质及计算机程序产品
US20230344953A1 (en) Camera settings and effects shortcuts
WO2023211660A1 (en) Camera settings and effects shortcuts
CN116567325A (zh) 对象互动方法、装置、终端和存储介质
Hershman Touch-Sensitivity and Other

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923220

Country of ref document: EP

Kind code of ref document: A1