CN113289348A - Object processing method, client, server, electronic device and storage medium - Google Patents

Object processing method, client, server, electronic device and storage medium

Info

Publication number
CN113289348A
Authority
CN
China
Prior art keywords
display object
data
sub
display
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010351778.2A
Other languages
Chinese (zh)
Inventor
余隽彦
付豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lingxi Interactive Entertainment Holding Co ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010351778.2A
Publication of CN113289348A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/85 Providing additional services to players
    • A63F 13/86 Watching games played by other players
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F 2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F 2300/57 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of game services offered to the player
    • A63F 2300/577 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of game services offered to the player for watching a game played by other players

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the disclosure provide an object processing method, a client, a server, an electronic device and a storage medium. The object processing method includes: acquiring display object data, where the display object data includes sub-display object data of sub-display objects obtained by segmenting a display object; and displaying the display object according to the display object data. In this technical scheme, the display object is segmented into sub-display objects, and the sub-display objects are displayed according to the sub-display object data, thereby realizing the display of the display object.

Description

Object processing method, client, server, electronic device and storage medium
Technical Field
Embodiments of the disclosure relate to the technical field of data processing, and in particular to an object processing method, a client, a server, an electronic device and a storage medium.
Background
With the development of science, technology and networking, multi-party interactive applications and application objects based on network data transmission, such as large multiplayer interactive games, are increasingly widely used. In a multi-party interactive application, every participant expects to see the results of their own operations and, at the same time, the results of other participants' operations, so as to improve interaction efficiency, real-time performance and accuracy; this places higher demands on the real-time display and updating of operated objects.
Disclosure of Invention
The embodiment of the disclosure provides an object processing method, a client, a server, an electronic device and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an object processing method.
The object processing method comprises the following steps:
acquiring display object data, wherein the display object data comprises sub-display object data of a sub-display object obtained by dividing the display object;
and displaying the display object according to the display object data.
With reference to the first aspect, in a first implementation manner of the first aspect, the object processing method further includes:
and locally dividing the display object at the client to obtain the sub-display object data of the sub-display object.
With reference to the first aspect, in a second implementation manner of the first aspect, the method further includes:
and acquiring the sub-display object data of the sub-display object from the server.
With reference to the first aspect or any one of the first and second implementation manners of the first aspect, in a third implementation manner of the first aspect, the segmenting the display object includes:
and segmenting the display object according to a preset mode.
With reference to the first aspect, or any one of the first implementation manner and the second implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the segmenting the display object includes:
and determining the position related to the interaction of the display object according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position related to the interaction of the display object.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the method further includes:
and determining the sub-display object data of the sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object.
With reference to the third implementation manner or the fourth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the interaction data includes at least one of the following: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength, and attributes of the interactive application object.
With reference to any one of the first aspect to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect, the method further includes:
acquiring display object update data, wherein the display object update data includes updated sub-display object data of a designated sub-display object and/or additional display object data of an additional display object, and the display object update data is used for rendering or displaying the updated display object.
With reference to the seventh implementation manner of the first aspect, in an eighth implementation manner of the first aspect, in the updated display object, at least one of the following items of the designated sub-display object is changed compared with the display object: color, shape, transparency, texture, pattern, material.
With reference to the seventh implementation manner or the eighth implementation manner of the first aspect, in a ninth implementation manner of the first aspect, the additional display object data is used to determine at least one of the following items of the additional display object: position, color, shape, transparency, texture, pattern, material.
With reference to any one of the seventh implementation manner to the ninth implementation manner of the first aspect, in a tenth implementation manner of the first aspect, the method further includes:
determining, from interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
With reference to the tenth implementation manner of the first aspect, in an eleventh implementation manner of the first aspect, the interaction data includes at least one of the following: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength, and attributes of the interactive application object.
With reference to the tenth implementation manner of the first aspect, in a twelfth implementation manner of the first aspect, the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following items to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data, includes:
determining, according to interaction data of interaction application objects from one or more clients with the display object or with corresponding interaction bodies of the display object, at least one or more of the following: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
With reference to the twelfth implementation manner of the first aspect, in a thirteenth implementation manner of the first aspect, the method further includes:
and a plurality of clients synchronously receive the display object updating data.
With reference to the tenth implementation manner of the first aspect, in a fourteenth implementation manner of the first aspect, the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following items to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data, includes:
determining, according to interaction data, sent by a server, of interaction application objects from one or more clients with the display object or with a corresponding interaction body of the display object, at least one or more of the following: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
With reference to any one of the seventh implementation manner to the fourteenth implementation manner of the first aspect, in a fifteenth implementation manner of the first aspect:
the display object update data includes whole rendering data of the updated display object or whole display data of the updated display object; or
the display object update data includes incremental rendering data or incremental display data of the updated display object compared with the display object.
With reference to any one of the seventh implementation manner to the fourteenth implementation manner of the first aspect, in a sixteenth implementation manner of the first aspect, the acquiring display object update data includes:
acquiring the display object update data from a server, or acquiring the display object update data locally at the client.
With reference to any one of the seventh implementation manner to the fourteenth implementation manner of the first aspect, in a seventeenth implementation manner of the first aspect, the method further includes: displaying the updated display object according to the display object update data.
With reference to the first aspect, in an eighteenth implementation manner of the first aspect, the acquiring display object data includes:
and acquiring display object data from a server, wherein the display object data comprises display data used for displaying the rendered display object.
With reference to the first aspect, or any one of the first implementation manner and the second implementation manner of the first aspect, in a nineteenth implementation manner of the first aspect, the acquiring display object data further includes:
generating a rendering mesh for rendering a display object according to a result of the segmentation of the display object,
wherein the displaying the display object according to the display object data includes:
and rendering the display object according to the rendering grid.
With reference to the nineteenth implementation manner of the first aspect, in a twentieth implementation manner of the first aspect, the rendering a display object according to a rendering grid includes:
in the case that the display object contains no sub-display object that is not displayed, rendering the display object based on the vertices of the surface layer of the display object in the rendering mesh.
With reference to the nineteenth implementation manner of the first aspect, in a twenty-first implementation manner of the first aspect, the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further includes:
determining a sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the interaction body corresponding to the display object and the segmentation result of the display object;
and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
With reference to the twenty-first implementation manner of the first aspect, in a twenty-second implementation manner of the first aspect, the modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh includes:
modifying the index information of the mesh of the sub-display object involved in the interaction in the rendering mesh, such that the inner wall of a gap or hole through the three-dimensional display object is rendered at the position of the sub-display object that is not displayed.
With reference to the twenty-first implementation manner of the first aspect, in a twenty-third implementation manner of the first aspect, the display object is a three-dimensional display object with a multilayer structure, and the corresponding interaction bodies are arranged between layers of the three-dimensional display object.
With reference to the twenty-first implementation manner of the first aspect, in a twenty-fourth implementation manner of the first aspect, the rendering a display object according to a rendering grid includes:
and performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids.
With reference to the twenty-fourth implementation manner of the first aspect, in a twenty-fifth implementation manner of the first aspect, the performing a map rendering on a grid of a sub-display object involved in an interaction in the rendering grid includes:
and performing map rendering, using at least one map, on the boundary of the grid of the sub-display object involved in the interaction in the rendering grid and on the area adjacent to the boundary.
With reference to the twenty-fourth implementation manner of the first aspect, in a twenty-sixth implementation manner of the first aspect, the performing a map rendering on a grid of a sub-display object involved in an interaction in the rendering grid includes:
clearing current map template data of the rendering grid;
rendering an object template of the current display object;
and according to the object template, performing mapping rendering on the grids of the sub-display objects related to the interaction in the rendering grids.
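For illustration of the template-then-map ordering in the twenty-fifth and twenty-sixth implementation manners above, the following C++ sketch uses a plain 2D mask as a stand-in for a GPU template (stencil) buffer. All names, sizes and the mask-based approach are hypothetical; the sketch only mirrors the three steps described, not an actual engine implementation.

```cpp
// Illustrative sketch only: a CPU stand-in for "clear template, render object
// template, then map-render". The 2D mask plays the role of a template buffer.
#include <array>
#include <cstdio>

constexpr int W = 8, H = 8;
using Mask = std::array<std::array<bool, W>, H>;

void clearTemplate(Mask& m) {                        // step 1: clear current template data
    for (auto& row : m) row.fill(false);
}

void renderObjectTemplate(Mask& m, int x0, int y0, int x1, int y1) {
    for (int y = y0; y <= y1; ++y)                   // step 2: mark where the current
        for (int x = x0; x <= x1; ++x)               // display object covers the screen
            m[y][x] = true;
}

void mapRenderWithTemplate(const Mask& m) {          // step 3: draw the damage map only
    for (int y = 0; y < H; ++y) {                    // where the template allows it
        for (int x = 0; x < W; ++x)
            std::putchar(m[y][x] ? '#' : '.');
        std::putchar('\n');
    }
}

int main() {
    Mask mask{};
    clearTemplate(mask);
    renderObjectTemplate(mask, 2, 2, 5, 5);          // hypothetical wall footprint
    mapRenderWithTemplate(mask);                     // decal confined to the wall area
}
```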
With reference to the nineteenth implementation manner of the first aspect, in a twenty-seventh implementation manner of the first aspect, the sub-display object has a sub-display object collision volume for realizing an interaction effect of the sub-display object with an interaction application object,
wherein the generating a rendering mesh for rendering a display object according to the segmentation result of the display object further comprises:
modifying index information of the mesh of the sub-display object involved in the interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set the volume of the sub-display object collision volume to 0, according to interaction data of an interaction application object and the sub-display object collision volume.
With reference to the nineteenth implementation manner of the first aspect, in a twenty-eighth implementation manner of the first aspect, the rendering the display object according to the rendering grid includes at least one of:
applying a visual basis characteristic to the rendering mesh;
applying a texture characteristic to the rendering mesh;
applying a lighting characteristic to the rendering mesh.
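As a concrete illustration of the rendering-mesh handling described in the nineteenth to twenty-second and twenty-seventh implementation manners above, the following C++ sketch keeps one index range per sub-display object and rebuilds the index buffer so that a sub-display object involved in an interaction is no longer drawn. All structure and function names are hypothetical, and the inner-wall geometry of the resulting hole is only indicated by a comment.

```cpp
// Minimal sketch, assuming each sub-display object (e.g. a wall block) owns a
// contiguous range of triangles in the shared rendering mesh. Names are hypothetical.
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

struct SubObjectRange { uint32_t firstIndex; uint32_t indexCount; };  // into the index buffer

struct RenderMesh {
    std::vector<float>    vertices;       // x,y,z triples of the whole display object
    std::vector<uint32_t> indices;        // original triangle index list
    std::vector<SubObjectRange> blocks;   // one range per sub-display object
};

// Rebuild the effective index list, skipping blocks that are no longer displayed.
std::vector<uint32_t> buildVisibleIndices(const RenderMesh& mesh,
                                          const std::unordered_set<size_t>& hiddenBlocks) {
    std::vector<uint32_t> visible;
    for (size_t b = 0; b < mesh.blocks.size(); ++b) {
        if (hiddenBlocks.count(b)) continue;          // block destroyed: leave a hole
        const auto& r = mesh.blocks[b];
        visible.insert(visible.end(),
                       mesh.indices.begin() + r.firstIndex,
                       mesh.indices.begin() + r.firstIndex + r.indexCount);
    }
    // In a full implementation, extra triangles for the inner wall of the hole
    // would be appended here.
    return visible;
}

int main() {
    RenderMesh wall;
    wall.indices.resize(36);                          // toy numbers: 2 blocks x 6 triangles x 3 indices
    wall.blocks = {{0, 18}, {18, 18}};
    auto idx = buildVisibleIndices(wall, {1});        // hide block 1
    std::cout << "drawn indices: " << idx.size() << "\n";   // prints 18
}
```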
In a second aspect, an embodiment of the present disclosure provides an object processing method.
The object processing method comprises the following steps:
acquiring display object data, wherein the display object data comprises sub-display object data of a sub-display object obtained by dividing the display object;
and sending the display object data to a client for displaying the display object.
With reference to the second aspect, in a first implementation manner of the second aspect, the method further includes:
and locally dividing the display object at the server to obtain the sub-display object data of the sub-display object.
With reference to the second aspect, in a second implementation manner of the second aspect, the method further includes:
and acquiring the sub-display object data of the sub-display object from the client.
With reference to the second aspect, or any one of the first implementation manner and the second implementation manner of the second aspect, in a third implementation manner of the second aspect, the segmenting the display object includes:
and segmenting the display object according to a preset mode.
With reference to the second aspect, or any one of the first implementation manner and the second implementation manner of the second aspect, in a fourth implementation manner of the second aspect, the segmenting the display object includes:
and determining the position related to the interaction of the display object according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position related to the interaction of the display object.
With reference to the fourth implementation manner of the second aspect, in a fifth implementation manner of the second aspect, the method further includes:
and determining the sub-display object data of the sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object.
With reference to any one of the third implementation manner to the fifth implementation manner of the second aspect, in a sixth implementation manner of the second aspect, the interaction data includes at least one of the following: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength, and attributes of the interactive application object.
With reference to the second aspect or any one of the first implementation manner to the sixth implementation manner of the second aspect, in a seventh implementation manner of the second aspect, the present disclosure further includes:
acquiring display object update data, wherein the display object update data includes updated sub-display object data of a designated sub-display object and/or additional display object data of an additional display object, and the display object update data is used for rendering or displaying the updated display object.
With reference to the seventh implementation manner of the second aspect, in an eighth implementation manner of the second aspect, in the updated display object, at least one of the following items of the designated sub-display object is changed compared with the display object: color, shape, transparency, texture, pattern, material.
With reference to the seventh implementation manner or the eighth implementation manner of the second aspect, in a ninth implementation manner of the second aspect, the additional display object data is used to determine at least one of the following items of the additional display object: position, color, shape, transparency, texture, pattern, material.
With reference to any one of the seventh implementation manner of the second aspect to the ninth implementation manner of the second aspect, in a tenth implementation manner of the second aspect, the method further includes:
determining, from interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
With reference to the tenth implementation manner of the second aspect, in an eleventh implementation manner of the second aspect, the interaction data includes at least one of the following: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength, and attributes of the interactive application object.
With reference to the tenth implementation manner of the second aspect, in a twelfth implementation manner of the second aspect, the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following items to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data, includes:
determining, according to interaction data of interaction application objects from one or more clients with the display object or with corresponding interaction bodies of the display object, at least one or more of the following: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
With reference to the twelfth implementation manner of the second aspect, in a thirteenth implementation manner of the second aspect, the method further includes:
and synchronously sending the display object updating data to the plurality of clients.
With reference to any one of the seventh implementation manner to the thirteenth implementation manner of the second aspect, in a fourteenth implementation manner of the second aspect:
the display object update data includes whole rendering data of the updated display object or whole display data of the updated display object; or
the display object update data includes incremental rendering data or incremental display data of the updated display object compared with the display object.
With reference to any one of the seventh implementation manner to the thirteenth implementation manner of the second aspect, in a fifteenth implementation manner of the second aspect:
the acquiring display object update data includes: acquiring the display object update data locally at the server, or acquiring the display object update data from a client.
With reference to the second aspect, or any one of the first implementation manner and the second implementation manner of the second aspect, in a sixteenth implementation manner of the second aspect, the sending the display object data to a client for displaying the display object includes:
transmitting only a portion of the display object data to the client that is different from display object data previously transmitted to the client.
With reference to the second aspect, in a seventeenth implementation manner of the second aspect, the sending the display object data to a client for displaying the display object includes:
sending display object data to a client for displaying the display object, wherein the display object data comprises display data for displaying the rendered display object.
With reference to the second aspect, or any one of the first implementation manner and the second implementation manner of the second aspect, in an eighteenth implementation manner of the second aspect, the acquiring display object data further includes:
generating a rendering grid for rendering the display object according to the segmentation result of the display object;
and rendering the display object according to the rendering grid.
With reference to the eighteenth implementation manner of the second aspect, in a nineteenth implementation manner of the second aspect, the rendering the display object according to the rendering grid includes:
in the case that the display object contains no sub-display object that is not displayed, rendering the display object based on the vertices of the surface layer of the display object in the rendering mesh.
With reference to the eighteenth implementation manner of the second aspect, in a twentieth implementation manner of the second aspect, the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further includes:
determining a sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the interaction body corresponding to the display object and the segmentation result of the display object;
and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
With reference to the twentieth implementation manner of the second aspect, in a twenty-first implementation manner of the second aspect, the modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh includes:
modifying the index information of the mesh of the sub-display object involved in the interaction in the rendering mesh, such that the inner wall of a gap or hole through the three-dimensional display object is rendered at the position of the sub-display object that is not displayed.
With reference to the twentieth implementation manner of the second aspect, in a twenty-second implementation manner of the second aspect, the display object is a three-dimensional display object with a multilayer structure, and the corresponding interaction bodies are arranged between layers of the three-dimensional display object.
With reference to the twentieth implementation manner of the second aspect, in a twenty-third implementation manner of the second aspect, the rendering the display object according to the rendering grid includes:
and performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids.
With reference to the twenty-third implementation manner of the second aspect, in a twenty-fourth implementation manner of the second aspect, the performing a map rendering on a grid of a sub-display object involved in an interaction in the rendering grid includes:
and performing map rendering, using at least one map, on the boundary of the grid of the sub-display object involved in the interaction in the rendering grid and on the area adjacent to the boundary.
With reference to the twenty-third implementation manner of the second aspect, in a twenty-fifth implementation manner of the second aspect, the performing a map rendering on a grid of a sub-display object involved in an interaction in the rendering grid includes:
clearing current map template data of the rendering grid;
rendering an object template of the current display object;
and according to the object template, performing mapping rendering on the grids of the sub-display objects related to the interaction in the rendering grids.
With reference to the eighteenth implementation manner of the second aspect, in a twenty-sixth implementation manner of the second aspect, the sub-display object has a sub-display object collision volume for realizing an interaction effect of the sub-display object with an interaction application object,
wherein the generating a rendering mesh for rendering a display object according to the segmentation result of the display object further comprises:
modifying index information of the mesh of the sub-display object involved in the interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set the volume of the sub-display object collision volume to 0, according to interaction data of an interaction application object and the sub-display object collision volume.
With reference to the eighteenth implementation manner of the second aspect, in a twenty-seventh implementation manner of the second aspect, the rendering the display object according to the rendering grid includes at least one of:
applying a visual basis characteristic to the rendering mesh;
applying a texture characteristic to the rendering mesh;
applying a lighting characteristic to the rendering mesh.
In a third aspect, an embodiment of the present disclosure provides a virtual object processing method.
The virtual object processing method comprises the following steps:
acquiring virtual object data, wherein the virtual object data comprises sub-virtual object data of sub-virtual objects obtained by dividing virtual objects;
and displaying the virtual object according to the virtual object data or sending the virtual object data.
In a fourth aspect, an embodiment of the present disclosure provides an object processing apparatus.
The object processing apparatus includes:
a first obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a first display module configured to display the display object according to the display object data.
In a fifth aspect, an embodiment of the present disclosure provides an object processing apparatus.
The object processing apparatus includes:
a fourth obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a first sending module configured to send the display object data to a client for displaying the display object.
In a sixth aspect, an embodiment of the present disclosure provides a virtual object processing apparatus.
The virtual object processing apparatus includes:
a seventh obtaining module configured to obtain virtual object data, wherein the virtual object data includes sub-virtual object data of a sub-virtual object obtained by dividing a virtual object;
a processing module configured to display the virtual object according to the virtual object data or to transmit the virtual object data.
In a seventh aspect, an embodiment of the present disclosure provides a client.
The client comprises:
an eighth obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a third display module configured to display the display object according to the display object data.
In an eighth aspect, an embodiment of the present disclosure provides a server.
The server side comprises:
a ninth obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a second sending module configured to send the display object data to a client for displaying the display object.
In a tenth aspect, an embodiment of the present disclosure provides a virtual reality object modification method, including:
acquiring virtual reality object data of a virtual reality object, wherein the virtual reality object data comprises sub-virtual reality object data of sub-virtual reality objects obtained by dividing the virtual reality object;
determining a sub-virtual reality object involved in a modification according to modification data of a modification application object and the virtual reality object;
acquiring virtual reality object modification data, wherein the virtual reality object modification data comprise modified sub-virtual reality object data of a sub-virtual reality object involved in modification and/or additional virtual reality object data of an additional virtual reality object, and the virtual reality object modification data are used for rendering or displaying a modification result of the virtual reality object.
In an eleventh aspect, an embodiment of the present disclosure provides a virtual reality object modification apparatus, including:
an acquisition module configured to acquire virtual reality object data of a virtual reality object, wherein the virtual reality object data includes sub-virtual reality object data of a sub-virtual reality object obtained by segmenting the virtual reality object;
a virtual reality modification module configured to determine a sub-virtual reality object involved in a modification according to modification data of a modification application object and the virtual reality object;
a virtual reality rendering module configured to obtain virtual reality object modification data, the virtual reality object modification data including modified sub-virtual reality object data of a sub-virtual reality object involved in the modification and/or additional virtual reality object data of an additional virtual reality object, the virtual reality object modification data being used to render or display a modification result of the virtual reality object.
In a twelfth aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps as described above.
In a thirteenth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method steps as described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the technical scheme, the display object is divided into the sub-display objects, and the sub-display objects are displayed according to the sub-display object data, so that the display of the display object is realized.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the disclosure.
Drawings
Other features, objects, and advantages of embodiments of the disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1A illustrates an exemplary application scenario diagram in accordance with an embodiment of the present disclosure;
FIG. 1B shows a flow diagram of an object processing method according to an embodiment of the present disclosure;
FIG. 1C shows a schematic diagram of an exemplary display object and sub-display object, in accordance with embodiments of the present disclosure;
FIG. 1D illustrates a schematic diagram of an exemplary display object and sub-display object in accordance with an embodiment of the present disclosure;
FIG. 1E illustrates a schematic diagram of a scenario in which a client interacts with a server, according to an embodiment of the present disclosure;
FIG. 2 illustrates an exemplary map schematic in accordance with an embodiment of the present disclosure;
FIG. 3 shows a flow diagram of an object processing method according to an embodiment of the present disclosure;
FIG. 4 illustrates a flow diagram of a virtual object processing method according to an embodiment of the present disclosure;
fig. 5 shows a block diagram of the structure of an object processing apparatus according to an embodiment of the present disclosure;
fig. 6 shows a block diagram of the structure of an object processing apparatus according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of a virtual object processing apparatus according to an embodiment of the present disclosure;
FIG. 8 shows a block diagram of a client according to an embodiment of the present disclosure;
FIG. 9 shows a block diagram of a server according to an embodiment of the present disclosure;
fig. 10 shows a schematic structural diagram of a computer system suitable for implementing an object processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that they can be easily implemented by those skilled in the art. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the disclosed embodiments, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As mentioned above, with the development of science, technology and networking, multi-party interactive applications and application objects based on network data transmission, such as large multiplayer interactive games, are increasingly widely used. In a multi-party interactive application, every participant expects to see the results of their own operations and, at the same time, the results of other participants' operations, so as to improve interaction efficiency, real-time performance and accuracy; this places higher demands on the real-time display and updating of operated objects.
Fig. 1A illustrates an exemplary application scenario diagram according to an embodiment of the present disclosure.
As shown in fig. 1A, consider an indoor gunfight game that allows multiple player users. In this game scenario there are multiple player users, who may be on the same side or on opposing sides, and building structures such as breakable walls (similarly, the building structures may also include some or all of, for example, support columns, floors, and ceilings). Each player user may use a gun to attack opposing players or break the indoor building structures to create a passageway, a new route of attack, or a defensive area that accommodates the passage of player users. For example, there are three player users in the game scenario: player user 1, player user 2, and player user 3, where player user 1 and player user 2 are on the same side and player user 3 is their opponent, and all three player users use different walls as shelters. To let each player user see the results of their own operations while also seeing the results of other player users' operations, and to improve the interactivity of the game, in the prior art the player user who performs an operation synchronizes the operation data to the other player users, so that the other player users each calculate, display, or otherwise process the operation result in their own scenes according to the received operation data. However, in a large interactive game, the operation data of different player users is generated frequently and the amount of scene data involved is large. If the update speed of the display screen on a player user's client cannot keep up, the game stutters or the scenes viewed by different players fall out of sync, so that players cannot participate in the game normally.
The embodiment of the disclosure provides an object processing method, which includes: acquiring display object data, wherein the display object data comprises sub-display object data of a sub-display object obtained by dividing the display object; and displaying the display object according to the display object data.
According to this technical scheme, the display object is segmented into sub-display objects, and the sub-display objects are displayed according to the sub-display object data, thereby realizing the display of the display object.
Fig. 1B illustrates a flow diagram of an object processing method according to an embodiment of the present disclosure. According to an embodiment of the present disclosure, the method may be performed by a client, for example.
As shown in fig. 1B, the object processing method according to the embodiment of the present disclosure includes step S101 and step S102.
In step S101, display object data including sub-display object data of a sub-display object obtained by dividing a display object is acquired.
In step S102, the display object is displayed according to the display object data.
FIG. 1C shows a schematic diagram of an exemplary display object and a sub-display object, according to an embodiment of the present disclosure.
As shown in fig. 1C, the display object is a floor 110, and the sub-display object is a floor panel 111 obtained by segmenting the floor. The character 120 may interact with the floor 110, for example by shooting at the floor, throwing a grenade, releasing a shock wave or laser, digging with a spade or pick, prying with a crowbar, or touching it with a limb (such as kicking, pushing by hand, moving, etc.), so as to break the floor 110. For example, a floor panel 111 in the floor 110 is broken by a bullet, bomb, laser, magic, limb contact, spade- or pick-type tool, crowbar-type tool, or the like of the character 120 to form a hole 112. The hole 112 is unobstructed and allows the character 120, or parts of it such as limbs, bullets, bombs, lasers, magic, or tools, to pass through.
FIG. 1D illustrates a schematic diagram of an exemplary display object and sub-display object in accordance with an embodiment of the present disclosure.
As shown in fig. 1D, the display object is a wall 130, and the sub-display object is a wall block 131 obtained by segmenting the wall 130. The character 140 may interact with the wall 130, for example by shooting at the wall, throwing a grenade, releasing a shock wave or laser, digging with a spade or pick, prying with a crowbar, or touching it with a limb (such as kicking, pushing by hand, moving, etc.), so as to break the wall 130. For example, a wall block 131 in the wall 130 is broken by a bullet, bomb, laser, magic, limb contact, spade- or pick-type tool, crowbar-type tool, or the like of the character 140 to form a hole 132. The hole 132 is unobstructed and allows the character 140, or parts of it such as limbs, bullets, bombs, lasers, magic, or tools, to pass through.
According to an embodiment of the present disclosure, a display object is segmented into sub-display objects, and by displaying the sub-display objects based on the sub-display object data, display of the display object can be achieved. When the display object is changed, for example destroyed, the sub-display objects involved in the change can be updated by updating their sub-display object data, so as to update the display object. In this way, the whole of the display object data does not need to be updated every time the display object changes, which significantly reduces the computational load, lowers resource overhead, improves the real-time performance of display, and improves interaction efficiency, real-time performance and accuracy in multi-party interaction scenarios.
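As a minimal sketch of this idea (with hypothetical names, not actual engine code), the display object can keep one entry per sub-display object, and an interaction only modifies the affected entry rather than replacing the whole display object's data:

```cpp
// Sketch only: a wall keeps per-block data, and an interaction touches only the
// affected block's entry instead of the whole display object's data.
#include <iostream>
#include <vector>

struct SubDisplayObject {
    int  blockId;
    bool destroyed = false;   // whether the block has been knocked out
};

struct DisplayObject {
    int wallId;
    std::vector<SubDisplayObject> blocks;

    void applyUpdate(int blockId) {              // update only the designated sub-object
        for (auto& b : blocks)
            if (b.blockId == blockId) b.destroyed = true;
    }
};

int main() {
    DisplayObject wall{7, {{0}, {1}, {2}, {3}}};
    wall.applyUpdate(2);                          // a bullet removed block 2
    for (const auto& b : wall.blocks)
        std::cout << b.blockId << (b.destroyed ? " destroyed\n" : " intact\n");
}
```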
According to an embodiment of the disclosure, the display object can be segmented locally at the client to obtain the sub-display object data of the sub-display objects. For example, in a single-client application, the display object may be segmented locally at the client to produce the sub-display object data. In a multi-client interaction scenario, the display object can likewise be segmented locally at the client to obtain the sub-display object data, thereby reducing the computational load of the server.
According to an embodiment of the disclosure, the sub-display object data of the sub-display objects can be acquired from the server. For example, the display object may be segmented at the server to obtain the sub-display object data, which is then sent to the client. Alternatively, the server may obtain pre-generated sub-display object data and then send it to the client. This approach reduces the computational load of the client and is suitable for multi-client interaction scenarios.
According to an embodiment of the present disclosure, the segmenting the display object includes segmenting the display object in a preset manner. For example, the display object may be segmented according to preset positions and/or shapes of the sub-display objects. Compared with real-time segmentation, segmenting the display object in advance significantly reduces performance overhead, so that scene synchronization across clients is simpler and more responsive. For example, in a multi-party network game, the shape of a wall must be kept consistent across multiple clients. The mesh data used in the game engine is floating point, and the consistency of calculation results across different clients cannot be guaranteed. Therefore, if segmentation is performed in real time, the walls seen by different clients are likely to differ unless the mesh of the entire wall is synchronized, but the mesh data is large and puts considerable pressure on the network. With pre-segmentation, only the wall ID and the wall block IDs need to be synchronized between clients, as illustrated by the sketch below, so the data volume is small and consistency is easy to guarantee.
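A minimal sketch of such a synchronization message, assuming hypothetical field names, shows why only IDs need to travel between clients:

```cpp
// Hypothetical sketch: with pre-segmentation, clients exchange a wall ID plus
// block IDs, never the floating-point mesh itself.
#include <cstdint>
#include <iostream>
#include <vector>

struct WallUpdateMessage {
    uint32_t wallId;                      // which wall changed
    std::vector<uint16_t> blockIds;       // which pre-segmented blocks were broken
};

size_t wireSize(const WallUpdateMessage& m) {
    return sizeof(m.wallId) + m.blockIds.size() * sizeof(uint16_t);
}

int main() {
    WallUpdateMessage msg{42, {3, 4, 9}};
    std::cout << "bytes on the wire: " << wireSize(msg) << "\n";   // 4 + 3*2 = 10
    // A full mesh of the same wall would be thousands of floats per client.
}
```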
According to an embodiment of the present disclosure, the segmenting the display object includes: and determining the position related to the interaction of the display object according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position related to the interaction of the display object.
According to the embodiment of the present disclosure, the position involved in the interaction may be a direct acting position of the interaction applying object on the display object (or the corresponding interactive body of the display object), or may be a position where the display object (or the corresponding interactive body of the display object) is affected by the interaction.
For example, when the interaction application object is a bullet and the display object is a wall, the position involved in the interaction may be a landing point of the bullet on the wall or a region of the wall where the impact of the bullet reaches.
According to an embodiment of the disclosure, a display object may have one or more corresponding interaction bodies; the interaction between the display object and the interaction application object may be represented by the interaction between a corresponding interaction body and the interaction application object, and the display object may be segmented according to that interaction. For example, when the display object is a wall, the corresponding interaction body may be a load-bearing column in the wall; when the display object is a floor, the corresponding interaction body may be a keel (joist) of the floor.
Setting corresponding interaction bodies can reduce the jitter caused when a character interacts with (for example, collides with) the display object. For example, when a wall is divided into many wall blocks, a character colliding with the wall may collide with the individual wall blocks, causing abnormal screen shaking and needlessly degrading the display effect. By setting fewer corresponding interaction bodies, the character is considered to interact with the display object only when the character interacts with a corresponding interaction body (for example, when the character contacts the display object and the character's path of travel, or its extension, intersects the corresponding interaction body), and the display object is then segmented according to the position involved in the interaction; this effectively reduces the computational load and avoids abnormal shaking of the picture.
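The following sketch, with hypothetical names and axis-aligned boxes standing in for real collision shapes, illustrates the coarse test against a single interaction body that gates the per-block processing described above:

```cpp
// Sketch only: the character is tested against one coarse interaction body per
// wall rather than against every wall block; the wall is only segmented further
// when that coarse test passes.
#include <iostream>

struct AABB { float minX, minY, maxX, maxY; };

bool overlaps(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && b.minX <= a.maxX &&
           a.minY <= b.maxY && b.minY <= a.maxY;
}

int main() {
    AABB interactionBody{0, 0, 10, 3};        // e.g. the wall's load-bearing proxy
    AABB characterPath{9, 1, 12, 2};          // swept bounds of the character's movement

    if (overlaps(interactionBody, characterPath))
        std::cout << "interaction: segment the wall at the contact position\n";
    else
        std::cout << "no interaction: skip per-block checks entirely\n";
}
```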
According to the embodiment of the present disclosure, the sub-display object data of the sub-display object involved in the interaction may be determined according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object. For example, the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
According to an embodiment of the disclosure, different interaction data may affect the display object differently. For example, digging and smashing are different interaction types; the positions of the display object they involve may differ, and so may the degree of damage to the display object. Therefore, the position involved in the interaction can be determined according to the interaction type, and from it the segmentation manner and the sub-display object data of the sub-display objects involved in the interaction can be determined.
In addition, when any one of the angle of interaction, the speed of interaction, the strength of interaction, and the property of the interaction application object (for example, different weapons) is different, the position on the display object concerned may be different, and the degree of damage to the display object may be different. For example, the location areas on the wall surface involved by a grenade thrown perpendicular to the wall surface and a grenade thrown obliquely to the wall surface may be different, a grenade thrown obliquely to the wall surface may involve a larger wall location area, and the level of damage caused by a grenade thrown perpendicular to the wall surface may be greater. For example, a hammer hitting a wall surface at the same angle of inclination, with a high speed or force, may cause a larger wall area and more severe damage. For example, when laser light and magic are applied to a wall surface, the location and level of damage to the wall surface involved may vary, and the area of the location of the wall surface involved in magic may be larger and may cause more serious damage.
Therefore, the position related to the interaction can be determined according to different interaction data, and further the segmentation mode and the sub-display object data of the sub-display object related to the interaction can be determined.
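The following is an illustrative Python sketch of how such interaction data might be mapped to an affected region and a damage level. The function name, the base-radius table, and the scaling formula are all invented for the example and are not prescribed by the disclosure.

```python
import math

# Example base radii per interaction operation type (invented values).
BASE_RADIUS = {"dig": 0.3, "smash": 0.6, "grenade": 1.5, "laser": 0.4, "magic": 2.0}

def affected_region(op_type, angle_deg, speed, strength, weapon_power=1.0):
    """Return (radius, damage): the extent of the position involved in the
    interaction and the degree of damage, as a function of the interaction data."""
    radius = BASE_RADIUS.get(op_type, 0.5)
    # An oblique hit spreads over a larger area but concentrates less energy,
    # matching the grenade example above (angle 0 means perpendicular).
    obliquity = abs(math.cos(math.radians(angle_deg)))
    radius *= 2.0 - obliquity
    damage = strength * speed * weapon_power * obliquity
    return radius, damage
```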
According to an embodiment of the present disclosure, the method further comprises: acquiring display object update data, wherein the display object update data comprises updated sub-display object data of a designated sub-display object and/or additional display object data of an additional display object, and the display object update data is used for rendering or displaying the updated display object.
According to an embodiment of the present disclosure, the designated sub-display object may be a sub-display object of the display object that has changed, and the additional display object may be a newly added display object other than the display object. For example, in a scene where a gun is fired at a wall, the designated sub-display object may be a wall block that is broken or knocked off, such as the wall block containing the direct action point of the interaction (for example, the contact point of a piece of shrapnel with the wall surface), as well as a wall block that does not contain the direct action point but is nevertheless affected by the interaction. For example, if a wall block is not hit directly by shrapnel but all of the wall blocks surrounding it are hit, that wall block may also fall; such a wall block is likewise a designated sub-display object. The additional display object may be a wall block falling to the ground, a fragment produced when a broken wall block shatters, or a damage pattern around the wall hole and on the hole wall that beautifies the hole and the hole wall and vividly simulates the real appearance of the damage (hereinafter referred to as a "map").
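As a hypothetical illustration of how the designated sub-display objects could be collected, the sketch below marks the directly hit blocks and then propagates the "surrounded block" rule described above; the function and parameter names are assumptions.

```python
def designated_blocks(hit_block_ids, neighbours):
    """hit_block_ids: ids of wall blocks hit directly by the interaction.
    neighbours: mapping from block id to the set of ids of its surrounding blocks."""
    designated = set(hit_block_ids)
    changed = True
    while changed:                        # propagate the "surrounded block" rule
        changed = False
        for block_id, around in neighbours.items():
            if block_id not in designated and around and around <= designated:
                designated.add(block_id)  # every surrounding block has fallen
                changed = True
    return designated
```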
Fig. 1E shows a schematic diagram of a scenario in which a client interacts with a server according to an embodiment of the present disclosure.
As shown in fig. 1E, the client 160 may be a mobile terminal such as a cell phone, PDA, or tablet computer, or may be a data processing device such as a desktop computer or notebook computer. The client 160 may display the display object. The server 150 is used to receive data from the client 160 and send data to the client 160. The server 150 may be a single server or a server cluster, for example a cloud server.
In the scenario shown in fig. 1E, the processing performed by the client 160 may vary with the actual situation. For example, in one case, the client 160 may locally segment the display object to obtain the sub-display object data of the sub-display objects, and may render and finally display the display object according to the interaction result between the interaction applying object and the display object. In this case, the client 160 may transmit to the server 150 only the user's interactive operation data, the segmentation manner of the display object, and the IDs of the sub-display objects that are no longer displayed or whose display manner changes due to the interaction result. The server 150 then, within a given period, synchronously distributes the interactive operation data from one or more clients 160, the segmentation manner of the display object, and the IDs of those sub-display objects to the respective clients 160. In this case, the data processing load of the client 160 is large, the data processing load of the server 150 is small, and the data transmission density between the client 160 and the server 150 is relatively low.
In another case, the server 150 may locally segment the display object to obtain the sub-display object data of the sub-display objects, may also render the display object according to the interaction result between the interaction applying object and the display object, and may send only the rendered data that needs to be displayed to the client 160 for display. In this case, the client 160 only needs to upload the user's interactive operation data to the server 150; the server 150 performs all processing locally, within a given period, according to the interactive operation data from one or more clients 160, and transmits to each client 160 only the display object data that the client needs to display. In this case, the data processing load of the client 160 is small, the data processing load of the server 150 is large, and the data transmission density between the client 160 and the server 150 is relatively high.
The above two cases describe, respectively, the situation where the client 160 undertakes the larger share of the data processing work and the situation where the server 150 does. It will be appreciated that the data processing tasks may be distributed between the client 160 and the server 150 in a variety of ways, depending on constraints such as the data processing capability of the client 160 and the data communication capability between the client 160 and the server 150. For example, the rendering of the whole display object may be completed at the server 150 while the rendering of the destroyed sub-display objects is completed at the client 160. For example, the segmentation of the display object may be done at the server 150 while the rendering of the display object is done at the client 160. The above is merely an example; those skilled in the art will appreciate that the data processing tasks may be distributed between the client 160 and the server 150 in a variety of ways as needed.
According to the embodiment of the present disclosure, the segmentation result of the display object may be sent from any client 160 to the server 150, and then sent from the server 150 to other clients 160. Alternatively, the segmentation of the display object may be performed at the server 150 and the segmentation results sent to the client 160.
According to the embodiment of the present disclosure, the interaction data may be transmitted from the one or more clients 160 to the server 150, the server 150 generates complete rendering or display data of the updated display object according to the interaction data, and transmits the complete rendering or display data of the updated display object to the one or more clients 160, so that the clients 160 implement display of the updated display object according to the complete rendering or display data.
According to the embodiment of the present disclosure, the interaction data may be sent from the one or more clients 160 to the server 150, the server 150 generates incremental rendering or display data for updating the display object according to the interaction data, and sends the incremental rendering or display data for updating the display object to the one or more clients 160, so that the clients 160 implement display of the updated display object according to the incremental rendering or display data.
According to an embodiment of the present disclosure, the interaction data may be sent from the one or more clients 160 to the server 150, and the server 150 generates summarized interaction data from the interaction data and sends the summarized interaction data to the one or more clients 160, so that the client 160 generates complete rendering or display data of the updated display object from the summarized interaction data or generates incremental rendering or display data of the updated display object to implement display of the updated display object.
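The three data-flow variants just described (complete rendering or display data, incremental data, and summarized interaction data) can be pictured with the hypothetical message shapes below; the disclosure does not prescribe a wire format, so the class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FullUpdate:
    """Server generates complete rendering or display data; clients only display it."""
    display_object_id: int
    display_data: bytes

@dataclass
class IncrementalUpdate:
    """Only the changes relative to the previous state of the display object are sent."""
    display_object_id: int
    removed_block_ids: List[int] = field(default_factory=list)
    added_display_objects: List[dict] = field(default_factory=list)

@dataclass
class SummarizedInteraction:
    """Server merges and relays interaction data; each client renders locally."""
    interactions: List[dict] = field(default_factory=list)
```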
FIG. 2 illustrates an exemplary map view according to an embodiment of the present disclosure.
According to the embodiment of the disclosure, the maps include small maps and large maps: a small map is smaller in size and is attached to a corner of the hole, while a large map is larger in size and covers the whole hole. As shown in fig. 2, the thick solid line represents the hole wall 210, the large map 211 covers the whole hole, and the small map 212 is attached to a corner of the hole.
According to an embodiment of the present disclosure, at least one or more of the following of the designated child display objects is changed by the updated display object compared to the display object: color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, the additional display object data is used to determine at least one or more of the following for the additional display object: position, color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, at least one or more of the following may be determined from interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, wherein the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
As described above, according to the embodiments of the present disclosure, the position of the display object involved in the interaction may be determined according to the interaction data between the interaction applying object and the display object or the corresponding interaction body of the display object, and the sub-display objects that change may be identified from that position and taken as the designated sub-display objects. At the same time, the updated sub-display object data of the designated sub-display object may be determined according to the interaction data, such as the degree of damage (e.g., whether it is knocked off, broken, or deformed), a change in color, transparency, texture, or pattern (e.g., in a magic scene, from one color to another color or to transparent, or to another texture or pattern), or a change in material (e.g., from blocking the passage of an object to no longer blocking it). The updated sub-display object data may be used to describe one or more of the above changed characteristics of the designated sub-display object, and the updated designated sub-display object is displayed according to the updated sub-display object data, thereby displaying the updated display object.
According to embodiments of the present disclosure, the material of the sub-display object may be used together with the interaction data to determine the effect of the interaction on the sub-display object. For example, the sub-display object may have the material "stone"; after the sub-display object is knocked off, its material may be set to "air" or "no material", so that no interaction applying object interacts with it any longer; that is, the sub-display object may be treated as absent and will not collide with the interaction applying object.
According to the embodiment of the disclosure, a physical scene may be constructed for the sub-display object, wherein the sub-display object may have a polyhedral geometric outline, and whether it interacts with the interaction applying object may be determined according to that outline. For example, when the interaction applying object is a bullet, whether the sub-display object is changed by the interaction, that is, whether it should be treated as a designated sub-display object, may be determined according to whether the bullet intersects the polyhedron of the sub-display object when contacting the display object. After the sub-display object is knocked off, its polyhedron may be deleted from the physical scene; after the deletion, the sub-display object no longer interacts with the interaction applying object, and the user can observe the scene on the other side of the display object through the hole formed where the sub-display object was knocked off.
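A minimal sketch of this bookkeeping, assuming a hypothetical physics-scene API with a remove() method, follows; it is not tied to any particular engine.

```python
class WallBlock:
    def __init__(self, block_id, polyhedron, material="stone"):
        self.block_id = block_id
        self.polyhedron = polyhedron    # geometric outline used for collision tests
        self.material = material

def knock_off(block, physics_scene):
    # After being knocked off, the block's material becomes "air" (or "no
    # material") and its polyhedron leaves the physical scene, so it can no
    # longer collide with any interaction applying object.
    block.material = "air"
    physics_scene.remove(block.polyhedron)   # assumed API
```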
According to an embodiment of the present disclosure, an additional display object and additional display object data may be determined according to interaction data of an interaction application object and the display object or a corresponding interaction body with the display object.
According to the embodiment of the disclosure, the additional display objects may be generated by the interaction, such as hole-wall maps and stone blocks falling to the ground. Different interaction results, such as how a wall block fragments and how the broken stones scatter, may be determined according to the interaction data, and different additional display objects may then be determined, such as hole-wall maps of different sizes and textures, and fallen stones with different positions, colors, numbers, shapes, transparencies, textures, patterns, and materials (for example, whether they obstruct a character's movement).
Determining the designated sub-display object and the additional display object according to the interaction data makes it possible to distinguish the designated sub-display objects and additional display objects corresponding to different interaction data, thereby better simulating a real physical-world scene.
According to an embodiment of the present disclosure, the determining, according to interaction data between an interaction applying object and the display object or the corresponding interaction body of the display object, of at least one or more of the following to generate the display object update data: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data, comprises: determining, according to interaction data between interaction applying objects from one or more clients and the display object or the corresponding interaction body of the display object, at least one or more of: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, one or more clients may send the interaction data between the interaction applying object and the display object or the corresponding interaction body of the display object to the server, and the server determines, according to the interaction data from the one or more clients, at least one or more of: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object, and the additional display object data. This can reduce the computational pressure on the client, and also makes it easier to monitor the validity of interactions and to identify invalid operations or cheating behaviors.
According to the embodiment of the disclosure, a plurality of clients synchronously receive the display object update data. This ensures consistency of the displayed pictures among the clients and avoids the inconsistency that would arise if each client updated the display object on its own.
According to an embodiment of the present disclosure, the determining, according to interaction data between an interaction applying object and the display object or the corresponding interaction body of the display object, of at least one or more of the following to generate the display object update data: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data, comprises: determining, according to interaction data, sent by a server, between interaction applying objects from one or more clients and the display object or the corresponding interaction body of the display object, at least one or more of: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data.
According to the embodiment of the disclosure, the server may deliver the interaction data between interaction applying objects from one or more clients and the display object or the corresponding interaction body of the display object to the clients, so that each client may determine, according to the interaction data, at least one or more of: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data. This implementation can reduce the amount of data transmitted between the server and the clients and relieve pressure on the communication network. Distributing the computation to the clients also reduces the computational pressure on the server, and each client can flexibly determine, according to its own software and hardware configuration, at least one or more of: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object, and the additional display object data, so that the display effect is better matched to each client's performance.
According to an embodiment of the present disclosure, the display object update data includes whole rendering data of the updated display object or whole display data of the updated display object; or the display object update data comprises incremental rendering data or incremental display data of the updated display object compared to the display object.
According to an embodiment of the present disclosure, the display object update data includes whole rendering data of the updated display object, and the updated display object is rendered and displayed according to the whole rendering data. For example, the updated display object may include the sub-display objects of the display object that have not changed and the designated sub-display objects that have changed, and the whole rendering data may include the sub-display object data of the unchanged sub-display objects and the updated sub-display object data of the changed designated sub-display objects; rendering and displaying according to the whole rendering data thus displays the updated display object. Because the rendering is done as a whole, there is no need to distinguish which specific sub-display objects have changed, so the algorithm and program are simple to implement.
According to an embodiment of the present disclosure, the display object update data includes whole display data of the updated display object, and the updated display object is displayed according to the whole display data. Because the updated display object is displayed as a whole, there is no need to distinguish which specific sub-display objects have changed, so the algorithm and program are simple to implement.
According to the embodiment of the disclosure, the display object update data comprises the incremental rendering data of the updated display object compared with the display object, and rendering is performed according to the incremental rendering data, so that the data transmission amount and the calculation amount can be reduced, the rendering speed is increased, and the real-time performance of display is improved.
According to the embodiment of the disclosure, the display object update data comprises the incremental display data of the updated display object compared with the display object, and the display is performed according to the incremental display data, so that the data transmission amount and the calculation amount can be reduced, the rendering speed is improved, and the real-time performance of the display is improved.
According to an embodiment of the present disclosure, the acquiring of display object update data includes: acquiring the display object update data from a server or acquiring it locally at the client, and displaying the updated display object according to the display object update data.
According to an embodiment of the present disclosure, the acquiring of display object data includes: acquiring display object data from a server, wherein the display object data includes display data used for displaying the rendered display object. By rendering the display object at the server and sending the rendered display data to the client, the client can display directly according to the display data without performing rendering operations, which reduces the computational load of the client and improves the real-time performance and fluency of the display.
According to an embodiment of the present disclosure, the acquiring of display object data further includes: generating a rendering mesh for rendering the display object according to the segmentation result of the display object, wherein the displaying of the display object according to the display object data includes: rendering the display object according to the rendering mesh.
For example, the rendering mesh may be a mesh in the floor 110 in fig. 1C and a mesh in the wall 130 in fig. 1D. By rendering in grid units, only the changed grid can be re-rendered without re-rendering the unchanged grid when the display object is changed, or the previous rendering result can be directly reused for the unchanged grid, so that the rendering speed can be remarkably improved, and the calculation load can be reduced.
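As a toy illustration of rendering in grid units with reuse of previous results, consider the sketch below; the dirty-flag mechanism and the cache structure are assumptions made for the example.

```python
def render_in_grid_units(grid_cells, render_cell, cache):
    """Re-render only the cells marked dirty; reuse cached results for the rest."""
    for cell in grid_cells:
        if cell.dirty or cell.cell_id not in cache:
            cache[cell.cell_id] = render_cell(cell)   # only the changed grid is re-rendered
            cell.dirty = False
        # unchanged cells: the previous rendering result is reused as-is
    return [cache[cell.cell_id] for cell in grid_cells]
```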
According to an embodiment of the present disclosure, rendering the display object according to the rendering mesh includes: rendering the display object based on the surface-layer vertices of the display object in the rendering mesh, in the case where no undisplayed sub-display object exists in the display object.
For example, when the display object is not broken, there is no sub-display object that is not displayed, that is, all sub-display objects are displayed; the display object may then be rendered using only the surface-layer vertices of its rendering mesh, which reduces the amount of rendering computation and accelerates rendering.
According to an embodiment of the present disclosure, the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further includes: determining a sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the interaction body corresponding to the display object and the segmentation result of the display object; and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
According to an embodiment of the present disclosure, when a display object changes due to an interaction with an interaction application object, a sub display object involved in the interaction (e.g., a sub display object that changes due to the interaction, such as a wall block that is knocked off or broken) may be determined, and then index information of a mesh of the sub display objects involved in the interaction in the rendering mesh is modified according to the sub display object involved in the interaction. For example, when the display object is not broken, the index information of the mesh of the sub-display object may include only the vertex of the surface layer, and when the display object is broken, the sub-display object at the broken portion may need to display the surface other than the surface layer.
According to an embodiment of the present disclosure, the modifying the index information of the mesh of the sub-display object involved in the interaction in the rendering mesh includes: modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh such that an inner wall of a gap or hole through the three-dimensional display object is rendered in the rendering mesh of the sub-display object that is not being displayed.
According to an embodiment of the present disclosure, the structure of the three-dimensional display object may be complicated. For convenience of explanation, take a wall as the display object. When the wall is undamaged, only two patches (one in front and one behind) need to be displayed. After a single-layer wall is damaged, two patches with holes and one hole wall need to be displayed. After a double-layer wall is damaged, four patches with holes (two outer patches, as in the single-layer wall, and two patches on either side of the hollow layer of the wall) and two hole walls (one per layer) need to be displayed. For a single-layer wall, all vertices of the two patches are stored in the rendering mesh, and the number and positions of the vertices are consistent with the points in the segmentation map. For a double-layer wall, all vertices of the four patches are stored in the rendering mesh, and the number and positions of the vertices are likewise consistent with the points in the segmentation map. When the wall is unbroken, the index of the rendering mesh uses only the surface-layer vertices. After the wall is damaged, the index information in the mesh is modified to produce the effect of a wall with holes and hole walls inside, so that the shape of the wall can be modified quickly.
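The index-rewriting trick can be sketched as below, assuming the vertices of all patches are already stored in the rendering mesh and that two hypothetical helpers return index lists for a block's surface quad and for its hole wall, respectively.

```python
def build_index_buffer(blocks, surface_quad_indices, hole_wall_indices):
    """blocks: pre-segmented wall blocks whose vertices are already stored in
    the rendering mesh; the two callbacks return indices into that vertex set."""
    indices = []
    for block in blocks:
        if block.material != "air":
            indices += surface_quad_indices(block)   # intact block: surface layer only
        else:
            indices += hole_wall_indices(block)      # knocked-off block: draw the
                                                     # inner hole wall instead
    return indices
```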
According to the embodiment of the disclosure, the display object is a three-dimensional display object with a multilayer structure, and the corresponding interaction bodies are arranged between the layers of the three-dimensional display object. For example, when the display object is a double-layer wall, its corresponding interaction bodies, such as the load-bearing columns described above, may be arranged in the hollow layer of the wall.
According to an embodiment of the present disclosure, rendering the display object according to the rendering mesh includes: performing map rendering on the meshes of the sub-display objects involved in the interaction in the rendering mesh. For example, the sub-display objects involved in the interaction may be rendered with a preset pattern (referred to as a "map") so that they take on the desired appearance.
According to an embodiment of the present disclosure, the performing of map rendering on the meshes of the sub-display objects involved in the interaction in the rendering mesh includes: performing map rendering, with at least one map, on the boundary of the meshes of the sub-display objects involved in the interaction in the rendering mesh and on the area adjacent to that boundary.
According to the embodiment of the disclosure, the maps include small maps and large maps: a small map is smaller in size and is attached to a corner of the hole, while a large map is larger in size and covers the whole hole. As shown in fig. 2, the thick solid line represents the hole wall 210, the large map 211 covers the whole hole, and the small map 212 is attached to a corner of the hole. Through the maps, the sub-display objects involved in the interaction can be beautified, avoiding an unnatural, abrupt appearance at the junction between the sub-display objects involved in the interaction and the other sub-display objects. Those skilled in the art will understand that, following the teachings of this embodiment, one, three, or more maps may also be used to perform map rendering on the boundary of the meshes of the sub-display objects involved in the interaction and on the adjacent area; the implementation details are not repeated here.
According to an embodiment of the present disclosure, the performing of map rendering on the meshes of the sub-display objects involved in the interaction in the rendering mesh includes: clearing the current map template data of the rendering mesh; rendering an object template of the current display object; and performing map rendering, according to the object template, on the meshes of the sub-display objects involved in the interaction in the rendering mesh.
According to the embodiment of the present disclosure, before the map rendering is performed, the current map template data is first cleared; then the object template of the current display object is rendered, for example according to the display object data of the current display object, so that the damaged portion is marked in the rendered object template; finally, the meshes of the sub-display objects involved in the interaction in the rendering mesh are rendered according to the object template, for example by performing map rendering on the damaged portion.
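This three-step pass resembles a stencil-guided decal pass and can be sketched as follows; the gpu object and its methods (clear_map_template, render_object_template, draw_map) are placeholders for whatever graphics API is actually used, not calls defined by the disclosure.

```python
def render_damage_maps(gpu, display_object, damaged_cells, large_map, small_map):
    gpu.clear_map_template()                        # step 1: clear current map template data
    gpu.render_object_template(display_object)      # step 2: mark the damaged portion
    for cell in damaged_cells:                      # step 3: map rendering per the template
        gpu.draw_map(large_map, cell.hole_bounds)   # large map covers the whole hole
        for corner in cell.hole_corners:
            gpu.draw_map(small_map, corner)         # small maps attached at the hole corners
```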
According to an embodiment of the present disclosure, the sub-display object has a sub-display object collision volume for determining an interaction effect of the sub-display object and an interaction applying object, wherein the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further comprises: modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set a volume of the sub-display object collision volume as 0, according to interaction data of an interaction applying object and the sub-display object collision volume.
According to embodiments of the present disclosure, the interaction effect of a sub-display object with an interaction applying object may be determined by a sub-display object collision volume, which may have the same geometric topology and position as the sub-display object while carrying attribute values, such as a collision volume value or a collision volume availability flag, used to determine the interaction effect. When the interaction of the interaction applying object with the display object acts on the sub-display object, the interaction effect may be determined according to the attribute values of the collision volume of that sub-display object. For example, the collision volume value may be used to determine whether the sub-display object is knocked off, broken, or deformed by an interaction; the larger the collision volume, the more likely it is to produce a collision effect with the interaction applying object, that is, the more susceptible it is to the interaction. When the collision volume is 0 or the collision volume is set to be unavailable, no blocking effect is produced for the interaction; correspondingly, when the sub-display object collision volume is deleted, the corresponding sub-display object is no longer displayed or is displayed as transparent.
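A minimal sketch of such a collision volume, with illustrative attribute names, might look like this; deleting the volume, marking it unavailable, and zeroing it are treated as equivalent ways of removing the blocking effect, as described above.

```python
class SubDisplayObjectCollisionVolume:
    def __init__(self, volume: float, available: bool = True):
        self.volume = volume          # larger volume: more susceptible to interactions
        self.available = available

    def blocks_interaction(self) -> bool:
        # Volume 0 or unavailable: no blocking effect on the interaction.
        return self.available and self.volume > 0

def on_sub_display_object_destroyed(sub_display_object):
    # Alternatives named in the text: delete the collision volume, mark it
    # unavailable, or set its volume to 0; here it is simply deleted.
    sub_display_object.collision_volume = None
```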
According to an embodiment of the present disclosure, rendering the display object according to the rendering grid includes at least one of: applying a visual basis characteristic to the rendering mesh; applying texture characteristics to the render mesh; applying a lighting characteristic to the rendering mesh.
According to an embodiment of the present disclosure, the visual basic characteristics include shape, roughness, etc., the texture characteristics contain color information, and the illumination characteristics include ambient light information, diffuse reflection information, etc.
Fig. 3 shows a flow diagram of an object handling method according to an embodiment of the present disclosure, which may be performed, for example, by a server.
As shown in fig. 3, an object processing method includes:
step S301, obtaining display object data, wherein the display object data comprises sub-display object data of sub-display objects obtained by dividing the display objects;
step S302, sending the display object data to a client for displaying the display object.
According to the technical solution of the embodiment of the present disclosure, the display object is segmented into sub-display objects, and the sub-display objects are displayed according to the sub-display object data, thereby realizing the display of the display object.
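A minimal sketch of this server-side flow (steps S301 and S302 of fig. 3) is given below; the session API, the segmentation helper passed in as a parameter, and the dictionary payload are illustrative assumptions.

```python
def handle_display_object(server_session, display_object, segment_display_object):
    # Step S301: obtain display object data, including the sub-display object
    # data obtained by segmenting the display object.
    sub_objects = segment_display_object(display_object)
    display_object_data = {
        "display_object_id": display_object.object_id,
        "sub_display_objects": [s.to_dict() for s in sub_objects],
    }
    # Step S302: send the display object data to the client(s) for display.
    for client in server_session.clients:
        server_session.send(client, display_object_data)
```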
According to the embodiment of the disclosure, the display object can be locally segmented at the server side to obtain the sub-display object data of the sub-display objects. For example, the display object may be segmented at the server side to obtain the sub-display object data, which is then sent to the client. Alternatively, the server may obtain pre-generated sub-display object data and then send it to the client. This approach can reduce the computational load of the client and is suitable for multi-client interaction scenarios.
According to the embodiment of the present disclosure, the sub-display object data of the sub-display object may be acquired from the client. For example, in a multi-client interaction scenario, the display object may be locally split at the client to obtain sub-display object data of the sub-display object, thereby reducing the computational load of the server.
According to an embodiment of the present disclosure, the segmenting of the display object includes: segmenting the display object in a preset manner. For example, the display object may be segmented according to preset positions and/or shapes of sub-display objects. Compared with real-time segmentation, segmenting the display object in advance significantly reduces the performance overhead, which makes scene synchronization between clients simpler and more real-time. For example, in a networked game with multi-party interaction, the shape of a wall must stay synchronized across multiple clients. The mesh data used in the game engine is of floating-point type, so consistency of computation results across different clients cannot be guaranteed. If segmentation were performed in real time, the wall shapes seen by different clients would likely differ unless the mesh of the whole wall were synchronized; however, the mesh data is large and would place considerable pressure on the network. With pre-segmentation, only the wall ID and the wall-block IDs need to be synchronized between clients, so the data volume is small and consistency is easily ensured.
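The synchronization benefit can be illustrated with the hypothetical message helpers below: only integer wall and block IDs cross the wire, and every client applies them to the same pre-segmented wall; the message layout is an assumption for the example.

```python
def make_destruction_message(wall_id, destroyed_block_ids):
    """Small, deterministic payload: no floating-point mesh data is transmitted."""
    return {"wall_id": wall_id, "destroyed_blocks": sorted(destroyed_block_ids)}

def apply_destruction_message(local_walls, message):
    wall = local_walls[message["wall_id"]]           # the same pre-segmented wall
    for block_id in message["destroyed_blocks"]:     # exists on every client
        wall.blocks[block_id].material = "air"
```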
According to an embodiment of the present disclosure, the segmenting of the display object includes: determining the position of the display object involved in the interaction according to interaction data between the interaction applying object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position involved in the interaction.
According to the embodiment of the present disclosure, the position involved in the interaction may be a direct acting position of the interaction applying object on the display object (or the corresponding interactive body of the display object), or may be a position where the display object (or the corresponding interactive body of the display object) is affected by the interaction.
For example, when the interaction applying object is a bullet and the display object is a wall, the position involved in the interaction may be the landing point of the bullet on the wall or the region of the wall reached by the impact of the bullet.
According to the embodiment of the disclosure, the display object may have one or more corresponding interaction bodies; the interaction between the corresponding interaction body and the interaction applying object may stand in for the interaction between the display object and the interaction applying object, and the display object may be segmented according to the interaction between the corresponding interaction body and the interaction applying object. For example, when the display object is a wall, the corresponding interaction body may be a load-bearing column in the wall; when the display object is a floor, the corresponding interaction body may be a supporting joist (keel) of the floor.
Providing corresponding interaction bodies can reduce the jitter caused when a character interacts with (for example, collides with) the display object. For example, when a wall is divided into many wall blocks, a character colliding with the wall would otherwise collide with the individual wall blocks, causing abnormal shaking of the picture and unnecessarily degrading the display effect. By providing a small number of corresponding interaction bodies, the character is considered to interact with the display object only when it interacts with a corresponding interaction body (for example, when the character contacts the display object and its advancing path, or the extension of that path, intersects the corresponding interaction body), and the display object is then segmented according to the position involved in the interaction. This effectively reduces the computational load and avoids abnormal shaking of the picture.
According to the embodiment of the present disclosure, the sub-display object data of the sub-display object involved in the interaction may be determined according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object. For example, the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
According to the embodiment of the disclosure, different interaction data may affect the display object differently. For example, digging and smashing are different interaction operation types: the positions of the display object that they involve may differ, and so may the degree of damage to the display object. Accordingly, the position involved in the interaction can be determined according to the interaction operation type, and the segmentation manner and the sub-display object data of the sub-display objects involved in the interaction can then be determined.
In addition, when any one of the interaction operation angle, the interaction operation speed, the interaction operation strength, and the attributes of the interaction applying object (for example, different weapons) differs, the positions of the display object that are involved and the degree of damage to the display object may also differ. For example, a grenade thrown perpendicular to a wall surface and a grenade thrown obliquely at the wall surface may involve different regions of the wall: the obliquely thrown grenade may involve a larger wall area, while the perpendicularly thrown grenade may cause more severe damage. For example, a hammer striking the wall surface at the same angle but with a higher speed or greater force may affect a larger wall area and cause more severe damage. For example, a laser and a magic spell applied to the wall surface may involve different wall regions and damage levels; the magic spell may involve a larger wall area and cause more serious damage. For example, a tool such as a shovel, pick, or crowbar excavating the wall surface at the same speed or force may, at a particular angle of inclination, affect a larger wall area and cause more severe damage.
Therefore, the position related to the interaction can be determined according to different interaction data, and further the segmentation mode and the sub-display object data of the sub-display object related to the interaction can be determined.
According to an embodiment of the present disclosure, the method further comprises: acquiring display object update data, wherein the display object update data comprises updated sub-display object data of a designated sub-display object and/or additional display object data of an additional display object, and the display object update data is used for rendering or displaying the updated display object.
According to an embodiment of the present disclosure, the designated sub-display object may be a sub-display object of the display object that has changed, and the additional display object may be a newly added display object other than the display object. For example, in a scene where a gun is fired at a wall, the designated sub-display object may be a wall block that is broken or knocked off, such as the wall block containing the direct action point of the interaction (for example, the contact point of a piece of shrapnel with the wall surface), as well as a wall block that does not contain the direct action point but is nevertheless affected by the interaction. For example, if a wall block is not hit directly by shrapnel but all of the wall blocks surrounding it are hit, that wall block may also fall; such a wall block is likewise a designated sub-display object. The additional display object may be a wall block falling to the ground, a fragment produced when a broken wall block shatters, or a damage pattern around the wall hole and on the hole wall that beautifies the hole and the hole wall and vividly simulates the real appearance of the damage (hereinafter referred to as a "map").
FIG. 2 illustrates an exemplary map view according to an embodiment of the present disclosure.
According to the embodiment of the disclosure, the maps include small maps and large maps: a small map is smaller in size and is attached to a corner of the hole, while a large map is larger in size and covers the whole hole. As shown in fig. 2, the thick solid line represents the hole wall 210, the large map 211 covers the whole hole, and the small map 212 is attached to a corner of the hole.
According to an embodiment of the present disclosure, at least one or more of the following of the designated child display objects is changed by the updated display object compared to the display object: color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, the additional display object data is used to determine at least one or more of the following for the additional display object: position, color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, at least one or more of the following may be determined from interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, wherein the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
As described above, according to the embodiments of the present disclosure, the position of the display object involved in the interaction may be determined according to the interaction data between the interaction applying object and the display object or the corresponding interaction body of the display object, and the sub-display objects that change may be identified from that position and taken as the designated sub-display objects. Likewise, the updated sub-display object data of the designated sub-display object may be determined from the interaction data by determining how the sub-display object changes, such as being destroyed (e.g., knocked off, broken, or deformed), a change in color, transparency, texture, or pattern (e.g., in a magic scene, from one color to another color or to transparent, or to another texture or pattern), or a change in material (e.g., from stone to air or no material). The updated sub-display object data may be used to describe one or more of the above changed characteristics of the designated sub-display object, and the updated designated sub-display object is displayed according to the updated sub-display object data, thereby displaying the updated display object.
According to embodiments of the present disclosure, the material of the sub-display object may be used together with the interaction data to determine the effect of the interaction on the sub-display object. For example, the sub-display object may have the material "stone"; after the sub-display object is knocked off, its material may be set to "air" or "no material", so that no interaction applying object interacts with it any longer; that is, the sub-display object may be treated as absent and will not collide with the interaction applying object.
According to the embodiment of the disclosure, a physical scene may be constructed for the sub-display object, wherein the sub-display object may have a polyhedral geometric outline, and whether it interacts with the interaction applying object may be determined according to that outline. For example, when the interaction applying object is a bullet, whether the sub-display object is changed by the interaction, that is, whether it should be treated as a designated sub-display object, may be determined according to whether the bullet intersects the polyhedron of the sub-display object when contacting the display object. After the sub-display object is knocked off, its polyhedron may be deleted from the physical scene; after the deletion, the sub-display object no longer interacts with the interaction applying object, and the user can observe the scene on the other side of the display object through the hole formed where the sub-display object was knocked off.
According to an embodiment of the present disclosure, an additional display object and additional display object data may be determined according to interaction data of an interaction application object and the display object or a corresponding interaction body with the display object.
According to the embodiment of the disclosure, the additional display objects may be generated by the interaction, such as hole-wall maps and stone blocks falling to the ground. Different interaction results, such as how a wall block fragments and how the broken stones scatter, may be determined according to the interaction data, and different additional display objects may then be determined, such as hole-wall maps of different sizes and textures, and fallen stones with different positions, colors, numbers, shapes, transparencies, textures, patterns, and materials (for example, whether they obstruct a character's movement).
Determining the designated sub-display object and the additional display object according to the interaction data makes it possible to distinguish the designated sub-display objects and additional display objects corresponding to different interaction data, thereby better simulating a real physical-world scene.
According to an embodiment of the present disclosure, the determining, according to interaction data between an interaction applying object and the display object or the corresponding interaction body of the display object, of at least one or more of the following to generate the display object update data: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data, comprises: determining, according to interaction data between interaction applying objects from one or more clients and the display object or the corresponding interaction body of the display object, at least one or more of: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, one or more clients may send the interaction data between the interaction applying object and the display object or the corresponding interaction body of the display object to the server, and the server determines, according to the interaction data from the one or more clients, at least one or more of: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object, and the additional display object data. This can reduce the computational pressure on the client, and also makes it easier to monitor the validity of interactions and to identify invalid operations or cheating behaviors.
According to the embodiment of the disclosure, the server may synchronously send the display object update data to the plurality of clients. This ensures consistency of the displayed pictures among the clients and avoids the inconsistency that would arise if each client updated the display object on its own.
According to an embodiment of the present disclosure, the display object update data includes whole rendering data of the updated display object or whole display data of the updated display object; or the display object update data comprises incremental rendering data or incremental display data of the updated display object compared to the display object.
According to an embodiment of the present disclosure, the display object update data includes whole rendering data of the updated display object, and the updated display object is rendered and displayed according to the whole rendering data. For example, the updated display object may include the sub-display objects of the display object that have not changed and the designated sub-display objects that have changed, and the whole rendering data may include the sub-display object data of the unchanged sub-display objects and the updated sub-display object data of the changed designated sub-display objects; rendering and displaying according to the whole rendering data thus displays the updated display object. Because the rendering is done as a whole, there is no need to distinguish which specific sub-display objects have changed, so the algorithm and program are simple to implement.
According to an embodiment of the present disclosure, the display object update data includes whole display data of the updated display object, and the updated display object is displayed according to the whole display data. Because the updated display object is displayed as a whole, there is no need to distinguish which specific sub-display objects have changed, so the algorithm and program are simple to implement.
According to the embodiment of the disclosure, the display object update data comprises the incremental rendering data of the updated display object compared with the display object, and rendering is performed according to the incremental rendering data, so that the data transmission amount and the calculation amount can be reduced, the rendering speed is increased, and the real-time performance of display is improved.
According to the embodiment of the disclosure, the display object update data comprises the incremental display data of the updated display object compared with the display object, and the display is performed according to the incremental display data, so that the data transmission amount and the calculation amount can be reduced, the rendering speed is improved, and the real-time performance of the display is improved.
According to an embodiment of the present disclosure, the acquiring of display object update data includes: acquiring the display object update data locally at the server or acquiring it from a client.
According to an embodiment of the present disclosure, the acquiring of display object data includes: acquiring display object data locally at the server, wherein the display object data includes display data used for displaying the rendered display object. By rendering the display object at the server and sending the rendered display data to the client, the client can display directly according to the display data without performing rendering operations, which reduces the computational load of the client and improves the real-time performance and fluency of the display.
According to an embodiment of the present disclosure, the sending the display object data to a client for displaying the display object includes: transmitting only a portion of the display object data to the client that is different from display object data previously transmitted to the client. By only sending incremental data to the client, the data transmission amount can be reduced, communication resources are saved, and the interaction fluency is improved.
According to an embodiment of the present disclosure, the sending of the display object data to a client for displaying the display object includes: sending display object data to the client for displaying the display object, wherein the display object data includes display data for displaying the rendered display object. By rendering the display object at the server and sending the rendered display data to the client, the client can display directly according to the display data without performing rendering operations, which reduces the computational load of the client and improves the real-time performance and fluency of the display.
According to an embodiment of the present disclosure, the acquiring display object data further includes: generating a rendering grid for rendering the display object according to the segmentation result of the display object; and rendering the display object according to the rendering grid.
For example, the rendering mesh may be a mesh in the floor 110 in fig. 1C and a mesh in the wall 130 in fig. 1D. By rendering in grid units, only the changed grid can be re-rendered without re-rendering the unchanged grid when the display object is changed, or the previous rendering result can be directly reused for the unchanged grid, so that the rendering speed can be remarkably improved, and the calculation load can be reduced.
According to an embodiment of the present disclosure, rendering a display object according to a rendering grid includes: rendering the display object based on the vertex of the surface layer of the display object in the rendering mesh according to the condition that no sub-display object which is not displayed exists in the display object.
For example, when the display object is not broken, there is no sub-display object that is not displayed, that is, all sub-display objects are displayed, and the display object may be rendered using only the surface-layer vertices of its rendering mesh, which reduces the amount of rendering computation and accelerates rendering.
According to an embodiment of the present disclosure, the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further includes: determining a sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the interaction body corresponding to the display object and the segmentation result of the display object; and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
According to an embodiment of the present disclosure, when the display changes because of an interaction applied by an interaction-applying object, the sub-display objects involved in the interaction (e.g., sub-display objects that change due to the interaction, such as a wall block that is knocked off or broken) may be determined, and the index information of the grids of those sub-display objects in the rendering grid is then modified accordingly. For example, when the display object is not broken, the index information of the mesh of a sub-display object may include only the surface-layer vertices, whereas when the display object is broken, the sub-display object at the broken portion may need to display surfaces other than the surface layer.
According to an embodiment of the present disclosure, the modifying the index information of the mesh of the sub-display object involved in the interaction in the rendering mesh includes: modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh such that an inner wall of a gap or hole through the three-dimensional display object is rendered in the rendering mesh of the sub-display object that is not being displayed.
According to an embodiment of the present disclosure, the structure of the three-dimensional display object may be complicated. For convenience of explanation, take the display object being a wall as an example: when the wall is undamaged, only two patches (one front and one back) need to be displayed. After a single-layer wall is damaged, two patches with holes plus the hole wall need to be displayed. After a double-layer wall is damaged, four patches with holes (two outer patches, as in the single-layer case, and two patches on either side of the wall's hollow layer) and two hole walls (one per layer) need to be displayed. For a single-layer wall, all vertices of the two patches are stored in the rendering mesh, and the number and positions of the vertices are consistent with the points in the segmentation map. For a double-layer wall, all vertices of the four patches are stored in the rendering mesh, and the number and positions of the vertices are likewise consistent with the points in the segmentation map. When the wall is not broken, the index of the rendering mesh uses only the surface-layer vertices. After the wall is damaged, the index information in the mesh is modified to achieve the effect of a wall with holes whose inner surfaces (the hole walls) are visible, so that the shape of the wall can be modified rapidly.
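The index-modification idea can be sketched as follows for a single-layer wall; the per-block triangle lists and identifiers are simplifications invented for illustration (in particular, attaching hole-wall triangles to each block is a shortcut, not the described mesh layout):

```python
# Simplified sketch of the index-modification idea for a single-layer wall.
# The vertex buffer is assumed to hold all front/back patch vertices; the
# identifiers below (surf_tri_*, hole_tri_*) are illustrative only.

class WallRenderMesh:
    def __init__(self, block_ids):
        # Per-block index lists: surface-layer triangles and hole-wall triangles.
        self.surface_indices = {b: [f"surf_tri_{b}"] for b in block_ids}
        self.hole_wall_indices = {b: [f"hole_tri_{b}"] for b in block_ids}
        self.broken_blocks = set()

    def mark_broken(self, block_id):
        """An interaction knocked this sub-display object (wall block) out."""
        self.broken_blocks.add(block_id)

    def build_index_buffer(self):
        """Index buffer used for drawing: skip broken blocks' surface triangles
        and index hole-wall triangles instead, so the hole's inner wall is drawn."""
        indices = []
        for block, tris in self.surface_indices.items():
            if block not in self.broken_blocks:
                indices.extend(tris)                           # intact: surface layer only
            else:
                indices.extend(self.hole_wall_indices[block])  # broken: expose hole wall
        return indices

wall = WallRenderMesh(block_ids=[0, 1, 2])
print(wall.build_index_buffer())   # undamaged: surface-layer indices only
wall.mark_broken(1)
print(wall.build_index_buffer())   # block 1 removed, its hole wall is indexed
```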
According to the embodiment of the disclosure, the display object is a three-dimensional display object with a multilayer structure, and the corresponding interaction bodies are arranged between the layers of the three-dimensional display object. For example, the display object is a double-layer wall, and the corresponding interaction bodies of the double-layer wall, such as the bearing columns described above, can be arranged in the hollow layer of the wall.
According to an embodiment of the present disclosure, rendering a display object according to a rendering grid includes: and performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids. For example, the sub-display objects involved in the interaction may be rendered through a preset pattern (referred to as a "map") so that the sub-display objects involved in the interaction exhibit a desired appearance.
According to an embodiment of the present disclosure, the performing a map rendering on a grid of sub-display objects involved in an interaction in the rendering grid includes: and utilizing at least one map to map and render the boundary of the grid of the sub-display object related to the interaction in the rendered grid and the area adjacent to the boundary.
According to the embodiment of the disclosure, the maps include a small map and a large map: the small map is smaller in size and is attached to a corner of the hole, while the large map is larger in size and covers the whole hole. As shown in fig. 2, the thick solid line represents the hole wall 210, the large map 211 covers the whole hole, and the small map 212 is attached to a corner of the hole. Through such mapping, the sub-display object involved in the interaction can be beautified, avoiding an unnatural, obtrusive appearance at the junction between the sub-display object involved in the interaction and other sub-display objects. Those skilled in the art will understand that, according to the teachings of this embodiment, one, three, or more maps may also be used to perform map rendering on the boundary of the grid of the sub-display object involved in the interaction in the rendering grid and the area adjacent to the boundary, and the implementation details are not described here.
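A hedged sketch of the two-map placement described above, with hypothetical map names and a simple bounding-box/corner representation of the hole:

```python
# Hypothetical sketch of the two-map idea: one large map stretched over the
# whole hole, plus small maps pinned at each corner of the hole boundary.

def plan_hole_maps(hole_bbox, corners, large_map="large_decal", small_map="small_decal"):
    """Return a list of (map_name, placement) pairs for rendering a hole."""
    placements = [(large_map, {"cover": hole_bbox})]             # covers the whole hole
    placements += [(small_map, {"corner": c}) for c in corners]  # softens each corner
    return placements

hole_bbox = (2.0, 1.0, 3.5, 2.2)                 # x_min, y_min, x_max, y_max
corners = [(2.0, 1.0), (3.5, 1.0), (3.5, 2.2), (2.0, 2.2)]
for name, where in plan_hole_maps(hole_bbox, corners):
    print(name, where)
```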
According to an embodiment of the present disclosure, the performing a map rendering on a grid of sub-display objects involved in an interaction in the rendering grid includes: clearing current map template data of the rendering grid; rendering an object template of the current display object; and according to the object template, performing mapping rendering on the grids of the sub-display objects related to the interaction in the rendering grids.
According to the embodiment of the present disclosure, before the map rendering is performed, the current map template data is first cleared. The object template of the current display object is then rendered, for example according to the display object data of the current display object, so that the damaged portion appears in the rendered object template. Finally, the mesh of the sub-display object involved in the interaction in the rendering mesh is map-rendered according to the object template, for example by performing the map rendering on the damaged portion.
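The three-step sequence above might look roughly like the following sketch; the `TemplateRenderer` class and its methods are hypothetical placeholders standing in for the actual template (stencil-like) machinery:

```python
# Illustrative three-step sequence from the paragraph above, written as plain
# Python; the renderer object and its methods are hypothetical placeholders.

class TemplateRenderer:
    def __init__(self):
        self.template = None

    def clear_template(self):
        self.template = set()                 # step 1: clear current map template data

    def render_object_template(self, broken_blocks):
        self.template = set(broken_blocks)    # step 2: template marks the damaged parts

    def select_for_map_rendering(self, mesh_blocks):
        # step 3: return only the grid cells the template marks as involved,
        # i.e. the cells that will receive map rendering.
        return [b for b in mesh_blocks if b in self.template]

r = TemplateRenderer()
r.clear_template()
r.render_object_template(broken_blocks={3, 7})
print(r.select_for_map_rendering(mesh_blocks=range(10)))   # -> [3, 7]
```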
According to an embodiment of the present disclosure, the sub-display object has a sub-display object collision volume for realizing an interaction effect of the sub-display object and an interaction applying object, wherein the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further includes: modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set a volume of the sub-display object collision volume as 0, according to interaction data of an interaction applying object and the sub-display object collision volume.
According to embodiments of the present disclosure, the interaction effect of a sub-display object with an interaction-applying object may be determined by a sub-display object collision volume, which may have the same geometric topology and position as the sub-display object while carrying attribute values used to determine the interaction effect, such as the collision volume's size or its availability. When an interaction of the interaction-applying object with the display object acts on a sub-display object, the interaction effect may be determined according to the attribute values of that sub-display object's collision volume. For example, the size of the collision volume may be used to determine whether the sub-display object is knocked off, broken, or deformed by the interaction: the larger the collision volume, the more easily the sub-display object is affected by the collision, i.e., by the interaction. When the collision volume is 0 or its availability is set to unavailable, no blocking effect is produced for the interaction; correspondingly, when the sub-display object collision volume is deleted, the corresponding sub-display object is no longer displayed or is displayed as transparent.
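A small sketch of how such collision-volume attributes could drive the interaction effect; the field names and the threshold rule are assumptions made only for illustration:

```python
# Sketch of the collision-volume attributes described above; the threshold and
# field names are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class SubDisplayObjectCollisionVolume:
    volume: float          # larger volume -> more easily affected by an interaction
    available: bool = True # unavailable (or volume 0) -> no blocking effect

def interaction_effect(collision, impact_strength, break_threshold=10.0):
    """Decide the effect of an interaction on the sub-display object."""
    if not collision.available or collision.volume == 0:
        return "pass_through"                  # no blocking effect
    if impact_strength * collision.volume >= break_threshold:
        return "knocked_off"                   # sub-display object removed / transparent
    return "deformed"

c = SubDisplayObjectCollisionVolume(volume=2.0)
print(interaction_effect(c, impact_strength=6.0))   # knocked_off
c.available = False
print(interaction_effect(c, impact_strength=6.0))   # pass_through
```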
According to an embodiment of the present disclosure, rendering the display object according to the rendering grid includes at least one of: applying a visual basis characteristic to the rendering mesh; applying texture characteristics to the render mesh; applying a lighting characteristic to the rendering mesh.
According to an embodiment of the present disclosure, the visual basis characteristics include shape, roughness, and the like; the texture characteristics include color information; and the lighting characteristics include ambient light information, diffuse reflection information, and the like.
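For illustration, a minimal shading sketch that combines a base color (visual basis characteristic), a texture color, and ambient plus Lambert diffuse lighting; the formula is standard shading practice rather than anything specific to this disclosure:

```python
# Minimal shading sketch combining the three characteristics named above:
# a base color (visual basis), a texture color, and ambient + diffuse lighting.
# The math is the standard Lambert diffuse term, not patent-specific.

def shade(base_color, texture_color, normal, light_dir, ambient=0.2):
    def norm(v):
        length = sum(x * x for x in v) ** 0.5
        return tuple(x / length for x in v)
    n, l = norm(normal), norm(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))   # Lambert N.L term
    light = ambient + (1.0 - ambient) * diffuse
    return tuple(b * t * light for b, t in zip(base_color, texture_color))

print(shade(base_color=(1.0, 0.9, 0.8),
            texture_color=(0.7, 0.7, 0.7),
            normal=(0.0, 0.0, 1.0),
            light_dir=(0.0, 0.3, 1.0)))
```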
Fig. 4 shows a flow chart of a virtual object processing method according to an embodiment of the present disclosure. As shown in fig. 4, the method includes:
step S401, obtaining virtual object data, wherein the virtual object data comprises sub virtual object data of sub virtual objects obtained by dividing virtual objects;
step S402, displaying the virtual object according to the virtual object data, or sending the virtual object data.
According to the embodiment of the present disclosure, the virtual object is displayed according to the virtual object data, for example, the virtual object may be displayed on a local display device or may be displayed on a remote device.
Fig. 5 shows a block diagram of the structure of an object processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, an object processing apparatus according to an embodiment of the present disclosure includes:
a first obtaining module 501 configured to obtain display object data, where the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a first display module 502 configured to display the display object according to the display object data.
As shown in fig. 5, according to an embodiment of the present disclosure, the object processing apparatus further includes:
a first segmentation module 503 configured to segment the display object locally at the client to obtain sub-display object data of the sub-display object.
As shown in fig. 5, according to an embodiment of the present disclosure, the object processing apparatus further includes:
a second obtaining module 504, configured to obtain the sub-display object data of the sub-display object from the server.
According to an embodiment of the present disclosure, the segmenting the display object includes:
and segmenting the display object according to a preset mode.
According to an embodiment of the present disclosure, the segmenting the display object includes:
and determining the position related to the interaction of the display object according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position related to the interaction of the display object.
According to an embodiment of the present disclosure, the object processing apparatus further includes:
and determining the sub-display object data of the sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object.
According to an embodiment of the present disclosure, the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
As shown in fig. 5, according to an embodiment of the present disclosure, the object processing apparatus further includes:
a third obtaining module 505 configured to obtain display object update data, the display object update data including updated sub-display object data specifying the sub-display object and/or additional display object data of the additional display object, the display object update data being used for rendering or displaying the updated display object.
According to an embodiment of the present disclosure, at least one or more of the following of the designated child display objects is changed by the updated display object compared to the display object: color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, the additional display object data is used to determine at least one or more of the following for the additional display object: position, color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, the object processing apparatus further includes:
determining, from interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
According to an embodiment of the present disclosure, the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated child display object, the updated child display object data of the designated child display object, the additional display object data, comprising:
determining, from interaction data of interaction enforcement objects from one or more clients with the display object or with corresponding interactors of the display object, at least one or more of: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, the object processing apparatus further includes:
and a plurality of clients synchronously receive the display object updating data.
According to an embodiment of the present disclosure, the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated child display object, the updated child display object data of the designated child display object, the additional display object data, comprising:
according to interaction data of an interaction application object and the display object or an interaction body corresponding to the display object sent by a server from one or more clients, determining at least one or more of the following items: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, the object processing apparatus, wherein:
the display object update data comprises whole rendering data of the updated display object or whole display data of the updated display object; or
The display object update data comprises incremental rendering data or incremental display data of the updated display object compared to the display object.
According to an embodiment of the present disclosure, the object processing apparatus, wherein:
the acquiring of the display object update data includes: and acquiring the display object updating data from a server side or locally acquiring the display object updating data from a client side.
According to an embodiment of the present disclosure, the object processing apparatus further includes: and displaying the updated display object according to the display object updating data.
According to an embodiment of the present disclosure, the acquiring display object data includes:
and acquiring display object data from a server, wherein the display object data comprises display data used for displaying the rendered display object.
According to an embodiment of the present disclosure, the acquiring display object data further includes:
generating a rendering mesh for rendering a display object according to a result of the segmentation of the display object,
wherein the displaying the display object according to the display object data includes:
and rendering the display object according to the rendering grid.
According to an embodiment of the present disclosure, rendering a display object according to a rendering grid includes:
rendering the display object based on the vertex of the surface layer of the display object in the rendering mesh according to the condition that no sub-display object which is not displayed exists in the display object.
According to an embodiment of the present disclosure, the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further includes:
determining a sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the interaction body corresponding to the display object and the segmentation result of the display object;
and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
According to an embodiment of the present disclosure, wherein the display object is a three-dimensional display object, and the rendering mesh is a three-dimensional rendering mesh, modifying index information of meshes of sub-display objects involved in interaction in the rendering mesh, includes:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh such that an inner wall of a gap or hole through the three-dimensional display object is rendered in the rendering mesh of the sub-display object that is not being displayed.
According to the embodiment of the present disclosure, the display object is a three-dimensional display object with a multilayer structure, and the corresponding interaction bodies are arranged between the layers of the three-dimensional display object.
According to an embodiment of the present disclosure, rendering a display object according to a rendering grid includes:
and performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids.
According to an embodiment of the present disclosure, the performing a map rendering on a grid of sub-display objects involved in an interaction in the rendering grid includes:
and utilizing at least one map to map and render the boundary of the grid of the sub-display object related to the interaction in the rendered grid and the area adjacent to the boundary.
According to an embodiment of the present disclosure, the performing a map rendering on a grid of sub-display objects involved in an interaction in the rendering grid includes:
clearing current map template data of the rendering grid;
rendering an object template of the current display object;
and according to the object template, performing mapping rendering on the grids of the sub-display objects related to the interaction in the rendering grids.
According to an embodiment of the present disclosure, wherein the sub display object has a sub display object collision volume for enabling an interaction effect of the sub display object with an interaction applying object,
wherein the generating a rendering mesh for rendering a display object according to the segmentation result of the display object further comprises:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set a volume of the sub-display object collision volume as 0, according to interaction data of an interaction applying object and the sub-display object collision volume.
According to the embodiment of the present disclosure, the rendering the display object according to the rendering grid includes at least one of the following operations:
applying a visual basis characteristic to the rendering mesh;
applying texture characteristics to the render mesh;
applying a lighting characteristic to the rendering mesh.
Fig. 6 shows a block diagram of the structure of an object processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, an object processing apparatus includes:
a fourth obtaining module 601, configured to obtain display object data, where the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a first sending module 602 configured to send the display object data to a client for displaying the display object.
As shown in fig. 6, according to an embodiment of the present disclosure, the object processing apparatus further includes:
a second segmentation module 603 configured to locally segment the display object at the server to obtain sub-display object data of the sub-display object.
As shown in fig. 6, according to an embodiment of the present disclosure, the object processing apparatus further includes:
a fifth obtaining module 604 configured to obtain sub-display object data of the sub-display object from the client.
According to an embodiment of the present disclosure, wherein the segmenting the display object includes:
and segmenting the display object according to a preset mode.
According to an embodiment of the present disclosure, the segmenting the display object includes:
and determining the position related to the interaction of the display object according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position related to the interaction of the display object.
According to an embodiment of the present disclosure, the object processing apparatus further includes:
and determining the sub-display object data of the sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object.
According to an embodiment of the present disclosure, wherein the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
As shown in fig. 6, according to an embodiment of the present disclosure, the object processing apparatus further includes:
a sixth obtaining module 605 configured to obtain display object update data, the display object update data including updated sub-display object data of a specified sub-display object and/or additional display object data of an additional display object, the display object update data being used for rendering or displaying an updated display object.
According to an embodiment of the present disclosure, wherein the updated display object has changed, compared to the display object, at least one or more of the following of the designated sub-display objects: color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, wherein the additional display object data is used for determining at least one or more of the following for the additional display object: position, color, shape, transparency, texture, pattern, material.
According to an embodiment of the present disclosure, the object processing apparatus further includes:
determining, from interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, wherein the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
According to an embodiment of the present disclosure, the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated child display object, the updated child display object data of the designated child display object, the additional display object data, comprising:
determining, from interaction data of interaction enforcement objects from one or more clients with the display object or with corresponding interactors of the display object, at least one or more of: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
According to an embodiment of the present disclosure, the object processing apparatus further includes:
and synchronously sending the display object updating data to the plurality of clients.
According to an embodiment of the present disclosure, the object processing apparatus, wherein:
the display object update data comprises whole rendering data of the updated display object or whole display data of the updated display object; or
The display object update data comprises incremental rendering data or incremental display data of the updated display object compared to the display object.
According to an embodiment of the present disclosure, the object processing apparatus, wherein:
the acquiring of the display object update data includes: and locally acquiring the display object updating data from a server or acquiring the display object updating data from a client.
According to an embodiment of the present disclosure, the sending the display object data to a client for displaying the display object includes:
transmitting only a portion of the display object data to the client that is different from display object data previously transmitted to the client.
According to an embodiment of the present disclosure, the sending the display object data to a client for displaying the display object includes:
sending display object data to a client for displaying the display object, wherein the display object data comprises display data for displaying the rendered display object.
According to an embodiment of the present disclosure, the acquiring display object data further includes:
generating a rendering grid for rendering the display object according to the segmentation result of the display object;
and rendering the display object according to the rendering grid.
According to an embodiment of the present disclosure, rendering a display object according to a rendering grid includes:
rendering the display object based on the vertex of the surface layer of the display object in the rendering mesh according to the condition that no sub-display object which is not displayed exists in the display object.
According to an embodiment of the present disclosure, the generating a rendering mesh for rendering a display object according to a segmentation result of the display object further includes:
determining a sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the interaction body corresponding to the display object and the segmentation result of the display object;
and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
According to an embodiment of the present disclosure, wherein the display object is a three-dimensional display object, and the rendering mesh is a three-dimensional rendering mesh, modifying index information of meshes of sub-display objects involved in interaction in the rendering mesh, includes:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh such that an inner wall of a gap or hole through the three-dimensional display object is rendered in the rendering mesh of the sub-display object that is not being displayed.
According to the embodiment of the present disclosure, the display object is a three-dimensional display object with a multilayer structure, and the corresponding interaction bodies are arranged between the layers of the three-dimensional display object.
According to an embodiment of the present disclosure, rendering a display object according to a rendering grid includes:
and performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids.
According to an embodiment of the present disclosure, the performing a map rendering on a grid of sub-display objects involved in an interaction in the rendering grid includes:
and utilizing at least one map to map and render the boundary of the grid of the sub-display object related to the interaction in the rendered grid and the area adjacent to the boundary.
According to an embodiment of the present disclosure, the performing a map rendering on a grid of sub-display objects involved in an interaction in the rendering grid includes:
clearing current map template data of the rendering grid;
rendering an object template of the current display object;
and according to the object template, performing mapping rendering on the grids of the sub-display objects related to the interaction in the rendering grids.
According to an embodiment of the present disclosure, wherein the sub display object has a sub display object collision volume for enabling an interaction effect of the sub display object with an interaction applying object,
wherein the generating a rendering mesh for rendering a display object according to the segmentation result of the display object further comprises:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set a volume of the sub-display object collision volume as 0, according to interaction data of an interaction applying object and the sub-display object collision volume.
According to the embodiment of the present disclosure, the rendering the display object according to the rendering grid includes at least one of the following operations:
applying a visual basis characteristic to the rendering mesh;
applying texture characteristics to the render mesh;
applying a lighting characteristic to the rendering mesh.
Fig. 7 shows a block diagram of a virtual object processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, a virtual object processing apparatus includes:
a seventh obtaining module 701 configured to obtain virtual object data, wherein the virtual object data includes sub-virtual object data of a sub-virtual object obtained by dividing a virtual object;
a processing module 702 configured to display the virtual object according to the virtual object data or to transmit the virtual object data.
Fig. 8 shows a block diagram of a client according to an embodiment of the present disclosure.
As shown in fig. 8, a client includes:
an eighth obtaining module 801 configured to obtain display object data, where the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a third display module 802 configured to display the display object according to the display object data.
Fig. 9 shows a block diagram of a server according to an embodiment of the present disclosure.
As shown in fig. 9, a server includes:
a ninth obtaining module 901, configured to obtain display object data, where the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a second sending module 902 configured to send the display object data to a client for displaying the display object.
According to an embodiment of the present disclosure, there is provided a virtual reality object modification method, including:
acquiring virtual reality object data of a virtual reality object, wherein the virtual reality object data comprises sub-virtual reality object data of sub-virtual reality objects obtained by dividing the virtual reality object;
determining a sub-virtual reality object involved in the modification according to the modification data of the modification applied object and the virtual reality object;
acquiring virtual reality object modification data, wherein the virtual reality object modification data comprise modified sub-virtual reality object data of a sub-virtual reality object involved in modification and/or additional virtual reality object data of an additional virtual reality object, and the virtual reality object modification data are used for rendering or displaying a modification result of the virtual reality object.
According to embodiments of the present disclosure, virtual reality may refer to VR (Virtual Reality), and the concept may also include Augmented Reality (AR). The concepts of VR and AR are known from the related art and are not described in detail herein.
According to embodiments of the present disclosure, a virtual reality object may refer to an item presented in a virtual reality scene, e.g., a building including walls, floors, ceilings, etc. According to the embodiment of the present disclosure, the manner of obtaining the sub virtual reality object by dividing the virtual reality object may be the same as the manner of obtaining the sub display object by dividing the display object in the foregoing embodiment.
According to embodiments of the present disclosure, "retrofitting" may refer to retrofitting a building that includes walls, floors, ceilings, and the like. According to embodiments of the present disclosure, the manner of adaptation may be various manners of interacting with the display object as discussed in the foregoing embodiments. For example, a hole can be dug or dug in a wall, floor, ceiling by a tool such as a spade pick or crowbar. That is, the manner of remodeling may include damage to or alteration of a virtual reality object, such as a wall, floor, ceiling, or a sub-virtual reality object thereof (e.g., a wall block).
According to an embodiment of the present disclosure, the virtual reality object modification data comprises integral rendering data of the modified virtual reality object or integral display data of the modified virtual reality object; or the virtual reality object modification data comprises incremental rendering data or incremental display data of the modified virtual reality object compared to a previous virtual reality object.
According to the embodiment of the disclosure, when the virtual reality object is modified, for example damaged, the modification result of the virtual reality object can be rendered or displayed, so that only the sub-virtual reality objects involved in the modification are changed and the virtual reality object is updated. In this way, the data of the whole virtual reality object does not need to be updated every time the virtual reality object changes, which significantly reduces the computational load and resource overhead, improves the real-time performance of display, and improves the efficiency, real-time performance, and accuracy of interaction in multi-party interactive application scenarios. Moreover, virtual reality object modification schemes according to embodiments of the present disclosure may be used to provide virtual reality solutions for building finishing. For example, the effect of a finishing plan can be intuitively understood by making various modifications to a building in a virtual reality scene.
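A hypothetical sketch of applying such (incremental) modification data to a segmented wall in a finishing preview; the dictionary layout and key names are invented for illustration:

```python
# Hypothetical sketch of applying virtual reality object modification data
# (incremental form) to a segmented building wall in a finishing preview.

def apply_modification(vr_object, modification_data):
    """Merge modified sub-object data and append any additional objects."""
    updated = {**vr_object["sub_objects"],
               **modification_data.get("modified_sub_objects", {})}
    additions = (vr_object.get("additional_objects", [])
                 + modification_data.get("additional_objects", []))
    return {"sub_objects": updated, "additional_objects": additions}

wall = {"sub_objects": {"block_4": {"state": "intact"}}, "additional_objects": []}
dig_doorway = {
    "modified_sub_objects": {"block_4": {"state": "removed"}},       # block dug out
    "additional_objects": [{"type": "door_frame", "position": "block_4"}],
}
print(apply_modification(wall, dig_doorway))
```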
According to an embodiment of the present disclosure, there is provided a virtual reality object modification apparatus, including:
an acquisition module configured to acquire virtual reality object data of a virtual reality object, wherein the virtual reality object data includes sub-virtual reality object data of a sub-virtual reality object obtained by segmenting the virtual reality object;
a virtual reality adaptation module configured to determine a sub-virtual reality object involved in an adaptation according to adaptation data of an adaptation application object and the virtual reality object;
a virtual reality rendering module configured to obtain virtual reality object modification data, the virtual reality object modification data including modified sub-virtual reality object data of a sub-virtual reality object involved in the modification and/or additional virtual reality object data of an additional virtual reality object, the virtual reality object modification data being used to render or display a modification result of the virtual reality object.
According to the embodiment of the disclosure, the virtual reality object modification scheme can be applied to the virtual reality device in the related art to realize the virtual reality object modification device.
The embodiment of the present disclosure also discloses an electronic device, which includes a memory and a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to perform any of the method steps described above.
Fig. 10 shows a schematic structural diagram of a computer system suitable for implementing an object processing method according to an embodiment of the present disclosure.
As shown in fig. 10, the computer system 1000 includes a processing unit 1001 that can execute various processes in the above-described embodiments according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the system 1000 are also stored. The processing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read therefrom is installed into the storage section 1008 as necessary. The processing unit 1001 may be implemented as a CPU, a GPU, a TPU, an FPGA, an NPU, or another processing unit.
In particular, according to embodiments of the present disclosure, the methods described above may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program containing program code for performing the object processing method. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 1009 and/or installed from the removable medium 1011.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the disclosed embodiment also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the foregoing embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the embodiments of the present disclosure.
The foregoing description covers only the preferred embodiments of the present disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (67)

1. An object processing method, comprising:
acquiring display object data, wherein the display object data comprises sub-display object data of a sub-display object obtained by dividing the display object;
and displaying the display object according to the display object data.
2. The method of claim 1, further comprising:
and locally dividing the display object at the client to obtain the sub-display object data of the sub-display object.
3. The method of claim 1, further comprising:
and acquiring the sub-display object data of the sub-display object from the server.
4. The method of any of claims 1-3, wherein the segmenting the display object comprises:
and segmenting the display object according to a preset mode.
5. The method of any of claims 1-3, the segmenting the display object, comprising:
and determining the position related to the interaction of the display object according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position related to the interaction of the display object.
6. The method of claim 5, further comprising:
and determining the sub-display object data of the sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object.
7. The method of claim 4 or 5, wherein the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
8. The method of any of claims 1-7, further comprising:
acquiring display object update data, wherein the display object update data comprises update sub-display object data of a designated sub-display object and/or additional display object data of an additional display object, and the display object update data is used for rendering or displaying the update display object.
9. The method of claim 8, wherein at least one or more of the following of the designated child display objects is changed by the updated display object as compared to the display object: color, shape, transparency, texture, pattern, material.
10. The method of claim 8 or 9, wherein the additional display object data is used to determine at least one or more of the following for the additional display object: position, color, shape, transparency, texture, pattern, material.
11. The method according to any one of claims 8-10, further comprising:
determining, from interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
12. The method of claim 11, wherein the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
13. The method of claim 11, wherein the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data, comprises:
determining, from interaction data of interaction enforcement objects from one or more clients with the display object or with corresponding interactors of the display object, at least one or more of: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
14. The method of claim 13, further comprising:
and a plurality of clients synchronously receive the display object updating data.
15. The method of claim 11, wherein the determining, according to interaction data of an interaction application object with the display object or with a corresponding interaction body of the display object, at least one or more of the following to generate the display object update data: the designated sub-display object, the updated sub-display object data of the designated sub-display object, the additional display object data, comprises:
according to interaction data of an interaction application object and the display object or an interaction body corresponding to the display object sent by a server from one or more clients, determining at least one or more of the following items: the designated sub-display object, updated sub-display object data of the designated sub-display object, the additional display object data.
16. The method of any of claims 8-15, wherein:
the display object update data comprises whole rendering data of the updated display object or whole display data of the updated display object; or
The display object update data comprises incremental rendering data or incremental display data of the updated display object compared to the display object.
17. The method of any of claims 8-15, wherein:
the acquiring of the display object update data includes: and acquiring the display object updating data from a server side or locally acquiring the display object updating data from a client side.
18. The method according to any one of claims 8-15, further comprising: and displaying the updated display object according to the display object updating data.
19. The method of claim 1, the obtaining display object data, comprising:
and acquiring display object data from a server, wherein the display object data comprises display data used for displaying the rendered display object.
20. The method of any of claims 1-3, the obtaining display object data, further comprising:
generating a rendering mesh for rendering a display object according to a result of the segmentation of the display object,
wherein the displaying the display object according to the display object data includes:
and rendering the display object according to the rendering grid.
21. The method of claim 20, the rendering display objects according to a rendering grid, comprising:
rendering the display object based on the vertex of the surface layer of the display object in the rendering mesh according to the condition that no sub-display object which is not displayed exists in the display object.
22. The method of claim 20, the generating a rendering mesh for rendering a display object from a segmentation result of the display object, further comprising:
determining a sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the interaction body corresponding to the display object and the segmentation result of the display object;
and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
23. The method of claim 22, wherein the display object is a three-dimensional display object and the rendering mesh is a three-dimensional rendering mesh, modifying index information for meshes of sub-display objects involved in interactions in the rendering mesh, comprising:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh such that an inner wall of a gap or hole through the three-dimensional display object is rendered in the rendering mesh of the sub-display object that is not being displayed.
24. The method according to claim 22, wherein the display object is a three-dimensional display object of a multi-layer structure, and the corresponding interaction bodies are arranged between the layers of the three-dimensional display object.
25. The method of claim 22, the rendering display objects according to a rendering grid, comprising:
and performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids.
26. The method of claim 25, the charting a grid of sub-display objects involved in an interaction in the rendered grid, comprising:
and utilizing at least one map to map and render the boundary of the grid of the sub-display object related to the interaction in the rendered grid and the area adjacent to the boundary.
27. The method of claim 25, the charting a grid of sub-display objects involved in an interaction in the rendered grid, comprising:
clearing current map template data of the rendering grid;
rendering an object template of the current display object;
and according to the object template, performing mapping rendering on the grids of the sub-display objects related to the interaction in the rendering grids.
28. The method of claim 20, wherein the sub display object has a sub display object collision volume for enabling an interaction effect of the sub display object with an interaction applying object,
wherein the generating a rendering mesh for rendering a display object according to the segmentation result of the display object further comprises:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set a volume of the sub-display object collision volume as 0, according to interaction data of an interaction applying object and the sub-display object collision volume.
29. The method of claim 20, wherein the rendering the display object according to the rendering grid comprises at least one of:
applying a visual basis characteristic to the rendering mesh;
applying texture characteristics to the render mesh;
applying a lighting characteristic to the rendering mesh.
30. An object processing method, comprising:
acquiring display object data, wherein the display object data comprises sub-display object data of a sub-display object obtained by dividing the display object;
and sending the display object data to a client for displaying the display object.
31. The method of claim 30, further comprising:
and locally dividing the display object at the server to obtain the sub-display object data of the sub-display object.
32. The method of claim 30, further comprising:
and acquiring the sub-display object data of the sub-display object from the client.
33. The method of any of claims 30-32, wherein the segmenting the display object comprises:
and segmenting the display object according to a preset mode.
34. The method of any of claims 30-32, the segmenting the display object, comprising:
and determining the position related to the interaction of the display object according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object, and segmenting the display object according to the position related to the interaction of the display object.
35. The method of claim 34, further comprising:
and determining the sub-display object data of the sub-display object related to the interaction according to the interaction data of the interaction application object and the display object or the corresponding interaction body of the display object.
36. The method of any of claims 33-35, wherein the interaction data comprises at least one or more of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength and attributes of the interactive application object.
37. The method according to any of claims 30-36, further comprising:
acquiring display object update data, wherein the display object update data comprises update sub-display object data of a designated sub-display object and/or additional display object data of an additional display object, and the display object update data is used for rendering or displaying the update display object.
38. The method of claim 37, wherein, in the updated display object as compared to the display object, at least one of the following of the designated sub-display object is changed: color, shape, transparency, texture, pattern, material.
39. The method of claim 37 or 38, wherein the additional display object data is used to determine at least one of the following for the additional display object: position, color, shape, transparency, texture, pattern, material.
40. The method according to any one of claims 37-39, further comprising:
determining, according to interaction data of an interaction applying object with the display object or with a corresponding interaction body of the display object, at least one of the following to generate the display object update data: the designated sub-display object, updated sub-display object data of the designated sub-display object, and the additional display object data.
41. The method of claim 40, wherein the interaction data comprises at least one of: interactive operation type, interactive operation angle, interactive operation speed, interactive operation strength, and attributes of the interaction applying object.
42. The method of claim 40, wherein the determining, according to the interaction data of the interaction applying object with the display object or with the corresponding interaction body of the display object, at least one of the following to generate the display object update data: the designated sub-display object, the updated sub-display object data of the designated sub-display object, and the additional display object data, comprises:
determining, according to interaction data of interaction applying objects from one or more clients with the display object or with corresponding interaction bodies of the display object, at least one of the following: the designated sub-display object, updated sub-display object data of the designated sub-display object, and the additional display object data.
43. The method of claim 42, further comprising:
and synchronously sending the display object update data to the plurality of clients.
44. The method of any one of claims 37-43, wherein:
the display object update data comprises whole rendering data of the updated display object or whole display data of the updated display object; or
The display object update data comprises incremental rendering data or incremental display data of the updated display object compared to the display object.
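The two payload shapes in claim 44 can be contrasted with a small sketch: a whole snapshot of the updated display object versus an increment containing only the sub-display objects whose data changed. The dictionary layout below is an assumption, not the patent's wire format.

```python
from typing import Dict

def whole_update(updated_state: Dict[int, dict]) -> dict:
    # Whole rendering/display data of the updated display object.
    return {"type": "whole", "sub_objects": updated_state}

def incremental_update(previous_state: Dict[int, dict],
                       updated_state: Dict[int, dict]) -> dict:
    """Keep only sub-display objects whose data changed, plus newly added ones."""
    changed = {sub_id: data for sub_id, data in updated_state.items()
               if previous_state.get(sub_id) != data}
    return {"type": "incremental", "sub_objects": changed}

previous = {0: {"visible": True}, 1: {"visible": True}}
updated  = {0: {"visible": True}, 1: {"visible": False}, 2: {"color": "soot"}}
print(incremental_update(previous, updated))
# -> {'type': 'incremental', 'sub_objects': {1: {'visible': False}, 2: {'color': 'soot'}}}
```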
45. The method of any one of claims 37-43, wherein:
the acquiring the display object update data comprises: acquiring the display object update data locally at the server, or acquiring the display object update data from a client.
46. The method of any of claims 30-32, wherein said transmitting said display object data to a client for displaying said display object comprises:
transmitting, to the client, only a portion of the display object data that is different from display object data previously transmitted to the client.
47. The method of claim 30, the sending the display object data to a client for displaying the display object, comprising:
sending display object data to a client for displaying the display object, wherein the display object data comprises display data for displaying the rendered display object.
48. The method of any of claims 30-32, wherein the obtaining display object data further comprises:
generating a rendering grid for rendering the display object according to the segmentation result of the display object;
and rendering the display object according to the rendering grid.
49. The method of claim 48, wherein the rendering the display object according to the rendering grid comprises:
rendering the display object based on vertices of a surface layer of the display object in the rendering grid, in a case where no sub-display object that is not displayed exists in the display object.
50. The method of claim 48, the generating a rendering mesh for rendering a display object from segmentation results for the display object, further comprising:
determining a sub-display object involved in the interaction according to interaction data of an interaction applying object with the display object or with the corresponding interaction body of the display object, and according to the segmentation result of the display object;
and modifying the index information of the grids of the sub-display objects related to the interaction in the rendering grids according to the sub-display objects related to the interaction.
51. The method of claim 50, wherein the display object is a three-dimensional display object and the rendering mesh is a three-dimensional rendering mesh, and the modifying the index information of the meshes of the sub-display objects involved in the interaction in the rendering mesh comprises:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh such that an inner wall of a gap or hole passing through the three-dimensional display object is rendered at the rendering mesh of the sub-display object that is not displayed.
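A rough sketch of the index-information edit behind claims 50-51: triangles owned by sub-display objects that are no longer displayed are dropped from the index buffer, and precomputed inner-wall triangles of the resulting gap or hole are appended. Triangle ownership tracking and the inner_wall_triangles table are assumptions made for this illustration, not the patent's data layout.

```python
from typing import Dict, List, Set

Triangle = List[int]  # three vertex indices

def rebuild_index_buffer(index_buffer: List[Triangle],
                         triangle_owner: List[int],
                         removed_sub_objects: Set[int],
                         inner_wall_triangles: Dict[int, List[Triangle]]) -> List[Triangle]:
    # Keep only triangles whose owning sub-display object is still displayed.
    kept = [tri for tri, owner in zip(index_buffer, triangle_owner)
            if owner not in removed_sub_objects]
    # Expose the inner wall of the gap or hole left by each removed sub-object.
    for sub_id in removed_sub_objects:
        kept.extend(inner_wall_triangles.get(sub_id, []))
    return kept

index_buffer = [[0, 1, 2], [2, 1, 3], [4, 5, 6]]
triangle_owner = [0, 0, 1]                      # owning sub-display object per triangle
walls = {1: [[7, 8, 9], [9, 8, 10]]}            # precomputed hole-wall triangles
print(rebuild_index_buffer(index_buffer, triangle_owner, {1}, walls))
# -> [[0, 1, 2], [2, 1, 3], [7, 8, 9], [9, 8, 10]]
```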
52. The method of claim 50, wherein the display object is a three-dimensional display object with a multi-layer structure, and the corresponding interaction bodies are arranged between the layers of the three-dimensional display object.
53. The method of claim 50, the rendering display objects according to a rendering grid, comprising:
and performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids.
54. The method of claim 53, wherein the performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids comprises:
and performing mapping rendering, with at least one map, on a boundary of the grid of the sub-display object involved in the interaction in the rendering grids and on an area adjacent to the boundary.
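One way to read claim 54 is as a boundary-decal pass: find where the interacted region meets the remaining mesh and apply an extra map to the adjacent triangles. The edge-sharing adjacency test and all names in the sketch below are assumptions, not the patent's method.

```python
from itertools import combinations
from typing import List, Set, Tuple

Triangle = Tuple[int, int, int]

def edges(tri: Triangle) -> Set[Tuple[int, int]]:
    # The three undirected edges of a triangle.
    return {tuple(sorted(e)) for e in combinations(tri, 2)}

def boundary_decal_targets(remaining: List[Triangle],
                           removed: List[Triangle]) -> List[Triangle]:
    """Remaining triangles that share an edge with the removed region; these get
    the boundary map in an additional rendering pass."""
    removed_edges = set().union(*(edges(t) for t in removed)) if removed else set()
    return [t for t in remaining if edges(t) & removed_edges]

remaining = [(0, 1, 2), (2, 3, 4)]
removed = [(1, 2, 5)]
print(boundary_decal_targets(remaining, removed))  # -> [(0, 1, 2)]
```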
55. The method of claim 53, wherein the performing mapping rendering on the grids of the sub-display objects involved in the interaction in the rendering grids comprises:
clearing current mapping template data of the rendering grid;
rendering an object template of the current display object;
and according to the object template, performing mapping rendering on the grids of the sub-display objects related to the interaction in the rendering grids.
56. The method of claim 48, wherein the sub-display object has a sub-display object collision volume for enabling an interaction effect of the sub-display object with an interaction applying object,
wherein the generating a rendering mesh for rendering a display object according to the segmentation result of the display object further comprises:
modifying index information of a mesh of a sub-display object involved in an interaction in the rendering mesh to delete the sub-display object collision volume, or to set the sub-display object collision volume as unavailable, or to set a volume of the sub-display object collision volume as 0, according to interaction data of an interaction applying object and the sub-display object collision volume.
57. The method of claim 48, wherein said rendering the display object according to the rendering grid comprises at least one of:
applying a visual basis characteristic to the rendering mesh;
applying texture characteristics to the render mesh;
applying a lighting characteristic to the rendering mesh.
58. A virtual object processing method, comprising:
acquiring virtual object data, wherein the virtual object data comprises sub-virtual object data of sub-virtual objects obtained by dividing virtual objects;
and displaying the virtual object according to the virtual object data or sending the virtual object data.
59. An object processing apparatus comprising:
a first obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a first display module configured to display the display object according to the display object data.
60. An object processing apparatus comprising:
a fourth obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a first sending module configured to send the display object data to a client for displaying the display object.
61. A virtual object processing apparatus, comprising:
a seventh obtaining module configured to obtain virtual object data, wherein the virtual object data includes sub-virtual object data of a sub-virtual object obtained by dividing a virtual object;
a processing module configured to display the virtual object according to the virtual object data or to transmit the virtual object data.
62. A client, comprising:
an eighth obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a third display module configured to display the display object according to the display object data.
63. A server, comprising:
a ninth obtaining module configured to obtain display object data, wherein the display object data includes sub-display object data of a sub-display object obtained by dividing a display object;
a second sending module configured to send the display object data to a client for displaying the display object.
64. A virtual reality object modification method, comprising:
acquiring virtual reality object data of a virtual reality object, wherein the virtual reality object data comprises sub-virtual reality object data of sub-virtual reality objects obtained by dividing the virtual reality object;
determining a sub-virtual reality object involved in the modification according to modification data of a modification applying object and the virtual reality object;
acquiring virtual reality object modification data, wherein the virtual reality object modification data comprise modified sub-virtual reality object data of a sub-virtual reality object involved in modification and/or additional virtual reality object data of an additional virtual reality object, and the virtual reality object modification data are used for rendering or displaying a modification result of the virtual reality object.
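A compact sketch tying together the three steps of claim 64: acquire the segmented virtual reality object, locate the sub-object involved in the modification, and emit modification data (modified sub-object data plus an additional object). The hit test, the data shapes, and the "burn mark" additional object are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SubVRObject:
    sub_id: int
    center: List[float]
    visible: bool = True

@dataclass
class VRObject:
    object_id: str
    sub_objects: List[SubVRObject] = field(default_factory=list)

def find_modified_sub(vr_object: VRObject, hit_point: List[float],
                      radius: float = 0.5) -> Optional[SubVRObject]:
    """Step 2: pick the sub-virtual-reality object involved in the modification."""
    for sub in vr_object.sub_objects:
        if sum((a - b) ** 2 for a, b in zip(sub.center, hit_point)) <= radius ** 2:
            return sub
    return None

def build_modification_data(sub: SubVRObject) -> Dict:
    """Step 3: modified sub-object data plus an additional display object."""
    return {"modified_sub": {"sub_id": sub.sub_id, "visible": False},
            "additional": {"kind": "burn_mark", "position": sub.center}}

obj = VRObject("crate", [SubVRObject(0, [0.0, 0.0, 0.0]), SubVRObject(1, [1.0, 0.0, 0.0])])
hit = find_modified_sub(obj, hit_point=[0.1, 0.0, 0.0])
if hit:
    print(build_modification_data(hit))
```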
65. A virtual reality object modification apparatus, comprising:
an acquisition module configured to acquire virtual reality object data of a virtual reality object, wherein the virtual reality object data includes sub-virtual reality object data of a sub-virtual reality object obtained by segmenting the virtual reality object;
a virtual reality modification module configured to determine a sub-virtual reality object involved in a modification according to modification data of a modification applying object and the virtual reality object;
a virtual reality rendering module configured to obtain virtual reality object modification data, the virtual reality object modification data including modified sub-virtual reality object data of a sub-virtual reality object involved in the modification and/or additional virtual reality object data of an additional virtual reality object, the virtual reality object modification data being used to render or display a modification result of the virtual reality object.
66. An electronic device, comprising a memory and a processor; wherein
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of any one of claims 1-58 and 64.
67. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method steps of any one of claims 1-58 and 64.
CN202010351778.2A 2020-04-28 2020-04-28 Object processing method, client, server, electronic device and storage medium Pending CN113289348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010351778.2A CN113289348A (en) 2020-04-28 2020-04-28 Object processing method, client, server, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113289348A true CN113289348A (en) 2021-08-24

Family

ID=77317976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010351778.2A Pending CN113289348A (en) 2020-04-28 2020-04-28 Object processing method, client, server, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113289348A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6392644B1 (en) * 1998-05-25 2002-05-21 Fujitsu Limited Three-dimensional graphics display system
CN102160086A (en) * 2008-07-22 2011-08-17 索尼在线娱乐有限公司 System and method for physics interactions in a simulation
CN106215419A (en) * 2016-07-28 2016-12-14 腾讯科技(深圳)有限公司 Collision control method and device
CN108579085A (en) * 2018-03-12 2018-09-28 腾讯科技(深圳)有限公司 Treating method and apparatus, storage medium and the electronic device of barrier collision
CN110384924A (en) * 2019-08-21 2019-10-29 网易(杭州)网络有限公司 The display control method of virtual objects, device, medium and equipment in scene of game

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221017

Address after: Room 1901, 19/F, Lee Garden Phase 1, 33 Hysan Avenue, Causeway Bay, Hong Kong, China

Applicant after: Lingxi Interactive Entertainment Holding Co.,Ltd.

Address before: Fourth Floor, One Capital Place, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: ALIBABA GROUP HOLDING Ltd.