CN116112742A - Image rendering method, device, medium and equipment under virtual interaction scene

Info

Publication number
CN116112742A
CN116112742A (application CN202310122524.7A)
Authority
CN
China
Prior art keywords
client
virtual
global
image
dimensional image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310122524.7A
Other languages
Chinese (zh)
Inventor
李林生
秦晓康
杨斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202310122524.7A
Publication of CN116112742A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; content per se
    • H04N 21/81: Monomedia components thereof
    • H04N 21/816: Monomedia components involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The specification discloses an image rendering method, apparatus, medium, and device for virtual interactive scenes. The method comprises: receiving the operation instructions sent by each client; determining state change data of each virtual object in the virtual interaction scene according to the operation instructions; generating a global three-dimensional image of the virtual interaction scene according to the state change data; and, for each client, cropping the global three-dimensional image according to the position and viewing angle of that client in the virtual interaction scene when it sent the operation instruction, and sending the cropped image data to the client for rendering.

Description

Image rendering method, device, medium and equipment under virtual interaction scene
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a medium, and a device for image rendering in a virtual interactive scene.
Background
With the development of science and technology, virtual interaction scenes are widely used in fields such as virtual online conferences and multi-user online virtual interaction. By rendering pictures from the viewpoint of each client, with the client as the medium, every participating user can interact and communicate in the virtual environment.
However, when rendering the clients' pictures in the virtual environment, the server generally renders each client's picture independently; that is, every picture the server sends to a client requires its own image rendering operation. As the number of clients grows, the number of images the server must render grows with it, which occupies a large share of the server's system resources and severely reduces image rendering efficiency.
Therefore, how to optimize the way client pictures are rendered in the virtual environment, reduce the system resources occupied during rendering, and improve image rendering efficiency is a problem that urgently needs to be solved.
Disclosure of Invention
This specification provides an image rendering method, apparatus, medium, and device for virtual interactive scenes, so as to reduce the system resources occupied during image rendering and improve image rendering efficiency.
The technical solution adopted by this specification is as follows:
This specification provides an image rendering method for a virtual interactive scene, comprising:
receiving the operation instructions sent by each client;
determining state change data of each virtual object in the virtual interaction scene according to the operation instructions;
generating a global three-dimensional image of the virtual interaction scene according to the state change data;
and, for each client, performing image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and sending the cropped image data to the client for rendering.
Optionally, determining state change data of each virtual object in the virtual interaction scene specifically includes:
determining the state change data of each virtual object in the virtual interaction scene according to the operation instructions and the environment information in the virtual interaction scene.
Optionally, generating a global three-dimensional image for the virtual interactive scene according to the state change data specifically includes:
acquiring an initial global three-dimensional image, and determining initial state information corresponding to each virtual object in the initial global three-dimensional image;
updating the initial state information according to the state change data to obtain updated state information corresponding to each virtual object;
and updating the initial global three-dimensional image according to the updated state information to obtain the global three-dimensional image.
Optionally, for each client, performing image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction specifically includes:
determining the image cropping window corresponding to the client in the global three-dimensional image according to that position and viewing angle;
and performing image cropping in the global three-dimensional image through the image cropping window.
Optionally, determining the image cropping window corresponding to the client in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction specifically includes:
updating that position and viewing angle according to the operation instruction sent by the client, to obtain an updated position and an updated viewing angle;
and determining the image cropping window corresponding to the client in the global three-dimensional image according to the updated position and the updated viewing angle.
This specification provides an image rendering apparatus for a virtual interactive scene, comprising:
a receiving module, configured to receive the operation instructions sent by each client;
a determining module, configured to determine state change data of each virtual object in the virtual interaction scene according to the operation instructions;
a generating module, configured to generate a global three-dimensional image of the virtual interaction scene according to the state change data;
and a cropping module, configured to, for each client, perform image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and send the cropped image data to the client for rendering.
Optionally, the determining module is specifically configured to determine, according to the operation instruction and the environmental information in the virtual interaction scene, state change data of each virtual object in the virtual interaction scene.
Optionally, the generating module is specifically configured to obtain an initial global three-dimensional image, and determine initial state information corresponding to each virtual object in the initial global three-dimensional image; updating the initial state information according to the state change data to obtain updated state information corresponding to each virtual object; and updating the initial global three-dimensional image according to the updated state information to obtain the global three-dimensional image.
Optionally, the cropping module is specifically configured to determine the image cropping window corresponding to the client in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and to perform image cropping in the global three-dimensional image through the image cropping window.
Optionally, the cropping module is specifically configured to update that position and viewing angle according to the operation instruction sent by the client, to obtain an updated position and an updated viewing angle, and to determine the image cropping window corresponding to the client in the global three-dimensional image according to the updated position and the updated viewing angle.
The present specification provides a computer readable storage medium storing a computer program which when executed by a processor implements the image rendering method under a virtual interactive scene described above.
The present disclosure provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the image rendering method under the virtual interactive scene when executing the program.
At least one of the technical solutions adopted in this specification can achieve the following beneficial effects:
In the image rendering method under a virtual interaction scene provided by this specification, the operation instructions sent by each client are received; state change data of each virtual object in the virtual interaction scene is determined according to the operation instructions; a global three-dimensional image of the virtual interaction scene is generated according to the state change data; and, for each client, the global three-dimensional image is cropped according to the position and viewing angle of that client in the virtual interaction scene when it sent the operation instruction, and the cropped image data is sent to the client for rendering.
In this method, when rendering images for the clients, the global three-dimensional image is rendered first, and that three-dimensional image is then cropped according to each client's position and viewing angle in the virtual interaction scene. The server therefore needs to perform only one rendering pass; to obtain the image that each client must render and display, it only has to perform the corresponding cropping in the already-rendered global three-dimensional image.
Drawings
The accompanying drawings described here are included to provide a further understanding of this specification and constitute a part of it; the exemplary embodiments of this specification and their descriptions are used to explain this specification and do not unduly limit it. In the drawings:
fig. 1 is a schematic flow chart of an image rendering method under a virtual interactive scene provided in the present specification;
fig. 2 is a schematic diagram of an image rendering device under a virtual interactive scene provided in the present disclosure;
fig. 3 is a schematic view of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a flow chart of an image rendering method under a virtual interactive scene provided in the present specification, which includes the following steps:
s100: and receiving operation instructions sent by the clients.
In a scenario such as a Virtual online meeting, a Virtual interaction on a multi-person line (e.g., virtual Reality (VR) interaction on a multi-person line), etc., each client corresponds to an instance in the Virtual interaction scenario, and the instance may correspond to a Virtual character that a user manipulates in the Virtual interaction scenario, and the server schedules each client corresponding to an environmental screen in the same Virtual interaction scenario, and feeds back the changed environmental screen to the client in the field of view where the environmental screen exists.
For example, if there are two clients a and B in the virtual interaction scene, after the instance in a generates an operation instruction, the operation instruction is sent to the server, the server calculates the change of the corresponding instance in a and the change of the virtual object in the view of a according to the operation instruction, and renders the changed instance picture and the virtual object picture to be fed back to a, if at this moment, the same environment picture as in a exists in the view of B, the server also renders and feeds back the virtual object picture after the change in the view of B to B, and if no virtual picture as in a exists in the view of B, the server renders and feeds back the instance picture and the virtual object picture in the view of B at this moment to B.
In the process, each time a client exists, a picture corresponding to the client needs to be rendered, so that a large amount of system resources are occupied, and the rendering efficiency is seriously reduced. In this process, the server needs to receive the operation instruction sent by each client.
The client may be a designated device such as a mobile phone, tablet computer, notebook computer, or desktop computer, or another device such as a VR device. The operation instruction may be input by the user operating the client through a mouse, keyboard, touch screen, VR device, or the like; this specification does not specifically limit it.
In this specification, the execution body that implements the image rendering method under a virtual interactive scene may be a designated device such as a server deployed on a service platform. For convenience of description, this specification takes only the server as the execution body when describing the image rendering method it provides.
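As an illustration of this receiving step, the following minimal Python sketch shows how a server might batch the operation instructions arriving from all clients between two update ticks. It is only a sketch; the field names and the queue-based design are illustrative assumptions, not taken from this specification.

```python
import queue
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class OperationInstruction:
    client_id: str                     # which client sent the instruction
    action: str                        # e.g. "move_forward", "jump", "ignite"
    position: Tuple[float, float]      # client's scene position when it sent the instruction
    view_angle: Tuple[float, float]    # client's viewing angle at that moment
    target_id: Optional[str] = None    # object acted on, if any

class InstructionReceiver:
    """Collects the operation instructions sent by each client for one update tick."""

    def __init__(self) -> None:
        self._inbox: "queue.Queue[OperationInstruction]" = queue.Queue()

    def on_client_message(self, instruction: OperationInstruction) -> None:
        # Called by the network layer whenever a client sends an instruction.
        self._inbox.put(instruction)

    def drain(self) -> List[OperationInstruction]:
        # Return every instruction received since the last tick.
        batch = []
        while not self._inbox.empty():
            batch.append(self._inbox.get())
        return batch
```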
S102: determining state change data of each virtual object in the virtual interaction scene according to the operation instructions.
By inputting operation instructions in the client, the user can make the virtual object in the server perform a series of virtual behaviors. When the user inputs an operation instruction, the client sends it to the server, so that the server generates the state change data of the corresponding virtual object after executing the instruction.
The virtual object may be the virtual character the user controls in the virtual interaction scene (the instance corresponding to the user's client), or another virtual object associated with the user's operation instruction.
When an operation instruction makes the virtual character move forward, move backward, run, jump, and so on, the pose of the virtual character changes accordingly, and state change data corresponding to the virtual character is generated.
When the user uses an operation instruction to make the virtual character interact with other virtual objects in the virtual interaction scene, the states of those objects change as well. For example, when the user makes the virtual character ignite a virtual object "paper" through the client, the state of the "paper" changes to a burning state, so the server can also treat the "paper" as a virtual object and determine its corresponding state change data.
It should be noted that the operation instructions may be sent by at least one client currently communicating with the server, and the state change data of each virtual object in the virtual interaction scene can be determined according to the instructions sent by that at least one client.
In this specification, a virtual object may be a virtual model of a person, animal, stone, tree, building, weather, lake water, and so on in the virtual interactive scene, and its state information may be its pose, color, physical state, rendering special effects, and the like, such as lake water freezing, flowing, or forming ripples, or objects burning, exploding, or being struck by lightning. Of course, the state information of other virtual objects may also be included, for example the skill effects shown after different skills are released in a multiplayer online game scene, which this specification does not limit.
Correspondingly, the state change data of a virtual object describes the state change caused by the input operation instruction. For example, if the lake water in the virtual interaction scene goes from a calm state to a rippling state, the data corresponding to that transition is the state change data of the lake water.
In addition, the server may determine the state change data of each virtual object according to the environment information in the virtual interaction scene, where the environment information may include the time, season, temperature, airflow, and so on in the virtual interaction scene, which this specification does not specifically limit. For example, as time changes in the virtual interaction scene, the weather changes, or state change data is generated for changes such as the color and fullness of leaves.
It should be noted that the server may determine a virtual object's state change data from the operation instruction alone, or from the operation instruction combined with the environment information, and the state change data may be caused by an operation instruction from the current client or by one from another client.
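As a sketch of this determination step, the following function derives per-object state change data from a batch of operation instructions plus the scene's environment information. The dict-based schemas and the specific rules (igniting "paper", autumn leaves) are illustrative assumptions that mirror the examples above, not a fixed algorithm from this specification.

```python
def determine_state_changes(instructions, environment, scene_objects):
    """Map operation instructions and environment information to state change data.

    instructions:  list of dicts like {"client_id", "action", "target_id"}
    environment:   dict like {"season": "autumn", "time": ...}
    scene_objects: dict of object_id -> {"type": ...} describing the scene
    """
    changes = {}  # object_id -> state change data
    for ins in instructions:
        if ins["action"] == "ignite" and ins.get("target_id"):
            # An instruction can change objects other than the user's own character,
            # e.g. igniting the virtual object "paper" puts it into a burning state.
            changes[ins["target_id"]] = {"state": "burning"}
        elif ins["action"] in ("move_forward", "move_backward", "run", "jump"):
            # Pose change of the virtual character controlled by this client.
            changes[ins["client_id"]] = {"pose": ins["action"]}

    # Environment information (time, season, temperature, ...) can also produce
    # state changes, e.g. leaves changing colour as the season turns.
    if environment.get("season") == "autumn":
        for obj_id, obj in scene_objects.items():
            if obj.get("type") == "tree":
                changes.setdefault(obj_id, {})["leaf_color"] = "yellow"
    return changes
```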
S104: generating a global three-dimensional image of the virtual interaction scene according to the state change data.
Specifically, after obtaining the state change data, the server runs the logic corresponding to the operation instruction and thereby determines the state information of each virtual object. For example, when the user makes the virtual character interact with the water surface in the virtual interaction scene through the client, the water surface is affected by the character's behavior and changes from its original still state to a rippling state; that rippling state is the state information of the water surface after the server has run the logic corresponding to the operation instruction.
The server can then generate the global three-dimensional image according to the state information of each virtual object after that logic has been run.
Further, the server may acquire an initial global three-dimensional image and update the initial state information of each virtual object in it according to the state change data, obtaining the updated state information of each virtual object. It should be noted that the initial global three-dimensional image may be the global three-dimensional image generated after the server received the previous operation instruction, or the global three-dimensional image that exists when the current service is started.
For example, suppose the initial state of a "tree" in the virtual interaction scene is normal growth. When the server receives an operation instruction that changes the "tree" from the normal growth state to a withered state and generates the state change data, the virtual object "tree" changes from its previous state to the withered state, and the information corresponding to the withered state is the updated state information of that virtual object. The server can then update the initial global three-dimensional image according to the updated state information of each virtual object, obtaining the updated global three-dimensional image.
It should be noted that the virtual interaction scene mentioned in this specification may be an environment constructed from data and program logic in the server before the global three-dimensional image is rendered; once the global three-dimensional image is rendered, the state changes of the virtual objects in the virtual interaction scene are reflected in that image.
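A minimal sketch of this update step is given below: the initial state of every virtual object is merged with its state change data, and the updated scene is rendered once into a global image shared by all clients. `render_scene` is a hypothetical stand-in for the server's actual rendering engine, stubbed out here so the sketch runs end to end.

```python
def render_scene(scene_state):
    # Stand-in for the server's real rendering engine: we simply return the
    # updated state dict as the "global image" so the sketch is executable.
    return scene_state

def update_global_scene(initial_state, state_changes):
    """Apply state change data to the initial state of each virtual object,
    then render the whole scene once to obtain the global 3D image."""
    updated_state = {}
    for obj_id, state in initial_state.items():
        new_state = dict(state)                            # copy initial state info
        new_state.update(state_changes.get(obj_id, {}))    # apply any change data
        updated_state[obj_id] = new_state
    global_image = render_scene(updated_state)             # one pass for all clients
    return updated_state, global_image

# Usage: the "tree" example from above.
state, image = update_global_scene(
    initial_state={"tree_1": {"type": "tree", "growth": "normal"}},
    state_changes={"tree_1": {"growth": "withered"}},
)
assert state["tree_1"]["growth"] == "withered"
```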
S106: for each client, performing image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and sending the cropped image data to the client for display.
Different users set different viewing angles and picture proportions in their clients (for example, users can set the resolution, viewing angle, and field of view of the displayed picture according to their own habits), and different clients have different configurations (for example, the resolution and size of a picture displayed on a mobile phone differ from those on a computer). Consequently, the positions and viewing angles in the virtual interaction scene from which the clients send their operation instructions also differ.
Therefore, for each client connected to the server, the server can crop the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and send the cropped image data to the client for display.
Specifically, each client corresponds to an image acquisition port in the server. For each client, the server can acquire images from the global three-dimensional image through that client's image acquisition port, according to the image acquisition position and acquisition range corresponding to the client in the global three-dimensional image.
Further, the server may determine the image cropping window corresponding to the client according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction. After the server receives the operation instruction, if the instruction does not change the position and viewing angle of the client's virtual object in the virtual interaction scene, the server can crop the global three-dimensional image directly through that image cropping window.
If the operation instruction does change the position and viewing angle of the client's virtual object in the virtual interaction scene (for example, an instruction to move or run sent by the client), the server can update the position and viewing angle that were in effect when the client sent the instruction, obtain the updated position and viewing angle, and then determine the client's image cropping window according to them.
For example, when the server receives an operation instruction that moves the virtual object forward, the client's corresponding position and viewing angle in the virtual interactive scene move too, and so does the corresponding image cropping window. When the virtual object is at position (0, 0) in the virtual interactive scene, the image cropping window is [(-0.5, -0.5), (0.5, 0.5)] (the first coordinate is the lower-left corner of the window, the second the upper-right corner); when the virtual object moves to (0.5, 0.5), the corresponding image cropping window moves to [(0, 0), (1, 1)].
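The following few lines reproduce the window coordinates of this example, assuming a fixed 1x1 cropping window centred on the virtual object; in practice the window would also depend on the client's resolution, field of view, and viewing angle.

```python
def crop_window(position, half_width=0.5, half_height=0.5):
    """Image cropping window that follows the client's virtual object:
    returns (lower_left, upper_right) corners centred on `position`."""
    x, y = position
    return (x - half_width, y - half_height), (x + half_width, y + half_height)

# The numbers from the example above:
assert crop_window((0.0, 0.0)) == ((-0.5, -0.5), (0.5, 0.5))
assert crop_window((0.5, 0.5)) == ((0.0, 0.0), (1.0, 1.0))
```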
After determining the image cropping window corresponding to each client, the server can crop the global three-dimensional image through each client's window and send the cropped images to the clients, so that each client renders and displays the image it receives.
It should be noted that, since the images displayed by all clients are cropped from the same global three-dimensional image at the same moment, if two or more clients contain the same virtual object or the same scene, the state of that virtual object or scene stays synchronized across those clients. In other words, the image states shown in all clients are synchronized; if two or more clients contain the same virtual object, its state is the same in each of them.
In this method, when rendering images for the clients, the global three-dimensional image is rendered first, and that three-dimensional image is then cropped according to each client's position and viewing angle in the virtual interaction scene. The server therefore needs to perform only one rendering pass; to obtain the image that each client must render and display, it only has to perform the corresponding cropping in the already-rendered global three-dimensional image.
The above is the image rendering method under a virtual interactive scene provided by one or more embodiments of this specification. Based on the same idea, this specification further provides a corresponding image rendering apparatus under a virtual interactive scene, as shown in fig. 2.
Fig. 2 is a schematic diagram of an image rendering device under a virtual interactive scene provided in the present disclosure, including:
a receiving module 200, configured to receive an operation instruction sent by each client;
the determining module 202 is configured to determine state change data of each virtual object in the virtual interaction scene according to the operation instruction;
a generating module 204, configured to generate a global three-dimensional image for the virtual interactive scene according to the state change data;
and a cropping module 206, configured to, for each client, perform image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and send the cropped image data to the client for rendering.
Optionally, the determining module 202 is specifically configured to determine, according to the operation instruction and the environmental information in the virtual interaction scene, state change data of each virtual object in the virtual interaction scene.
Optionally, the generating module 204 is specifically configured to obtain an initial global three-dimensional image, and determine initial state information corresponding to each virtual object in the initial global three-dimensional image; updating the initial state information according to the state change data to obtain updated state information corresponding to each virtual object; and updating the initial global three-dimensional image according to the updated state information to obtain the global three-dimensional image.
Optionally, the cropping module 206 is specifically configured to determine the image cropping window corresponding to the client in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and to perform image cropping in the global three-dimensional image through the image cropping window.
Optionally, the cropping module 206 is specifically configured to update that position and viewing angle according to the operation instruction sent by the client, to obtain an updated position and an updated viewing angle, and to determine the image cropping window corresponding to the client in the global three-dimensional image according to the updated position and the updated viewing angle.
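To make the division of labour between the four modules of fig. 2 concrete, the sketch below wires them together for one update tick. The module interfaces (`receive_all`, `state_changes`, and so on) are illustrative assumptions, not an API defined by this specification.

```python
class ImageRenderingDevice:
    """Structural sketch of the apparatus in fig. 2: one rendering pass per tick,
    then per-client cropping and dispatch."""

    def __init__(self, receiving, determining, generating, cropping):
        self.receiving = receiving      # receiving module: collects client instructions
        self.determining = determining  # determining module: derives state change data
        self.generating = generating    # generating module: renders the global 3D image
        self.cropping = cropping        # cropping module: crops and sends per-client images

    def tick(self):
        instructions = self.receiving.receive_all()
        changes = self.determining.state_changes(instructions)
        global_image = self.generating.global_image(changes)   # rendered once
        for ins in instructions:
            window = self.cropping.window_for(ins)              # position + viewing angle
            image = self.cropping.crop(global_image, window)
            self.cropping.send(ins.client_id, image)
```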
The present specification also provides a computer-readable storage medium storing a computer program operable to perform a method of image rendering in a virtual interactive scene as provided in fig. 1 above.
This specification also provides a schematic structural diagram, shown in fig. 3, of an electronic device corresponding to fig. 1. As shown in fig. 3, at the hardware level the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include the hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it, implementing the image rendering method under a virtual interaction scene described above with fig. 1. Of course, besides software implementations, this specification does not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution body of the processing flow is not limited to logic units and may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs a digital system "onto" a single PLD without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of making integrated circuit chips by hand, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must also be written in a specific programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It should also be clear to those skilled in the art that a hardware circuit implementing a logic method flow can easily be obtained merely by lightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by that (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the memory's control logic. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for performing various functions can also be regarded as structures within the hardware component. Indeed, means for performing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, Random Access Memory (RAM), and/or non-volatile memory in computer-readable media, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static RAM (SRAM), Dynamic RAM (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, and any other non-transmission medium that can store information accessible by a computing device. As defined here, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," and any other variants are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a(n) ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises that element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.

Claims (12)

1. An image rendering method under a virtual interactive scene, comprising:
receiving an operation instruction sent by each client;
according to the operation instruction, determining state change data of each virtual object in the virtual interaction scene;
generating a global three-dimensional image of the virtual interaction scene according to the state change data;
and, for each client, performing image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and sending the cropped image data to the client for rendering.
2. The method according to claim 1, wherein determining the state change data of each virtual object in the virtual interaction scene according to the operation instruction specifically comprises:
and determining state change data of each virtual object in the virtual interaction scene according to the operation instruction and the environment information in the virtual interaction scene.
3. The method according to claim 1, wherein generating a global three-dimensional image for the virtual interactive scene according to the state change data specifically comprises:
acquiring an initial global three-dimensional image, and determining initial state information corresponding to each virtual object in the initial global three-dimensional image;
updating the initial state information according to the state change data to obtain updated state information corresponding to each virtual object;
and updating the initial global three-dimensional image according to the updated state information to obtain the global three-dimensional image.
4. The method according to claim 1, wherein, for each client, performing image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction specifically comprises:
determining the image cropping window corresponding to the client in the global three-dimensional image according to that position and viewing angle;
and performing image cropping in the global three-dimensional image through the image cropping window.
5. The method according to claim 4, wherein determining the image cropping window corresponding to the client in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction specifically comprises:
updating that position and viewing angle according to the operation instruction sent by the client, to obtain an updated position and an updated viewing angle;
and determining the image cropping window corresponding to the client in the global three-dimensional image according to the updated position and the updated viewing angle.
6. An image rendering apparatus in a virtual interactive scene, comprising:
a receiving module, configured to receive the operation instructions sent by each client;
a determining module, configured to determine state change data of each virtual object in the virtual interaction scene according to the operation instructions;
a generating module, configured to generate a global three-dimensional image of the virtual interaction scene according to the state change data;
and a cropping module, configured to, for each client, perform image cropping in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and send the cropped image data to the client for rendering.
7. The apparatus of claim 6, wherein the determining module is specifically configured to determine state change data of each virtual object in the virtual interaction scene according to the operation instruction and the environmental information in the virtual interaction scene.
8. The apparatus of claim 6, wherein the generating module is specifically configured to obtain an initial global three-dimensional image, and determine initial state information corresponding to each virtual object in the initial global three-dimensional image; updating the initial state information according to the state change data to obtain updated state information corresponding to each virtual object; and updating the initial global three-dimensional image according to the updated state information to obtain the global three-dimensional image.
9. The apparatus according to claim 6, wherein the cropping module is specifically configured to determine the image cropping window corresponding to the client in the global three-dimensional image according to the position and viewing angle of the client in the virtual interaction scene when it sent the operation instruction, and to perform image cropping in the global three-dimensional image through the image cropping window.
10. The apparatus according to claim 9, wherein the cropping module is specifically configured to update that position and viewing angle according to the operation instruction sent by the client, to obtain an updated position and an updated viewing angle, and to determine the image cropping window corresponding to the client in the global three-dimensional image according to the updated position and the updated viewing angle.
11. A computer readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the preceding claims 1-5.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of the preceding claims 1-5 when executing the program.
CN202310122524.7A 2023-01-19 2023-01-19 Image rendering method, device, medium and equipment under virtual interaction scene Pending CN116112742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310122524.7A CN116112742A (en) 2023-01-19 2023-01-19 Image rendering method, device, medium and equipment under virtual interaction scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310122524.7A CN116112742A (en) 2023-01-19 2023-01-19 Image rendering method, device, medium and equipment under virtual interaction scene

Publications (1)

Publication Number Publication Date
CN116112742A (en) 2023-05-12

Family

ID=86267096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310122524.7A Pending CN116112742A (en) 2023-01-19 2023-01-19 Image rendering method, device, medium and equipment under virtual interaction scene

Country Status (1)

Country Link
CN (1) CN116112742A (en)

Similar Documents

Publication Publication Date Title
US11380064B2 (en) Augmented reality platform
US10127632B1 (en) Display and update of panoramic image montages
US20220249949A1 (en) Method and apparatus for displaying virtual scene, device, and storage medium
KR20220030263A (en) texture mesh building
US20200312029A1 (en) Augmented and virtual reality
EP3857499A1 (en) Panoramic light field capture, processing and display
CN107784090B (en) File sharing method and device and computer readable medium
CN114531553B (en) Method, device, electronic equipment and storage medium for generating special effect video
CN110971974A (en) Configuration parameter creating method, device, terminal and storage medium
CN116977525B (en) Image rendering method and device, storage medium and electronic equipment
WO2024060949A1 (en) Method and apparatus for augmented reality, device, and storage medium
CN116112742A (en) Image rendering method, device, medium and equipment under virtual interaction scene
CN116245051A (en) Simulation software rendering method and device, storage medium and electronic equipment
CN115695634A (en) Wallpaper display method, electronic equipment and storage medium
CN115311397A (en) Method, apparatus, device and storage medium for image rendering
CN112698882A (en) Page component loading method and device
CN110168601B (en) Image correction method and system by analyzing correction mode
CN116596611A (en) Commodity object information display method and electronic equipment
CN115202792A (en) Method, apparatus, device and storage medium for scene switching
CN117132743A (en) Virtual image processing method and device
CN116664786A (en) Method, device and equipment for realizing three-dimensional digital earth based on Unity engine
CN117011442A (en) Material generation method and device, electronic equipment and storage medium
CN115795201A (en) Method, device and equipment for setting and displaying page based on illusion engine
CN116774902A (en) Virtual camera configuration method, device, equipment and storage medium
CN113838163A (en) Region graph drawing plug-in, method and device, electronic equipment, system and medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (country: HK; legal event code: DE; document number: 40089828)