CN117036574A - Rendering method, rendering device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117036574A
CN117036574A (application CN202311014120.2A)
Authority
CN
China
Prior art keywords
image
target
perspective
tile
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311014120.2A
Other languages
Chinese (zh)
Inventor
黄诚 (Huang Cheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311014120.2A priority Critical patent/CN117036574A/en
Publication of CN117036574A publication Critical patent/CN117036574A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Abstract

The disclosure provides a rendering method, a rendering device, an electronic device, a storage medium and a program product. It relates to the technical field of artificial intelligence, in particular to computer vision, augmented reality, virtual reality, rendering and cloud processing, and can be applied to scenes such as the metaverse, digital twins and cloud games. The specific implementation scheme is as follows: determining a tile index from a request, sent by a terminal device, for requesting a far-view rendered image; determining a target far-view rendered image and associated far-view rendered images that match the tile index; and sending the target far-view rendered image and the associated far-view rendered images to the terminal device.

Description

Rendering method, rendering device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to computer vision, augmented reality, virtual reality, rendering and cloud processing, and can be applied to scenes such as the metaverse, digital twins and cloud games. It particularly relates to a rendering method, a rendering device, an electronic device, a storage medium and a program product.
Background
A human-machine interface is the medium and dialogue interface through which information is transferred and exchanged between human and computer, and is an important component of a computer system. The display or touch screen is an important part of the human-machine interface and is used to present information visually.
Advances in rendering technology allow a human-machine interface to present image information as a simulation, making human-computer interaction simpler and more intuitive. Rendering image information both in real time and realistically has become a major challenge for the development of rendering technology.
Disclosure of Invention
The present disclosure provides a rendering method, apparatus, electronic device, storage medium, and program product.
According to an aspect of the present disclosure, there is provided a rendering method including: determining a tile index from a request, sent by a terminal device, for requesting a far-view rendered image; determining a target far-view rendered image and associated far-view rendered images that match the tile index; and sending the target far-view rendered image and the associated far-view rendered images to the terminal device.
According to another aspect of the present disclosure, there is provided a rendering method including: sending a request for a far-view rendered image to a server, wherein the request includes a tile index; receiving the target far-view rendered image and the associated far-view rendered images, sent by the server, that match the tile index; fusing the target far-view rendered image with a near-view rendered image to obtain a target image; and storing the associated far-view rendered images in a target storage space.
According to another aspect of the present disclosure, there is provided a rendering apparatus including: a first determining module, configured to determine a tile index from a request, sent by a terminal device, for requesting a far-view rendered image; a second determining module, configured to determine a target far-view rendered image and associated far-view rendered images that match the tile index; and a first sending module, configured to send the target far-view rendered image and the associated far-view rendered images to the terminal device.
According to another aspect of the present disclosure, there is provided a rendering apparatus including: a second sending module, configured to send a request for a far-view rendered image to a server, where the request includes a tile index; a receiving module, configured to receive the target far-view rendered image and the associated far-view rendered images, sent by the server, that match the tile index; and a first fusion module, configured to fuse the target far-view rendered image with a near-view rendered image to obtain a target image.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods as disclosed herein.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method as disclosed herein.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as disclosed herein.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 schematically illustrates an exemplary system architecture to which rendering methods and apparatus may be applied, according to embodiments of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a rendering method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a schematic diagram of slicing a model to be rendered according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of a verification method according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a flow chart of a rendering method according to another embodiment of the disclosure;
FIG. 6 schematically illustrates a timing diagram of a rendering method according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a block diagram of a rendering device according to an embodiment of the disclosure;
FIG. 8 schematically illustrates a block diagram of a rendering apparatus according to another embodiment of the disclosure; and
fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a rendering method according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In a highly immersive, low-latency metaverse, the real-time presentation of ultra-high-precision scenes is supported by computation-heavy real-time rendering. Real-time cloud rendering has become an important part of the metaverse infrastructure because it can break through the computing-power bottleneck of terminal devices, supplying the computing power that under-powered terminals need to enter the metaverse.
In essence, real-time cloud rendering hands the rendering work that would otherwise be done on the terminal device over to a cloud server. The computed audio and video streams are compressed and transmitted over the network to the terminal device, which decodes and displays the rendering result while sending new user operation instructions back to the cloud server, finally achieving real-time interaction between the user and the virtual world.
However, in the metaverse environment, real-time cloud rendering still suffers from latency, stuttering and insufficient image quality. Physically, latency cannot be avoided entirely, and an unstable network state also makes picture quality unstable; within the whole real-time cloud rendering cycle, every link (computation, network transmission, encoding and decoding, and so on) affects the final user experience.
The present disclosure provides a rendering method, apparatus, electronic device, storage medium, and program product.
According to an embodiment of the present disclosure, there is provided a rendering method including: determining a tile index from a request, sent by a terminal device, for requesting a far-view rendered image; determining a target far-view rendered image and associated far-view rendered images that match the tile index; and sending the target far-view rendered image and the associated far-view rendered images to the terminal device.
With the rendering method provided by the embodiments of the present disclosure, far-view and near-view images can be rendered separately. By combining the computing capability of cloud rendering with the real-time capability of the terminal device, the transmission distance is shortened and the amount of transmitted data is reduced while image quality is preserved, so that image quality and latency are balanced and a better real-time rendering experience is achieved.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and application of users' personal information all comply with relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
In the technical solution of the present disclosure, the user's authorization or consent is obtained before the user's personal information is obtained or collected.
Fig. 1 schematically illustrates an exemplary system architecture to which rendering methods and apparatuses may be applied, according to embodiments of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied, to assist those skilled in the art in understanding the technical content of the present disclosure; it does not mean that embodiments of the present disclosure cannot be used in other devices, systems, environments or scenarios. For example, in another embodiment, an exemplary system architecture to which the rendering method and apparatus may be applied may include a terminal device, and the terminal device may implement the rendering method and apparatus provided by the embodiments of the present disclosure without interacting with a server.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as a knowledge reading class application, a web browser application, a search class application, an instant messaging tool, a mailbox client and/or social platform software, etc. (as examples only).
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for content browsed by the user using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (for example, the target perspective rendered image and the associated perspective rendered image generated according to the user request) to the terminal device.
The server may be a cloud server, also called a cloud computing server or cloud host. As a host product in the cloud computing service system, it overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
In response to receiving the target far-view rendered image and the associated far-view rendered images sent by the server 105, the terminal devices 101, 102, 103 fuse the target far-view rendered image with the near-view rendered image to obtain a target image, and store the associated far-view rendered images in a target storage space.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that the sequence numbers of the respective operations in the following methods are merely representative of the operations for the purpose of description, and should not be construed as representing the order of execution of the respective operations. The method need not be performed in the exact order shown unless explicitly stated.
Fig. 2 schematically illustrates a flow chart of a rendering method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, in response to a request, sent by the terminal device, for requesting a far-view rendered image, a tile index is determined from the request.
In operation S220, a target far-view rendered image and associated far-view rendered images that match the tile index are determined.
In operation S230, the target far-view rendered image and the associated far-view rendered images are sent to the terminal device.
According to an embodiment of the present disclosure, the rendering method shown in fig. 2 may be applied to a server, such as a cloud server.
According to embodiments of the present disclosure, the server may determine a tile index from a request, sent by a terminal device, for requesting a far-view rendered image. The tile index may include an index, generated based on a map tile location, for indexing the far-view rendered image.
According to an embodiment of the present disclosure, in a metaverse or digital twin scenario, when a target object enters a map tile location, the terminal device generates a tile index and sends a request containing the tile index to the server.
According to an embodiment of the present disclosure, the target far-view rendered image matching the tile index may include a rendered scene model image of the map tile location indicated by the tile index.
According to an embodiment of the present disclosure, an associated far-view rendered image matching the tile index may include a rendered scene model image of an associated map tile location that is associated with the map tile location. This association may mean that the tile at the associated map tile location is adjacent to the tile at the map tile location, but is not limited thereto: it may also mean that the two tiles belong to the same scene, or that the distance between the associated map tile location and the map tile location is less than a predetermined threshold.
According to an embodiment of the present disclosure, sending the target far-view rendered image and the associated far-view rendered images to the terminal device may include sending them directly, but is not limited thereto; it may further include sending the target far-view rendered image together with a target label, and the associated far-view rendered images together with association labels, to the terminal device.
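The lookup described above can be sketched as follows. The in-memory store, file names, and 8-neighbour adjacency rule are illustrative assumptions (adjacency being one of the association options the text lists), not the patent's actual implementation.

```python
# Rendered far-view images keyed by (x, y) tile coordinates (hypothetical store).
RENDERED_TILES = {
    (x, y): f"far_view_{x}_{y}.png" for x in range(4) for y in range(4)
}

def lookup_tiles(tile_index):
    """Return (target_image, associated_images) for a tile index.

    The association rule used here is 8-neighbour adjacency: every tile
    directly bordering the target tile contributes an associated image.
    """
    x, y = tile_index
    target = RENDERED_TILES[(x, y)]
    associated = [
        RENDERED_TILES[(x + dx, y + dy)]
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0) and (x + dx, y + dy) in RENDERED_TILES
    ]
    return target, associated
```

With this rule, an interior tile such as (1, 1) yields eight associated images, while a corner tile such as (0, 0) yields only three.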
According to embodiments of the present disclosure, the terminal device invokes rendering hardware, such as a GPU (Graphics Processing Unit), to render the near-view model and associated textures in real time, generating a near-view rendered image. When the terminal device receives the target far-view rendered image sent by the server, it fuses the target far-view rendered image with the near-view rendered image to obtain a target image, and displays the target image on its display interface. The far-view model, which demands high computing power, is rendered by the server, while the near-view model makes full use of the rendering hardware of the terminal device, achieving a high-performance, low-latency device-cloud rendering experience.
According to other embodiments of the present disclosure, the server may directly send the target far-view rendered image to the terminal device, and the terminal device fuses the target far-view rendered image with the near-view rendered image to obtain the target image, which is displayed on the display interface of the terminal device.
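The fusion step can be illustrated with a standard alpha-over composite: the near-view rendered image (RGBA, rendered on-device) is laid over the opaque far-view rendered image from the server. The pixel representation and the choice of the "over" operator are assumptions for illustration; the disclosure does not fix the exact blend, and a real client would do this on the GPU.

```python
def over(near_px, far_px):
    """Composite one RGBA near-view pixel over an RGB far-view pixel."""
    r, g, b, a = near_px
    alpha = a / 255.0
    return tuple(
        round(alpha * n + (1.0 - alpha) * f)
        for n, f in zip((r, g, b), far_px)
    )

def fuse(near_image, far_image):
    """Fuse two equally sized images, given as lists of rows of pixels."""
    return [
        [over(n, f) for n, f in zip(near_row, far_row)]
        for near_row, far_row in zip(near_image, far_image)
    ]
```

A fully opaque near-view pixel replaces the far-view pixel, while a fully transparent one lets the far-view background show through, which is exactly the behaviour the fusion step needs at object silhouettes.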
According to embodiments of the present disclosure, the associated far-view rendered images are sent to the terminal device in advance. When the target far-view rendered image needs to be updated, it can be determined whether an updated far-view rendered image exists among the associated far-view rendered images stored on the terminal device. If one exists, the target far-view rendered image may be updated with it, and the updated far-view rendered image is fused with the near-view rendered image to obtain an updated target image. If no updated far-view rendered image exists among the locally stored associated far-view rendered images, a second request is generated based on the current tile position of the target object and sent to the server.
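The update flow just described amounts to a cache-first lookup; this sketch uses assumed names, with `request_server` standing in for the second request sent to the server.

```python
def update_far_view(new_tile_index, cache, request_server):
    """Return the far-view image for new_tile_index, preferring the cache.

    `cache` maps tile indices to previously received associated far-view
    images; `request_server` is a callable performing the second request.
    """
    if new_tile_index in cache:             # cache hit: no round trip needed
        return cache[new_tile_index], "cache"
    image = request_server(new_tile_index)  # cache miss: ask the server
    cache[new_tile_index] = image           # keep it for future moves
    return image, "server"
```

The cache hit is what saves the interaction: when the target object crosses into an adjacent tile whose image arrived earlier, no network request is made at all.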
According to embodiments of the present disclosure, compared with sending only the target far-view rendered image to the terminal device, sending both the target far-view rendered image and the associated far-view rendered images makes full use of the association between them. Because the associated far-view rendered images are already on the terminal device, an updated target far-view rendered image can be found in local storage when an update is needed, replacing the stale one without another round trip. This reduces the number of interactions, saves data transmission cost while preserving the high-performance, low-latency device-cloud rendering experience, and improves processing efficiency.
According to an embodiment of the present disclosure, the rendering method shown in fig. 2 may be applied to scenes such as the metaverse (Metaverse), digital twin (Digital Twin) and cloud gaming.
According to embodiments of the present disclosure, the metaverse may refer to a conceptual digital world whose virtual worlds may be implemented using Virtual Reality (VR) technology. A virtual digital person, also called a virtual person, may refer to an entity that does not exist in the physical world and may be a virtual product in the metaverse.
The metaverse and/or the virtual digital persons may be rendered in advance on the server, generating far-view rendered images associated with them. The target object may be one of a plurality of virtual digital persons in the metaverse and may be rendered on the terminal device to generate a near-view rendered image. The metaverse, and the virtual digital persons other than the target object, are rendered on the server to generate far-view rendered images. A target far-view rendered image and associated far-view rendered images are determined based on the tile index and sent to the terminal device. On the terminal device, the near-view rendered image and the target far-view rendered image are fused to generate a target image, which is displayed on the device's display interface. As the target object moves, part of the metaverse scene changes with it, and the tile index can be updated based on the target object's position. A new target far-view rendered image is then determined, based on the updated tile index, from the associated far-view rendered images stored on the terminal device; an updated target image is obtained from it and the near-view rendered image, and displayed on the terminal device.
According to embodiments of the present disclosure, a digital twin, also known as a digital mirror or digital mapping, may refer to a digital model in virtual space of a physical product in real space, containing product information across the full life cycle from product conception to product retirement.
The virtual space and/or the digital models may be rendered in advance on the server, generating far-view rendered images associated with them. The target object may be a region of the virtual space or one of a plurality of digital models. In response to a request, sent by the terminal device, for a far-view rendered image of the target object, the server determines a tile index from the request, determines the target far-view rendered image and associated far-view rendered images matching the tile index, and sends them to the terminal device. The terminal device fuses the target far-view rendered image with the near-view rendered image to obtain a target image. Based on the target image and operation data collected from sensors on the physical product, the digital twin is generated and displayed on the terminal device's display interface. When a digital model is to be added, a target far-view rendered image can be determined from the associated far-view rendered images stored on the terminal device based on the tile index of the model to be added; the terminal device fuses the near-view rendered image of that model with the target far-view rendered image to obtain an updated target image, which is displayed on the display interface, improving processing efficiency.
According to embodiments of the present disclosure, a cloud game may refer to a game based on cloud computing and can run on a cloud server. The game scene is rendered in advance on the cloud server to generate far-view rendered images. While a cloud game is running, the target object may be a virtual character in the game; the terminal device sends a request to the cloud server, which returns the target far-view rendered image and associated far-view rendered images based on the tile index. The terminal device fuses the near-view rendered image of the target object with the target far-view rendered image of the virtual space where the object is located to obtain a target image. When the target object's position changes, the target far-view rendered image can be determined directly, based on the tile index, from the associated far-view rendered images stored on the terminal device; the terminal device again fuses the near-view rendered image with the target far-view rendered image to obtain an updated target image, which is displayed on the display interface, improving processing efficiency.
In accordance with an embodiment of the present disclosure, before operation S210 shown in fig. 2, the rendering method may further include: segmenting the model to be rendered to obtain a plurality of sub-models to be rendered; determining far-view sub-models to be rendered from the sub-models to be rendered; and rendering the far-view sub-models to obtain far-view rendered images.
According to an embodiment of the present disclosure, segmenting the model to be rendered to obtain a plurality of sub-models to be rendered may include: segmenting the model to be rendered according to the segmentation scheme of the map tiles to obtain a plurality of sub-models to be rendered.
According to embodiments of the present disclosure, the model to be rendered may refer to a native model configured within the virtual scene, such as a building model, a character model or an object model.
According to an embodiment of the present disclosure, determining far-view sub-models to be rendered from the sub-models to be rendered may include: separating the sub-models to be rendered into near-view sub-models to be rendered and far-view sub-models to be rendered.
According to embodiments of the present disclosure, the data of the near-view sub-models to be rendered may be sent to the terminal device so that the terminal device renders them in real time, while the far-view sub-models to be rendered are rendered by the server.
According to embodiments of the present disclosure, the manner of performing the far/near separation is not limited; for example, a line-of-sight calibration method, a static calibration method or a complex-model-priority calibration method may be used, as long as the models suitable for rendering on the server can be identified among the sub-models to be rendered.
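The simplest calibration of this kind, a distance threshold from the viewpoint, can be sketched as follows; the sub-model representation, the positions, and the threshold are all illustrative assumptions rather than any method the disclosure prescribes.

```python
import math

def split_near_far(sub_models, viewpoint, threshold):
    """Partition sub-models into near-view (device) and far-view (server).

    Each sub-model is a (name, (x, y, z)) pair; its distance from the
    viewpoint decides where it is rendered.
    """
    near, far = [], []
    for name, pos in sub_models:
        dist = math.dist(pos, viewpoint)
        (near if dist < threshold else far).append(name)
    return near, far
```

A line-of-sight or complex-model-priority calibration would replace the distance test with a visibility check or a triangle-count heuristic, but the partition interface stays the same.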
According to other embodiments of the present disclosure, before operation S210 shown in fig. 2, the rendering method may further include: determining the far-view model to be rendered from the model to be rendered; segmenting the far-view model to be rendered according to the segmentation scheme of the map tiles to obtain a plurality of far-view sub-models to be rendered; and rendering the far-view sub-models to obtain far-view rendered images.
According to embodiments of the present disclosure, compared with segmenting only the far-view model to be rendered, segmenting the whole model to be rendered applies the same scheme to both the near-view and far-view models, so the segmentation effect is better and the subsequent fusion looks more realistic, avoiding the poor fusion caused by differing segmentation schemes or standards.
According to an embodiment of the present disclosure, segmenting the model to be rendered according to the segmentation manner of map tiles to obtain a plurality of sub-models to be rendered may include: determining the segmentation manner of the map tiles through a plurality of segmentation parameters, and segmenting the model to be rendered into multiple tiles. The segmentation parameters may include, for example, the type of geographic coordinate system used, the number of segmentation levels, the segmentation scope, the number of rows of segmented tiles per level, the pixel size of each tile, and so on. But this is not limiting: the segmentation parameters may also include geographic information such as altitude and grade.
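To make the role of these segmentation parameters concrete, the following is a minimal illustrative sketch; all names and default values are assumptions for illustration, not taken from the disclosure.

```python
from dataclasses import dataclass, field


@dataclass
class TileSegmentationParams:
    """Illustrative grouping of the segmentation parameters listed above."""
    coordinate_system: str = "EPSG:3857"            # geographic coordinate system type (assumed value)
    num_levels: int = 5                             # number of segmentation levels
    bounds: tuple = (-180.0, -90.0, 180.0, 90.0)    # segmentation scope (lon/lat, assumed)
    tile_pixels: int = 256                          # pixel size of each (square) tile
    extra_geo_info: dict = field(default_factory=dict)  # e.g. altitude, grade

    def tiles_per_side(self, level: int) -> int:
        # Each level doubles the tile count along each axis (quadtree scheme).
        return 2 ** level


params = TileSegmentationParams(num_levels=3)
```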
Fig. 3 schematically illustrates the segmentation of a model to be rendered according to an embodiment of the disclosure.
As shown in diagram 310 of fig. 3, the segmentation may be performed in the manner of a map tile pyramid model. The tile map pyramid model is effectively a quadtree structure model. With the zoom level n counted from 0, the number of nodes at level n of the pyramid is 2^(2n). Level 0 (node 311) has 2^(2×0) = 1 node, which can be regarded as the root node. Level 1 (node 312) has 2^(2×1) = 4 nodes. Level 2 (node 313) has 2^(2×2) = 16 nodes, and so on, up to the maximum zoom level N-1.
As shown in diagram 320 of fig. 3, the maximum zoom level number N may be determined, and the map picture with the lowest zoom level and the largest map scale may be used as the bottom layer of the pyramid, i.e., level 0. The number of pixels at the current level is determined according to the span, and the picture is partitioned accordingly. For example, a level 0 tile matrix 321 may be formed by splitting the map picture, starting from its top left corner, from left to right and top to bottom, into square map tiles of the same size (e.g., 256×256 pixels).
On the basis of the level 0 map picture, a level 1 map picture is generated by merging every 2×2 pixels into one pixel, and is then split into square map tiles of the same size to form a level 1 tile matrix 322.
The same method is used to generate a level 2 tile matrix 323, and so on until level N-1, forming the tile map pyramid model.
According to the embodiment of the disclosure, the above manners all divide the map regularly according to map scale and display resolution. Assuming that one tile picture can display the entire area at level 0, each tile picture displays 1/4 of the area at level 1 and 1/16 of the area at level 2.
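The pyramid arithmetic above can be sketched as follows; the helper functions are illustrative, not part of the disclosure.

```python
def nodes_at_level(n: int) -> int:
    """Number of tiles at pyramid level n (zoom level counted from 0): 2^(2n)."""
    return 2 ** (2 * n)


def area_fraction(n: int) -> float:
    """Fraction of the full area shown by one tile at level n, assuming a
    single level-0 tile displays the entire area."""
    return 1.0 / nodes_at_level(n)
```

For example, level 2 has 16 tiles, each showing 1/16 of the area, matching the text above.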
According to an embodiment of the present disclosure, rendering the perspective sub-model to be rendered to obtain a perspective rendered image may include: rendering the perspective sub-model to be rendered and attaching a tile identification, to obtain the perspective rendered image.
According to embodiments of the present disclosure, the tile identification may include an identity (Identity Document, ID) of the tile; the type of the tile identification is not limited as long as it can uniquely identify the tile.
According to embodiments of the present disclosure, the format of the tile identification may be consistent with that of the tile index. For example, the tile identification may sequentially include: scene information, the tile hierarchy, the row coordinate of the tile, and the column coordinate of the tile, with the tile index including the same fields in the same order. But this is not limiting: the tile identification may instead sequentially include scene information, the geographic coordinate system type, the tile hierarchy, the row coordinate of the tile, and the column coordinate of the tile, with the tile index again including the same fields in the same order.
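A tile identification with the first field ordering above could be modeled as follows; the class name, field names, and string key format are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TileIdentifier:
    """Fields in the order given above; names are illustrative."""
    scene: str   # scene information
    level: int   # tile hierarchy (zoom level)
    row: int     # row coordinate of the tile
    col: int     # column coordinate of the tile

    def as_key(self) -> str:
        # A flat string form usable both as tile identification and tile index.
        return f"{self.scene}/{self.level}/{self.row}/{self.col}"
```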
According to an embodiment of the present disclosure, the target perspective rendered image determined based on the tile index may be verified against the tile index and the tile identification: if the tile identification of the target perspective rendered image is determined to match the tile index, the target perspective rendered image determined based on the tile index is determined to be a correct rendered image.
According to an embodiment of the present disclosure, the rendering method may further include the operations of: a cluster is determined based on the plurality of perspective rendered images of the same scene.
According to embodiments of the present disclosure, a scene may be an area divided according to an environment range, an environment type, and the like. For example, a scene may refer to a game scene, a concert hall scene, an outdoor scene, and so on. Different rooms in a game scene may also be referred to as different scenes.
According to embodiments of the present disclosure, multiple perspective rendered images of the same scene may refer to multiple perspective rendered images in one-to-one correspondence with multiple areas of the same scene, but is not limited thereto. It may also refer to multiple perspective rendered images in one-to-one correspondence with multiple different moments in the same scene, such as a video stream composed of the perspective rendered images at the respective moments.
According to other embodiments of the present disclosure, the rendering method may further include the operations of: a cluster is determined based on a plurality of perspective rendered images of different scenes.
According to an embodiment of the present disclosure, among the plurality of perspective rendered images of different scenes, each may be adjacent to at least one other perspective rendered image of the plurality; but this is not limiting, and each may instead be spaced apart from at least one other perspective rendered image of the plurality by less than or equal to a predetermined spacing threshold.
According to the embodiment of the disclosure, taking a plurality of perspective rendered images as one cluster can reflect reality: because a target object in the scene, such as a character model, moves, perspective rendered images that are geographically related have relevance and can reasonably be combined. Treating them as one cluster makes it convenient to determine, from the cluster, the associated perspective rendered images related to the target perspective rendered image, improving processing efficiency.
In accordance with an embodiment of the present disclosure, before performing operation S210 as shown in fig. 2, the rendering method may further include the operations of: and determining a storage path of the cluster based on the tile identification of the perspective rendered image. And storing the cluster according to the storage path.
According to embodiments of the present disclosure, the multiple perspective rendered images in a cluster may be stored in multiple storage subspaces of the same storage space. In this case, determining the storage path of the cluster based on the tile identifications of the perspective rendered images may include: mapping the tile identifications to storage paths one by one to generate a mapping relation, and determining the storage paths of the multiple perspective rendered images in the cluster based on their tile identifications. But this is not limiting: the multiple perspective rendered images in a cluster may also be stored in the same storage space, in which case determining the storage path may include mapping the tile identifications to one storage path to generate a mapping relation, and determining, based on the tile identifications, the same storage path for all perspective rendered images in the cluster.
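The one-to-one mapping between tile identifications and storage paths (one storage subspace per tile) might be sketched as follows; the path layout and `.png` suffix are assumptions for illustration.

```python
def storage_path(root: str, tile_id: str) -> str:
    """Map a tile identification such as 'scene/level/row/col' to a
    storage sub-path under a common root (one subspace per tile)."""
    return "/".join([root.rstrip("/"), *tile_id.split("/")]) + ".png"


# Mapping relation generated for the perspective rendered images in one cluster.
mapping = {tid: storage_path("/cdn/far-view", tid)
           for tid in ["hall/2/0/1", "hall/2/0/2"]}
```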
According to the embodiment of the disclosure, the perspective sub-model to be rendered can be rendered in the server to obtain a perspective rendered image, and the cluster comprising a plurality of perspective rendered images is stored at an edge CDN (Content Delivery Network).
According to the embodiment of the disclosure, determining the storage path of the cluster based on the tile identifications of the perspective rendered images and storing the cluster according to the storage path allows the storage space to be reasonably divided, making the storage of perspective rendered images more orderly and in turn improving the search efficiency and search accuracy based on the tile index.
According to an embodiment of the present disclosure, for operation S220 as shown in fig. 2, determining the target perspective rendered image and the associated perspective rendered image that match the tile index may include the following operations.
For example, a target perspective rendered image that matches the tile index is determined. And determining a cluster of the target perspective rendering image. Based on the cluster, an associated perspective rendered image is determined.
According to an embodiment of the present disclosure, the scene in which the cluster is located is the same as the scene in which the target perspective rendered image is located. Determining a cluster of target perspective rendered images may include: and determining a cluster of the target perspective rendering image based on scene identification information in the target perspective rendering image.
According to embodiments of the present disclosure, scene identification information may be added in advance in tile identifications. And determining a cluster of the target perspective rendering image based on scene identification information of tile identifications of the target perspective rendering image.
According to an embodiment of the present disclosure, the target perspective rendered image is determined based on the tile index, a cluster is determined based on the target perspective rendered image, and the associated perspective rendered image is determined from the cluster. In this way, the cluster can be used to narrow the screening range for the associated perspective rendered image, improving processing efficiency.
According to an embodiment of the present disclosure, the request for requesting the perspective rendering of the image in operation S210 as shown in fig. 2 may further include an indexing algorithm identification.
According to embodiments of the present disclosure, determining a target perspective rendered image that matches the tile index may include the following operations.
For example, a target indexing algorithm is determined based on the indexing algorithm identification. And determining a storage path of the target perspective rendered image based on the target index algorithm and the tile index. Based on the stored path, a target perspective rendered image is determined.
According to embodiments of the present disclosure, the indexing algorithm identification may be added to the request, but is not limited thereto; it may also be added to the tile index. The target indexing algorithm may be a storage location algorithm for determining a storage path. For example, a general tile path setting method built into the server may be employed as the indexing algorithm; such an algorithm is generally applicable when a general tile segmentation tool is used to segment the perspective model to be rendered into a plurality of perspective sub-models to be rendered. But this is not limiting: a user-defined storage location algorithm may also be employed as the indexing algorithm, applicable when a user-defined tile segmentation tool is used to segment the perspective model to be rendered into a plurality of perspective sub-models to be rendered.
According to an embodiment of the present disclosure, determining a storage path for a target perspective rendered image based on a target index algorithm and a tile index may include: and converting the tile index by using a target index algorithm to generate a tile index matched with the tile identifier.
According to an embodiment of the present disclosure, converting a tile index using a target index algorithm to generate a tile index matching a tile identification may include: a determination is made as to whether the tile index matches the format type of the tile identification. And under the condition that the format type of the tile index is not matched with the format type of the tile identifier, converting the tile index by utilizing a target index algorithm to generate the tile index matched with the tile identifier. In the event that a format type of the tile index is determined to match a format type of the tile identifier, the tile index is taken as the tile index that matches the tile identifier.
For example, the tile index sequentially includes: scene information, tile hierarchy, row coordinates of tiles, and column coordinates of tiles. The tile identification comprises the following components in sequence: tile hierarchy, scene information, row coordinates of tiles, and column coordinates of tiles. The parameters in the tile index may be sequentially adjusted using a target index algorithm so that the ordering of the parameters is the same as in the tile identification. Obtaining a tile index that matches the tile identification includes: tile hierarchy, scene information, row coordinates of tiles, and column coordinates of tiles.
Also for example, the tile index sequentially includes: the tile hierarchy, scene information, a second type of row coordinates of the tile, and a second type of column coordinates of the tile. The tile identification comprises the following components in sequence: the tile hierarchy, scene information, first type row coordinates of the tile, and first type column coordinates of the tile. The parameters in the tile index may be format type converted using a target index algorithm so that the format type is the same as the parameters in the tile identity. Obtaining a tile index that matches the tile identification includes: the tile hierarchy, scene information, first type row coordinates of the tile, and first type column coordinates of the tile.
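The parameter reordering in the first example above can be sketched as follows; representing the tile index as a dictionary is an illustrative assumption.

```python
def convert_index(index: dict, target_order: list) -> list:
    """Reorder the parameters of a tile index so that their ordering
    matches the format of the tile identification."""
    return [index[name] for name in target_order]


tile_index = {"scene": "hall", "level": 3, "row": 5, "col": 7}
# Identification format: tile hierarchy first, then scene information,
# then row and column coordinates of the tile.
matched = convert_index(tile_index, ["level", "scene", "row", "col"])
```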
According to an embodiment of the present disclosure, a target tile identity is determined from a plurality of tile identities based on a tile index that matches the tile identity. And determining a storage path of the target perspective rendered image based on the target tile identification and the mapping relation. The mapping relation is pre-established in the process of generating the perspective rendering image. The mapping relationship characterizes a mapping relationship between tile identifications and storage paths.
According to embodiments of the present disclosure, the storage path of the target perspective rendered image may refer to a path that indexes the server storage space in which the target perspective rendered image is located. Based on the storage path, the server storage space of the target perspective rendered image is determined, and the target perspective rendered image is determined from that storage space based on the tile index.
According to the embodiment of the disclosure, the index method can be used to quickly and flexibly determine the target perspective rendered image. Meanwhile, converting the tile index with the target index algorithm broadens the applicability of the tile index, so that different types of terminal devices can be supported and user experience is improved.
According to an embodiment of the present disclosure, determining an associated perspective rendered image based on a cluster includes: and taking the distant view rendering image adjacent to the target distant view rendering image position in the cluster as the associated distant view rendering image.
According to an embodiment of the present disclosure, a perspective rendered image adjacent to a target perspective rendered image position in a cluster may refer to: a perspective rendered image adjacent to the target perspective rendered image location and/or a perspective rendered image spaced from the target perspective rendered image location by less than a predetermined spacing threshold.
According to an embodiment of the present disclosure, determining the associated perspective rendered image based on the cluster may further include: taking a plurality of perspective rendered images in the cluster that are at the same position as the target perspective rendered image but at different moments as the associated perspective rendered images. Different moments may refer to different future moments.
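Selecting associated images by positional adjacency, as described above, might be sketched like this; tiles are represented by bare (row, column) pairs for illustration, and the adjacency rule (Chebyshev distance within a spacing threshold) is an assumption.

```python
def associated_images(cluster, target, max_gap=1):
    """Select far-view rendered images whose tile positions are adjacent to
    (or within `max_gap` tiles of) the target image's position.
    Items are (row, col) tuples; real entries would carry image data."""
    tr, tc = target
    return [p for p in cluster
            if p != target and max(abs(p[0] - tr), abs(p[1] - tc)) <= max_gap]


cluster = [(0, 0), (0, 1), (1, 1), (3, 3)]
```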
According to the embodiment of the disclosure, the associated distant view rendering image associated with the target distant view rendering image can be determined from the cluster, the screening range is reduced through the cluster, and the screening efficiency and the determining precision of the associated distant view rendering image are improved.
According to an embodiment of the present disclosure, for operation S210 as shown in fig. 2, determining the tile index from the request may include the following operations: the security of the request is verified.
Fig. 4 schematically illustrates a flow chart of a verification method according to an embodiment of the disclosure.
As shown in fig. 4, the method 410 includes operations S411 to S414.
In operation S411, an initial tile index is determined from the request.
In operation S412, it is determined whether the initial tile index satisfies a predetermined model division manner. In the case where it is determined that the initial tile index satisfies the predetermined model division manner, operation S413 is performed, and in the case where it is determined that the initial tile index does not satisfy the predetermined model division manner, operation S414 is performed.
In operation S413, the initial tile index is taken as the tile index.
In operation S414, the operation is stopped.
According to an embodiment of the present disclosure, the initial tile index includes a tile hierarchy and tile coordinates. But is not limited to, the initial tile index may also include tile hierarchy, scene information, and tile coordinates.
According to an embodiment of the present disclosure, for operation S412 as shown in fig. 4, determining whether the initial tile index satisfies the predetermined model partitioning manner may include: a tile coordinate range is determined that matches the tile hierarchy. Based on the tile coordinates and the tile coordinate ranges, it is determined whether the initial tile index satisfies a predetermined model partitioning manner.
According to the embodiment of the disclosure, the predetermined model dividing manner may include a splitting manner of the map tile, but is not limited thereto, and may be combined with other fine tuning manners of the map tile based on the splitting manner including the map tile. As long as the tile coordinate range of each map tile can be determined based on a predetermined model division manner. Taking a square map tile as an example, the tile coordinate range may include tile coordinates for each of the opposing two vertices, e.g., upper left vertex A and lower right vertex B. Such as row and column coordinates of vertex a and row and column coordinates of vertex B.
According to an embodiment of the present disclosure, determining whether the initial tile index satisfies the predetermined model partitioning manner based on the tile coordinates and the tile coordinate range may include: determining whether the tile coordinates are within the tile coordinate range. In the case that the tile coordinates are determined to be within the tile coordinate range, it is determined that the initial tile index satisfies the predetermined model partitioning manner. In the case that the tile coordinates are determined to be outside the tile coordinate range, it is determined that the initial tile index does not satisfy the predetermined model partitioning manner.
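Assuming the quadtree segmentation described earlier (2^n rows and 2^n columns at level n), the bounds check in this verification step can be sketched as:

```python
def satisfies_partitioning(level: int, row: int, col: int) -> bool:
    """Check an initial tile index against the tile coordinate range of its
    level: valid row/column coordinates at level n are 0 .. 2^n - 1.
    (The quadtree range is an assumption matching the pyramid model above.)"""
    max_coord = 2 ** level - 1
    return 0 <= row <= max_coord and 0 <= col <= max_coord
```

If the check fails, the operation is stopped (operation S414); otherwise the initial tile index is used as the tile index (operation S413).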
According to the embodiment of the disclosure, verifying the request based on the tile index makes it possible to determine whether the request satisfies the predetermined model division manner, i.e., whether the request is out of range, thereby improving data security and confidentiality during interaction.
According to an embodiment of the present disclosure, for operation S230 as shown in fig. 2, transmitting the target perspective rendered image and the associated perspective rendered image to the terminal device may include the following operations.
For example, the target perspective rendered image is associated with a target tag that identifies the target image. The associated perspective rendered image is associated with an associated tag that identifies the associated image. And sending the target distant view rendering image associated with the target tag and the associated distant view rendering image associated with the associated tag to the terminal device.
According to the embodiment of the present disclosure, the identification types of the target tag and the associated tag are not limited as long as the target tag can be used to identify the target perspective rendered image as a target image, and the associated tag can be used to identify the associated perspective rendered image as an associated image.
According to an embodiment of the present disclosure, the target perspective rendered image is associated with a target tag for identifying the target image, and the associated perspective rendered image is associated with an associated tag for identifying the associated image. In the case that the terminal device receives the target perspective rendered image and the associated perspective rendered image, the multiple received images can thus be identified quickly and accurately, so as to obtain the target perspective rendered image and the associated perspective rendered image.
Fig. 5 schematically illustrates a flow chart of a rendering method according to another embodiment of the present disclosure.
As shown in FIG. 5, the method includes operations S510-S530.
In operation S510, a request for requesting a perspective rendered image is transmitted to a server. The request includes a tile index.
In operation S520, the target perspective rendered image and the associated perspective rendered image that match the tile index, transmitted by the server, are received.
In operation S530, the target far view rendered image and the near view rendered image are fused to obtain a target image.
According to an embodiment of the present disclosure, the rendering method as shown in fig. 5 may be applied to a terminal device.
According to an embodiment of the present disclosure, receiving the target perspective rendered image and the associated perspective rendered image that match the tile index, transmitted by the server, may include: receiving the target perspective rendered image and the associated perspective rendered image sent by the server. But this is not limiting: it may also include receiving the target perspective rendered image associated with the target tag and the associated perspective rendered image associated with the associated tag, sent by the server.
According to the embodiment of the disclosure, receiving the target perspective rendered image associated with the target tag and the associated perspective rendered image associated with the associated tag, sent by the server, enables the terminal device to quickly and accurately identify the multiple received images, so as to obtain the target perspective rendered image and the associated perspective rendered image.
According to embodiments of the present disclosure, the terminal device may invoke rendering hardware, such as a GPU (Graphics Processing Unit, graphics processor), to render the near-view model and associated texture in real-time, generating a near-view rendered image. Under the condition that the terminal equipment receives the target far-view rendering image sent by the server, the target far-view rendering image and the near-view rendering image are fused to obtain a target image. And displaying the target image on a display interface of the terminal device. The remote scene model with high computing power demand is rendered by the server, and the near scene model is rendered by fully utilizing rendering hardware of the terminal equipment, so that the end cloud rendering experience with high performance and low delay is realized.
According to an embodiment of the present disclosure, after operation S530 as shown in fig. 5, the rendering method may further include operation S540 of storing the associated perspective rendered image in the target storage space.
According to other embodiments of the present disclosure, the server may directly send the target far-view rendered image to the terminal device, where the terminal device fuses the target far-view rendered image and the near-view rendered image to obtain the target image. And displaying the target image on a display interface of the terminal device.
Compared with the manner in which the terminal device directly receives only the target perspective rendered image, receiving both the target perspective rendered image and the associated perspective rendered image makes it possible, when the target perspective rendered image needs to be updated, to determine whether an updated perspective rendered image exists among the associated perspective rendered images stored in the target storage space of the terminal device. In the case that an updated perspective rendered image is determined to exist among the associated perspective rendered images, the target perspective rendered image may be updated with it, and the updated perspective rendered image and the near-view rendered image are fused to obtain an updated target image. In the case that no updated perspective rendered image is determined to exist among the associated perspective rendered images stored in the terminal device, a second request is generated based on the current tile position and sent to the server.
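The terminal-side fallback logic above (reuse an updated associated image from the target storage space, otherwise issue a second request) might be sketched as follows; the version-based cache structure is an illustrative assumption.

```python
def pick_far_view(cache: dict, tile_key: str, target_version: int):
    """Terminal-side lookup: reuse an associated far-view rendered image from
    the target storage space if a sufficiently new one exists; otherwise
    return None to signal that a second request to the server is needed.
    Cache entries are (version, image) pairs; structure is illustrative."""
    entry = cache.get(tile_key)
    if entry is not None and entry[0] >= target_version:
        return entry[1]   # updated image found locally
    return None           # caller generates a second request


cache = {"hall/2/0/1": (5, "img-v5")}
```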
According to the embodiment of the disclosure, the target far-view rendering image and the near-view rendering image are fused to obtain the target image, so that the association relationship between the target far-view rendering image and the near-view rendering image can be fully utilized, the cloud rendering experience of the terminal with high performance and low delay is ensured, the data transmission cost is saved, and the processing efficiency is improved.
In accordance with an embodiment of the present disclosure, before operation S510 as shown in fig. 5, the rendering method may further include the operations of: in response to an instruction to render a target location, a determination is made from the target storage space whether a target perspective rendered image is present based on the tile index. In the event that it is determined that the target perspective rendered image does not exist in the target storage space, a request for requesting the perspective rendered image is generated based on the tile index.
According to the embodiments of the present disclosure, a target storage space is set in the terminal device, and the historical associated perspective rendered images associated with historical target perspective rendered images are stored in the target storage space as alternative images. In the case that an instruction for rendering the target position, for example one issued by a user, is obtained, it can be determined in advance whether the target perspective rendered image exists in the target storage space. In the case that the target perspective rendered image is determined to exist in the target storage space, the target perspective rendered image is acquired from the target storage space, and the target perspective rendered image and the near-view rendered image are fused to obtain the target image. In the case that the target perspective rendered image is determined not to exist in the target storage space, a request for requesting the perspective rendered image is generated based on the tile index.
By utilizing the rendering method provided by the embodiment of the disclosure, the image rendering efficiency can be improved, the delay is reduced, meanwhile, the hardware equipment of the terminal equipment is reasonably utilized, the interaction performance is improved, and the interaction cost is reduced.
In accordance with an embodiment of the present disclosure, before operation S510 as shown in fig. 5, the rendering method may further include the operations of: and performing coordinate conversion on the target position to generate a tile index. Based on the tile index, a request is generated for requesting a perspective rendered image. But is not limited thereto. A request for requesting a perspective rendered image may also be generated based on the tile index and the index positioning algorithm.
According to embodiments of the present disclosure, a target location may refer to a location in a scene where a target object, such as a person's close-up model, is located.
According to the embodiment of the disclosure, performing coordinate conversion on the target position means converting the coordinate expression format of the target position into the coordinate expression format of the map tile position, so that the generated tile index is consistent with the expression of the tile identification, is easy to query and match, and improves the accuracy and efficiency of determining the target perspective rendered image based on the tile index.
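As one common concrete instance of such a coordinate conversion (the disclosure does not fix a particular scheme), the Web Mercator tiling formula maps a longitude/latitude target position to tile row and column coordinates at a given level:

```python
import math


def lonlat_to_tile(lon: float, lat: float, level: int):
    """Convert a geographic target position to map-tile (row, col) coordinates
    at a given zoom level, using the common Web Mercator tiling formula
    as an illustration."""
    n = 2 ** level
    col = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    row = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return row, col
```

The resulting (level, row, col) triple, together with scene information, then forms a tile index in the format discussed earlier.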
According to the embodiment of the disclosure, the tile index is added in the request or the tile index and the index positioning algorithm are added in the request, so that the interaction information of the terminal equipment is sufficient and rich in the process of interacting with the server, the interaction response rate is further improved, and the processing efficiency and the user experience are improved.
Fig. 6 schematically illustrates a timing diagram of a rendering method according to an embodiment of the present disclosure.
As shown in fig. 6, the method includes operations S601 to S614.
In operation S601, the server segments the model to be rendered to obtain a plurality of sub-models to be rendered.
In operation S602, the server separates the sub-models to be rendered, and determines a perspective sub-model to be rendered from among the sub-models to be rendered.
In operation S603, the server performs tile identification on the perspective sub-model to be rendered.
In operation S604, the server renders the perspective sub-model to be rendered, resulting in a perspective rendered image.
In operation S605, a cluster is stored according to a storage path.
In operation S606, the terminal device transmits a request for requesting a target location, e.g., a current location, to the server.
In operation S607, the terminal device receives the tile index of the current location transmitted by the server.
In operation S608, the terminal device generates a request for requesting a storage path based on the tile index.
In operation S609, the terminal device transmits a request for requesting a storage path to the server.
In operation S610, the terminal device receives a storage path transmitted by the server.
In operation S611, the terminal device generates a request for requesting a perspective rendering image based on the storage path.
In operation S612, the terminal device transmits a request for requesting a perspective rendered image to the server.
In operation S613, the terminal device receives the target perspective rendered image and the associated perspective rendered image transmitted by the server, for example as video stream data.
In operation S614, the terminal device fuses the target perspective rendered image and a near-view rendered image to obtain a target image.
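One plausible reading of the fusion in operation S614 is an alpha blend in which the perspective (far-view) rendered image serves as background and the near-view image, presumably rendered on the terminal, is composited over it. A minimal per-pixel sketch, with hypothetical names and flat pixel lists standing in for real image buffers:

```python
def fuse_images(far_pixels, near_pixels, near_alpha):
    """Fuse a far-view rendered image with a near-view rendered image.

    far_pixels / near_pixels: lists of (r, g, b) tuples, same length.
    near_alpha: list of floats in [0, 1]; 1.0 means fully near-view.
    The far view is the background; where the near view is opaque it
    replaces the background.
    """
    fused = []
    for (fr, fg, fb), (nr, ng, nb), a in zip(far_pixels, near_pixels, near_alpha):
        fused.append((
            round(a * nr + (1 - a) * fr),
            round(a * ng + (1 - a) * fg),
            round(a * nb + (1 - a) * fb),
        ))
    return fused
```

The same blend would run per frame when the far-view images arrive as video stream data, with the near view re-rendered locally each frame.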
Fig. 7 schematically illustrates a block diagram of a rendering apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the rendering apparatus 700 includes a first determination module 710, a second determination module 720, and a first transmission module 730.
A first determining module 710 is configured to determine a tile index from a request, sent by a terminal device, for requesting a perspective rendered image. In an embodiment, the first determining module 710 may be configured to perform operation S210 described above, and details are not repeated here.
A second determining module 720 is configured to determine a target perspective rendered image and an associated perspective rendered image that match the tile index. In an embodiment, the second determining module 720 may be configured to perform operation S220 described above, and details are not repeated here.
A first sending module 730 is configured to send the target perspective rendered image and the associated perspective rendered image to the terminal device. In an embodiment, the first sending module 730 may be configured to perform operation S230 described above, and details are not repeated here.
According to an embodiment of the present disclosure, the second determination module 720 includes a first determination sub-module, a second determination sub-module, and a third determination sub-module.
The first determining sub-module is configured to determine a target perspective rendered image matching the tile index.
The second determining sub-module is configured to determine a cluster of the target perspective rendered image, where the scene of the cluster is the same as the scene of the target perspective rendered image.
The third determining sub-module is configured to determine the associated perspective rendered image based on the cluster.
According to an embodiment of the present disclosure, the request further includes an indexing algorithm identification.
According to an embodiment of the present disclosure, the first determination submodule includes a first determination unit, a second determination unit, and a third determination unit.
The first determining unit is configured to determine a target index algorithm based on the index algorithm identification.
The second determining unit is configured to determine a storage path of the target perspective rendered image based on the target index algorithm and the tile index.
The third determining unit is configured to determine the target perspective rendered image based on the storage path.
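The interplay of the index algorithm identification, the tile index, and the storage path can be illustrated as follows. The two layouts ("zxy" and "quadkey"), the registry, and the path templates are assumptions for the sketch, not formats mandated by the disclosure.

```python
def zxy_path(zoom: int, x: int, y: int) -> str:
    """Plain z/x/y directory layout, e.g. far_views/3/5/2.png."""
    return f"far_views/{zoom}/{x}/{y}.png"

def quadkey_path(zoom: int, x: int, y: int) -> str:
    """Bing-style quadkey layout: interleave the x/y bits level by level."""
    digits = []
    for i in range(zoom, 0, -1):
        mask = 1 << (i - 1)
        digit = (1 if x & mask else 0) + (2 if y & mask else 0)
        digits.append(str(digit))
    return "far_views/" + "".join(digits) + ".png"

# Hypothetical registry: the index algorithm identification carried in the
# request selects which layout resolves the tile index to a storage path.
INDEX_ALGORITHMS = {"zxy": zxy_path, "quadkey": quadkey_path}

def storage_path(algorithm_id: str, tile_index: tuple[int, int, int]) -> str:
    return INDEX_ALGORITHMS[algorithm_id](*tile_index)
```

Once the storage path is resolved, the server loads the target perspective rendered image from that location.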
According to an embodiment of the present disclosure, the third determination submodule includes a fourth determination unit.
The fourth determining unit is configured to take a rendered image in the cluster that is adjacent in position to the target perspective rendered image as the associated perspective rendered image.
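Position adjacency between tiles at the same hierarchy level can be sketched as the eight-neighbourhood of the target tile, clamped to the valid coordinate range. This is one plausible reading of "adjacent in position"; the function name is hypothetical.

```python
def adjacent_tile_indices(zoom: int, x: int, y: int) -> list[tuple[int, int, int]]:
    """Return the tile indices of the up-to-eight neighbours of (zoom, x, y),
    keeping only coordinates inside the [0, 2**zoom) range of the level."""
    n = 2 ** zoom
    neighbours = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue  # skip the target tile itself
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:
                neighbours.append((zoom, nx, ny))
    return neighbours
```

Rendered images in the cluster whose tile identifiers appear in this neighbour set would then be returned as the associated perspective rendered images.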
According to an embodiment of the present disclosure, the second determining module 720 further includes a segmentation sub-module, a rendering sub-module, and a fourth determining sub-module.
The segmentation sub-module is configured to segment the perspective model to be rendered according to the partitioning manner of the map tiles, to obtain a plurality of perspective sub-models to be rendered.
The rendering sub-module is configured to render the perspective sub-models to be rendered and identify tile identifiers, to obtain perspective rendered images.
The fourth determining sub-module is configured to determine a cluster based on a plurality of perspective rendered images having the same scene.
According to an embodiment of the present disclosure, the second determination module 720 further includes a fifth determination sub-module and a storage sub-module.
The fifth determining sub-module is configured to determine a storage path of the cluster based on the tile identifier of the perspective rendered image.
The storage sub-module is configured to store the cluster according to the storage path.
According to an embodiment of the present disclosure, the first transmitting module 730 includes a first association sub-module, a second association sub-module, and a transmitting sub-module.
The first association sub-module is configured to associate the target perspective rendered image with a target tag for identifying the target image.
The second association sub-module is configured to associate the associated perspective rendered image with an association tag for identifying the associated image.
The sending sub-module is configured to send, to the terminal device, the target perspective rendered image associated with the target tag and the associated perspective rendered image associated with the association tag.
According to an embodiment of the present disclosure, the first determination module 710 includes a sixth determination sub-module and a seventh determination sub-module.
The sixth determining sub-module is configured to determine an initial tile index from the request.
The seventh determining sub-module is configured to take the initial tile index as the tile index in a case that the initial tile index satisfies a predetermined model partitioning manner.
According to an embodiment of the present disclosure, the initial tile index includes a tile hierarchy and tile coordinates.
According to an embodiment of the present disclosure, the rendering apparatus 700 further includes a third determination module and a fourth determination module.
The third determining module is configured to determine a tile coordinate range matching the tile hierarchy.
The fourth determining module is configured to determine, based on the tile coordinates and the tile coordinate range, whether the initial tile index satisfies the predetermined model partitioning manner.
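Under the assumption of a power-of-two quadtree partitioning (which the map-tile split suggests but the disclosure does not mandate), the check of the initial tile index against the tile coordinate range reduces to a bounds test; names are illustrative.

```python
def tile_index_is_valid(tile_level: int, tile_x: int, tile_y: int) -> bool:
    """Check that an initial tile index obeys the predetermined model
    partitioning: at hierarchy level z, each coordinate must lie in
    [0, 2**z). Assumes a power-of-two quadtree split per level."""
    if tile_level < 0:
        return False
    limit = 2 ** tile_level  # tile coordinate range matching the hierarchy
    return 0 <= tile_x < limit and 0 <= tile_y < limit
```

Only an initial tile index passing this check would be adopted as the tile index used for the subsequent lookup.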
Fig. 8 schematically illustrates a block diagram of a rendering apparatus according to another embodiment of the present disclosure.
As shown in fig. 8, the rendering apparatus 800 of this embodiment includes a second transmitting module 810, a receiving module 820, and a first fusing module 830.
A second sending module 810 is configured to send, to the server, a request for requesting a perspective rendered image, where the request includes a tile index. In an embodiment, the second sending module 810 may be configured to perform operation S510 described above, and details are not repeated here.
A receiving module 820 is configured to receive the target perspective rendered image and the associated perspective rendered image that match the tile index and are sent by the server. In an embodiment, the receiving module 820 may be configured to perform operation S520 described above, and details are not repeated here.
A first fusing module 830 is configured to fuse the target perspective rendered image and a near-view rendered image to obtain a target image. In an embodiment, the first fusing module 830 may be configured to perform operation S530 described above, and details are not repeated here.
According to an embodiment of the disclosure, the rendering device further comprises a storage module.
The storage module is configured to store the associated perspective rendered image into a target storage space.
According to an embodiment of the present disclosure, the rendering apparatus 800 further includes a first generation module.
The first generation module is configured to perform coordinate conversion on the target position to generate a tile index.
According to an embodiment of the present disclosure, the rendering apparatus 800 further includes a fifth determining module and a second generating module.
The fifth determining module is configured to determine, in response to an instruction for rendering the target position, whether the target perspective rendered image exists in the target storage space based on the tile index.
The second generation module is configured to generate, in a case that the target perspective rendered image does not exist in the target storage space, a request for requesting the perspective rendered image based on the tile index.
According to an embodiment of the present disclosure, the rendering apparatus 800 further includes an acquisition module and a second fusion module.
The acquisition module is configured to acquire the target perspective rendered image from the target storage space in a case that the target perspective rendered image exists in the target storage space.
The second fusion module is configured to fuse the target perspective rendered image and a near-view rendered image to obtain a target image.
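The terminal-side flow (look up the target perspective rendered image in the target storage space, and fall back to generating a server request on a miss) can be sketched as follows; class, method, and field names are illustrative, not terms from the disclosure.

```python
class FarViewCache:
    """Sketch of the terminal's target storage space for perspective
    rendered images, keyed by tile index."""

    def __init__(self):
        self._store = {}  # tile index -> rendered image bytes

    def put(self, tile_index, image):
        """Store a received (target or associated) perspective rendered image."""
        self._store[tile_index] = image

    def get_or_request(self, tile_index):
        """Return ('hit', image) when the image exists locally, otherwise
        ('request', payload) describing the request to send to the server."""
        if tile_index in self._store:
            return "hit", self._store[tile_index]
        payload = {"tile_index": tile_index, "index_algorithm": "zxy"}
        return "request", payload
```

Caching the associated perspective rendered images alongside the target one is what lets a later instruction for a nearby position hit locally instead of re-requesting the server.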
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as in an embodiment of the present disclosure.
According to an embodiment of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, there is provided a computer program product including a computer program which, when executed by a processor, implements a method according to an embodiment of the present disclosure.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Various components in device 900 are connected to an input/output (I/O) interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the respective methods and processes described above, such as a rendering method. For example, in some embodiments, the rendering method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the rendering method described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the rendering method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (19)

1. A rendering method, comprising:
determining a tile index from a request sent by a terminal device for requesting a perspective rendered image;
determining a target perspective rendered image and an associated perspective rendered image that match the tile index; and
sending the target perspective rendered image and the associated perspective rendered image to the terminal device.
2. The method of claim 1, wherein the determining the target perspective rendered image and the associated perspective rendered image that match the tile index comprises:
determining the target perspective rendered image that matches the tile index;
determining a cluster of the target perspective rendered image, wherein the scene of the cluster is the same as the scene of the target perspective rendered image; and
determining the associated perspective rendered image based on the cluster.
3. The method of claim 2, wherein the request further comprises an indexing algorithm identification;
the determining the target perspective rendered image that matches the tile index comprises:
determining a target index algorithm based on the index algorithm identification;
determining a storage path of the target perspective rendered image based on the target index algorithm and the tile index; and
determining the target perspective rendered image based on the storage path.
4. A method according to claim 2 or 3, wherein the determining the associated perspective rendered image based on the cluster comprises:
taking a rendered image in the cluster that is adjacent in position to the target perspective rendered image as the associated perspective rendered image.
5. The method of any of claims 2-4, further comprising:
segmenting the model to be rendered according to the partitioning manner of the map tiles, to obtain a plurality of sub-models to be rendered;
determining a perspective sub-model to be rendered from the sub-models to be rendered;
rendering the perspective sub-model to be rendered and identifying tile identifiers, to obtain a perspective rendered image; and
determining the cluster based on a plurality of the perspective rendered images having the same scene.
6. The method of claim 5, further comprising:
determining a storage path of the cluster based on the tile identification of the perspective rendering image; and
storing the cluster according to the storage path.
7. The method of any of claims 1-6, wherein the sending the target perspective rendered image and the associated perspective rendered image to the terminal device comprises:
associating the target perspective rendered image with a target tag for identifying a target image;
associating the associated perspective rendered image with an associated tag for identifying an associated image; and
sending, to the terminal device, the target perspective rendered image associated with the target tag and the associated perspective rendered image associated with the association tag.
8. The method of any of claims 1-7, wherein determining a tile index from the request comprises:
determining an initial tile index from the request; and
taking the initial tile index as the tile index in a case that the initial tile index satisfies a predetermined model partitioning manner.
9. The method of claim 8, wherein the initial tile index comprises a tile hierarchy and tile coordinates;
the method further comprises the steps of:
determining a tile coordinate range matching the tile hierarchy; and
determining, based on the tile coordinates and the tile coordinate range, whether the initial tile index satisfies the predetermined model partitioning manner.
10. A rendering method, comprising:
sending a request to a server for requesting a perspective rendered image, wherein the request includes a tile index;
receiving a target perspective rendered image and an associated perspective rendered image that match the tile index and are sent by the server; and
fusing the target perspective rendered image and a near-view rendered image to obtain a target image.
11. The method of claim 10, further comprising:
storing the associated perspective rendered image into a target storage space.
12. The method of claim 11, further comprising:
determining, in response to an instruction for rendering a target position, whether the target perspective rendered image exists in the target storage space based on the tile index; and
generating, in a case that the target perspective rendered image does not exist in the target storage space, the request for requesting the perspective rendered image based on the tile index.
13. The method of claim 12, further comprising:
acquiring the target perspective rendered image from the target storage space in a case that the target perspective rendered image exists in the target storage space; and
fusing the target perspective rendered image and the near-view rendered image to obtain the target image.
14. The method of any of claims 10 to 13, further comprising:
performing coordinate conversion on a target position to generate the tile index.
15. A rendering apparatus, comprising:
a first determining module, configured to determine a tile index from a request sent by a terminal device for requesting a perspective rendering of an image;
A second determining module, configured to determine a target perspective rendered image and an associated perspective rendered image that match the tile index; and
the first sending module is used for sending the target far-view rendering image and the associated far-view rendering image to the terminal equipment.
16. A rendering apparatus, comprising:
a second sending module, configured to send a request for requesting a perspective rendered image to a server, where the request includes a tile index;
the receiving module is used for receiving the target perspective rendering image and the associated perspective image which are sent by the server and matched with the tile index; and
the first fusion module is used for fusing the target far-view rendering image and the near-view rendering image to obtain a target image.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 14.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 14.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 14.
CN202311014120.2A 2023-08-11 2023-08-11 Rendering method, rendering device, electronic equipment and storage medium Pending CN117036574A (en)

Published as CN117036574A on 2023-11-10; legal status at publication: pending.