CN117409128A - Image processing method, image processing apparatus, electronic device, storage medium, and program product


Info

Publication number: CN117409128A
Application number: CN202210805313.9A
Authority: CN (China)
Prior art keywords: map, pixel, virtual scene, texture, initial
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 唐宏洋
Assignee (current and original): Tencent Technology Shanghai Co Ltd
Application filed by Tencent Technology Shanghai Co Ltd

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene


Abstract

The application provides an image processing method, an image processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product for a virtual scene. The method comprises the following steps: acquiring an initial map of a topographic layer before the virtual scene transformation, and acquiring a result map of the topographic layer after the virtual scene transformation; inserting pixels of the result map on the basis of the initial map to obtain a composite map of the topographic map layer; determining a texture offset point for each adjacent pixel pair according to the transformation progress of the virtual scene, and sampling the composite map of the topographic map layer according to each texture offset point to obtain the texture sampling value of each adjacent pixel pair in the composite map; and performing rendering based on the texture sampling values of the composite map to obtain an image matching the transformation progress of the virtual scene. The method and apparatus improve map sampling efficiency during scene transformation.

Description

Image processing method, image processing apparatus, electronic device, storage medium, and program product
Technical Field
The present invention relates to an image processing technology of a virtual scene, and in particular, to an image processing method, an image processing device, an electronic device, a computer readable storage medium and a computer program product for a virtual scene.
Background
Display technology based on graphics processing hardware expands the channels for perceiving the environment and acquiring information. In particular, the multimedia technology of virtual scenes, supported by human-computer interaction engine technology, can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has various typical application scenes; for example, in virtual scenes such as games, the actual fight process between virtual objects can be simulated.
Scene transformations, such as a season change from summer to winter, often occur in a virtual scene, so that the summer terrain of the virtual scene changes into winter terrain. During such a transformation, the related art needs to sample both the map of the summer terrain and the map of the winter terrain, resulting in low sampling efficiency and thus reduced processing efficiency of image resources.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment, a computer readable storage medium and a computer program product for a virtual scene, which can improve the mapping sampling efficiency in the scene transformation process.
The technical scheme of the embodiment of the application is realized as follows:
The embodiment of the application provides an image processing method of a virtual scene, which comprises the following steps:
acquiring an initial mapping of a topographic layer before the virtual scene is transformed, and acquiring a result mapping of the topographic layer after the virtual scene is transformed;
inserting pixels of the result map on the basis of the initial map to obtain a composite map of the topographic map layer, wherein two pixels at the same position in the initial map and the result map are adjacent in position in the composite map and form adjacent pixel pairs;
determining texture offset points of each adjacent pixel pair according to the transformation progress of the virtual scene, and sampling the synthesized map of the topographic map layer according to each texture offset point to obtain texture sampling values of the topographic map layer, wherein the texture sampling values are calculated based on a bilinear filtering mechanism;
and performing rendering processing based on the texture sampling values of the composite map to obtain an image matched with the transformation progress of the virtual scene.
An embodiment of the present application provides an image processing apparatus for a virtual scene, including:
the acquisition module is used for acquiring an initial mapping of the topographic layer before the virtual scene transformation and acquiring a result mapping of the topographic layer after the virtual scene transformation;
A synthesis module, configured to insert pixels of the result map on the basis of the initial map, to obtain a synthesis map of the topographic map layer, where two pixels located at the same position in the initial map and the result map are adjacent in position in the synthesis map and form an adjacent pixel pair;
the sampling module is used for determining texture offset points of each adjacent pixel pair according to the transformation progress of the virtual scene, and sampling the synthesized map of the topographic map layer according to each texture offset point to obtain texture sampling values of the topographic map layer, wherein the texture sampling values are calculated based on a bilinear filtering mechanism;
and the rendering module is used for executing rendering processing based on the texture sampling value of the composite map to obtain an image matched with the transformation progress of the virtual scene.
In the above solution, the obtaining module is further configured to: and acquiring an initial albedo mapping of the topographic layer before the virtual scene transformation, acquiring an initial normal mapping of the topographic layer before the virtual scene transformation, and taking the initial albedo mapping and the initial normal mapping as the initial mapping.
In the above solution, the obtaining module is further configured to: and obtaining a result albedo map of the topographic layer after the virtual scene is transformed, obtaining a result normal map of the topographic layer after the virtual scene is transformed, and taking the result albedo map and the result normal map as the result map.
In the above solution, the synthesis module is further configured to: perform the following processing for a first pixel with abscissa n and ordinate m in the initial map: take 2n as the abscissa of the first pixel in the composite map and m as the ordinate of the first pixel in the composite map; perform the following processing for a second pixel with abscissa n and ordinate m in the result map: take 2n+1 as the abscissa of the second pixel in the composite map and m as the ordinate of the second pixel in the composite map; the lengths of the initial map and the result map are both N, the widths of the initial map and the result map are both M, N and M are integers greater than or equal to 2, n ranges over 0 ≤ n ≤ N-1, and m ranges over 0 ≤ m ≤ M-1; and generate the composite map based on the abscissa and ordinate of each first pixel of the initial map in the composite map and the abscissa and ordinate of each second pixel of the result map in the composite map.
In the above solution, the synthesis module is further configured to: the following processing is performed for each of the adjacent pixel pairs: acquiring a connecting line between a first center of the first pixel and a second center of the second pixel in the adjacent pixel pair, and determining a first distance from a midpoint of the connecting line to a left edge of the composite map; multiplying the first distance with the numerical value of the transformation progress, and normalizing the multiplication result to obtain a first corrected distance; acquiring an offset distance positively correlated to the first corrected distance; and taking a point on the connecting line, the distance from the first center of which is the offset distance, as the texture offset point.
In the above solution, the synthesis module is further configured to: perform the following processing for a first pixel with abscissa n and ordinate m in the initial map: take n as the abscissa of the first pixel in the composite map and 2m as the ordinate of the first pixel in the composite map; perform the following processing for a second pixel with abscissa n and ordinate m in the result map: take n as the abscissa of the second pixel in the composite map and 2m+1 as the ordinate of the second pixel in the composite map; the lengths of the initial map and the result map are both N, the widths of the initial map and the result map are both M, N and M are integers greater than or equal to 2, n ranges over 0 ≤ n ≤ N-1, and m ranges over 0 ≤ m ≤ M-1; and generate the composite map based on the abscissa and ordinate of each first pixel of the initial map in the composite map and the abscissa and ordinate of each second pixel of the result map in the composite map.
In the above solution, the synthesis module is further configured to: the following processing is performed for each of the adjacent pixel pairs: acquiring a connecting line between a first center of the first pixel and a second center of the second pixel in the adjacent pixel pair, and determining a second distance from a midpoint of the connecting line to an upper edge of the composite map; multiplying the second distance with the value of the transformation progress, and normalizing the multiplied result to obtain a second correction distance; acquiring an offset distance positively correlated to the second corrected distance; and taking a point on the connecting line, the distance from the first center of which is the offset distance, as the texture offset point.
In the above solution, the sampling module is further configured to: acquiring a connection line between a first center of a first pixel and a second center of a second pixel in the adjacent pixel pair, wherein the first pixel is a pixel of the initial map and the second pixel is a pixel of the result map; acquiring an offset distance positively correlated to the transformation progress; and determining a point on the connecting line, which is the offset distance from the first center, as a texture offset point of the adjacent pixel pair.
In the above solution, the sampling module is further configured to: before determining texture offset points of each adjacent pixel pair according to the transformation progress of the virtual scene, acquiring complete transformation time consumption of the virtual scene, and acquiring time length between real-time and starting time of scene transformation; and obtaining the ratio between the time length and the complete time consumption, and determining the ratio as the transformation progress.
In the above solution, the sampling module is further configured to: the following is performed for each of the synthetic maps of the terrain map layer: and sampling the synthesized map according to each texture offset point to obtain a color value of each texture offset point in the synthesized map, and taking the color value of each texture offset point as a texture sampling value of the synthesized map.
In the above solution, the sampling module is further configured to: sampling the synthesized map according to each texture offset point, and executing the following processing for each texture offset point before obtaining the color value of each texture offset point in the synthesized map: acquiring a first color value of a first pixel and a second color value of a second pixel in the synthesis map, wherein the first pixel and the second pixel are derived from adjacent pixel pairs corresponding to the texture offset point; acquiring a third distance between the texture offset point and the first pixel and a fourth distance between the texture offset point and the second pixel; acquiring a first weight inversely related to the third distance and a second weight inversely related to the fourth distance; acquiring a first multiplication result of the first weight and the first color value and a second multiplication result of the second weight and the second color value; and adding the first multiplication result and the second multiplication result to be used as a color value of the texture offset point.
In the above solution, the synthesis module is further configured to: inserting pixels of the result albedo map on the basis of the initial albedo map to obtain a composite albedo map of the topographic map layer; inserting pixels of the result normal map on the basis of the initial normal map to obtain a synthesized normal map of the topographic map layer; and taking the synthesized albedo map and the synthesized normal map as synthesized maps of the topographic map layer.
In the above solution, the rendering module is further configured to: when the number of the topographic layers is plural, the following processing is performed for each of the topographic layers: performing rendering processing on the texture sampling value of each adjacent pixel pair in the synthesized map of the topographic map layer to obtain a first image matched with the transformation progress of the virtual scene; and carrying out fusion processing on the plurality of first images corresponding to the topographic map layers one by one to obtain images matched with the transformation progress of the virtual scene.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the image processing method of the virtual scene provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores executable instructions for realizing the image processing method of the virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or instructions, wherein the computer program or instructions are executed by a processor to realize the image processing method of the virtual scene.
The embodiment of the application has the following beneficial effects:
Pixels of the result map are inserted on the basis of the initial map to obtain a composite map of the topographic map layer, in which two pixels at the same position in the initial map and the result map are adjacent in position and form an adjacent pixel pair. The composite map is then sampled according to the texture offset point of each adjacent pixel pair to obtain the texture sampling values of the topographic map layer, where the texture sampling values are calculated by the bilinear filtering mechanism and the texture offset points are derived from the scene transformation progress. The texture sampling value at each texture offset point therefore accurately represents the weighted mixture of the adjacent pixel pair adapted to the scene transformation progress, which is equivalent to replacing two samples with a single sample, improving map sampling efficiency during scene transformation.
Drawings
FIG. 1 is a schematic diagram of a mapping transformation of a scene change in the related art;
fig. 2 is a schematic structural diagram of an image processing system of a virtual scene according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4A to fig. 4C are schematic flow diagrams of an image processing method of a virtual scene according to an embodiment of the present application;
fig. 5A to 5C are schematic interface diagrams of an image processing method of a virtual scene according to an embodiment of the present application;
fig. 6 is a schematic pixel arrangement diagram of an image processing method of a virtual scene according to an embodiment of the present application;
FIG. 7 is a schematic diagram of pixel arrangement of a composite map provided in an embodiment of the present application;
fig. 8 is an interface schematic diagram of an image processing method of a virtual scene provided in an embodiment of the present application;
fig. 9 is an interface schematic diagram of an image processing method of a virtual scene provided in an embodiment of the present application;
fig. 10 is a flowchart of an image processing method of a virtual scene provided in an embodiment of the present application;
fig. 11 is a schematic pixel arrangement diagram of an image processing method of a virtual scene according to an embodiment of the present application;
FIG. 12 is a schematic diagram of pixel arrangements of a composite map provided in an embodiment of the present application;
Fig. 13 is an interface schematic diagram of an image processing method of a virtual scene provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not represent a specific ordering of the objects, it being understood that the "first", "second", "third" may be interchanged with a specific order or sequence, as permitted, to enable embodiments of the application described herein to be practiced otherwise than as illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before further describing embodiments of the present application in detail, the terms and expressions that are referred to in the embodiments of the present application are described, and are suitable for the following explanation.
1) Bilinear filtering: a texture filtering mode that smooths a texture when it is scaled for display. A texture displayed on a screen generally cannot match the stored texture exactly, without any loss. A pixel may be represented using points between texels; these points may be assumed to lie at the center, the upper-left corner, or another position of each texel cell. Bilinear filtering performs bilinear interpolation using the four texels nearest to the sampled pixel.
2) Terrain map layer (Terrain layer): a layer representing a single terrain material, comprising an albedo map and a normal map. A terrain is formed from a plurality of Terrain layers. For example, grassland is one Terrain layer and sand is another Terrain layer; mixing grassland and sand in certain proportions in different areas yields a terrain with both grassland and sand.
3) Virtual scene: a scene output by a device that differs from the real world. Visual perception of the virtual scene can be formed by the naked eye or with the assistance of devices, for example a two-dimensional image output by a display screen, or a three-dimensional image output by stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various simulated real-world sensations such as auditory, tactile, olfactory, and motion perception can be formed by various possible hardware.
4) Client: an application program running in the terminal for providing various services, such as a game client.
5) Cloud storage: a concept extended and developed from cloud computing, referring to a system that, through functions such as cluster application, grid technology, or a distributed file system, integrates a large number of storage devices of different types in a network via application software so that they work cooperatively and jointly provide data storage and service access functions to the outside. When the core of a cloud computing system's operation and processing is the storage and management of large amounts of data, the system needs to be configured with a large number of storage devices, and the cloud computing system then turns into a cloud storage system; cloud storage is therefore a cloud computing system whose core is data storage and management.
In the related art, for a season change in a virtual scene, both the map before the change and the map after the change are sampled, the weights of the two maps are calculated according to the seasonal transformation progress, and the two samples are weighted-averaged. Taking a terrain A that gradually changes from summer to winter as an example, the albedo map of one Terrain layer in terrain A needs to change gradually from the grassland map (initial map) shown in fig. 1 to the snowfield map (result map). During the change, the grassland map is first sampled to obtain the color value Cs, and the snowfield map is then sampled to obtain the color value Cw. Let the duration of the season change be T and the time difference between the real-time moment and the season change starting moment be t; the progress of the season change is then p = t/T. The final color value C can be calculated with reference to formula (1):
C = (1-p)×Cs + p×Cw (1);
where C is the final color value, p is the progress of the seasonal change, Cs is the color value obtained by sampling the grassland map, and Cw is the color value obtained by sampling the snowfield map.
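For illustration only, the related-art blend of formula (1) can be sketched as follows; this is a minimal sketch in which the function and variable names are assumptions for exposition, not taken from the patent:

```python
# Sketch of the related-art blend of formula (1): two texture fetches, then a mix.
def related_art_blend(cs: float, cw: float, t: float, T: float) -> float:
    """cs: color sampled from the grassland (initial) map;
    cw: color sampled from the snowfield (result) map;
    t: time elapsed since the season change started; T: total change duration."""
    p = min(t / T, 1.0)             # progress of the season change, p = t/T
    return (1.0 - p) * cs + p * cw  # weighted average of the two samples
```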
The albedo maps of all Terrain layers of terrain A may be mixed in the manner described above, and the normal maps of all Terrain layers of terrain A may be mixed in the same manner, so as to obtain the final color values of the albedo map and of the normal map of each Terrain layer.
The disadvantage of the related art is the large number of map samples: when no seasonal transformation is in progress, only the grassland map needs to be sampled, and the sampling result is rendered to form the summer grassland effect of terrain A; when a seasonal transformation is required, both the grassland map and the snowfield map need to be sampled, so the number of albedo map and normal map samples is doubled.
The embodiments of the application provide an image processing method, an image processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product for a virtual scene, which sample the texture offset point of a composite map to return a texture sampling value calculated based on the bilinear filtering mechanism, thereby improving map sampling efficiency during scene transformation while accurately representing the weighted mixture of each adjacent pixel pair adapted to the scene transformation progress. Exemplary applications of the electronic device provided in the embodiments of the application are described below; the electronic device may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device).
In order to facilitate easier understanding of the image processing method for a virtual scene provided by the embodiment of the present application, first, an exemplary implementation scenario of the image processing method for a virtual scene provided by the embodiment of the present application is described, where the virtual scene may be output based on a terminal entirely or based on cooperation of the terminal and a server.
In some embodiments, the virtual scene may be an environment for interaction of game characters, for example, the game characters may fight in the virtual scene, and both parties may interact in the virtual scene by controlling actions of the virtual objects, so that the user can relax life pressure in the game process.
In another implementation scenario, referring to fig. 2, fig. 2 is a schematic application mode diagram of an image processing method of a virtual scenario provided in an embodiment of the present application, which is applied to a terminal 400 and a server 200, and is generally applicable to an application mode that depends on a computing capability of the server 200 to complete a virtual scenario calculation and output a virtual scenario at the terminal 400.
As an example, before the terminal runs the client (offline state), the server 200 acquires the initial map of the topographic layer before the virtual scene transformation and the result map of the topographic layer after the virtual scene transformation. The server 200 inserts pixels of the result map on the basis of the initial map to obtain a composite map of the topographic map layer, in which two pixels at the same position in the initial map and the result map are adjacent in position and form an adjacent pixel pair. After an account logs in to the client (e.g., an online game application) running on the terminal 400, the client is in the running state (online state); the client receives a scene transformation request and sends it to the server 200, which determines the texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene and samples the composite map of the topographic map layer according to each texture offset point to obtain the texture sampling values of the topographic map layer, the texture sampling values being calculated based on the bilinear filtering mechanism. Rendering is performed based on the texture sampling values of the composite map to obtain an image matching the transformation progress of the virtual scene, and the server 200 returns the image to the terminal 400 for presentation.
As another example, before the terminal runs the client (offline state), the terminal 400 acquires the initial map of the topographic layer before the virtual scene transformation and the result map of the topographic layer after the virtual scene transformation. The terminal 400 inserts pixels of the result map on the basis of the initial map to obtain a composite map of the topographic map layer, in which two pixels at the same position in the initial map and the result map are adjacent in position and form an adjacent pixel pair. After an account logs in to the client (e.g., an online game application) running on the terminal 400, the client is in the running state (online state); the terminal determines the texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene and samples the composite map of the topographic map layer according to each texture offset point to obtain the texture sampling values of the topographic map layer, the texture sampling values being calculated based on the bilinear filtering mechanism. Rendering is performed based on the texture sampling values of the composite map to obtain an image matching the transformation progress of the virtual scene, which is presented on the terminal 400.
In some embodiments, the terminal 400 may implement the image processing method of the virtual scene provided in the embodiments of the present application by running a computer program, for example, the computer program may be a native program or a software module in an operating system; it may be a local (Native) Application (APP), i.e. a program that needs to be installed in an operating system to run, such as a game APP (i.e. the client described above), a live APP; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also a game applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
The embodiments of the application can be realized by means of cloud technology, a hosting technology that integrates resources such as hardware, software, and networks in a wide area network or a local area network to realize computation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like based on the cloud computing business model; these technologies can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
As an example, the server 200 may be a stand-alone physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device of the image processing method for applying a virtual scene according to the embodiment of the present application, and the electronic device is taken as an example to describe the electronic device, and the terminal 400 shown in fig. 3 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 3 as bus system 440.
The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 450 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451 including system programs, e.g., framework layer, core library layer, driver layer, etc., for handling various basic system services and performing hardware-related tasks, for implementing various basic services and handling hardware-based tasks;
Network communication module 452, for reaching other computing devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include: Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), and the like;
A presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the image processing apparatus for a virtual scene provided in the embodiments of the application may be implemented in software. Fig. 3 shows an image processing apparatus 455 of a virtual scene stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: an acquisition module 4551, a synthesis module 4552, a sampling module 4553, and a rendering module 4554. These modules are logical, and thus may be arbitrarily combined or further split according to the functions implemented; the functions of each module are described below.
The image processing method of the virtual scene provided by the embodiment of the application will be described with reference to an exemplary application and implementation of the terminal provided by the embodiment of the application.
Referring to fig. 4A, fig. 4A is a flowchart of an image processing method of a virtual scene according to an embodiment of the present application, and will be described with reference to steps 101 to 104 shown in fig. 4A.
In step 101, an initial map of the terrain layer before the virtual scene transformation is obtained, and a resulting map of the terrain layer after the virtual scene transformation is obtained.
In some embodiments, the obtaining of the initial map of the topographic map layer before the virtual scene transformation in step 101 may be implemented by the following technical scheme: and acquiring an initial albedo map of the topographic map layer before the virtual scene transformation, and acquiring an initial normal map of the topographic map layer before the virtual scene transformation, wherein the initial albedo map and the initial normal map are used as initial maps.
In some embodiments, the obtaining of the result map of the topographic map layer after the virtual scene transformation in step 101 may be implemented by the following technical scheme: and obtaining the result albedo mapping of the topographic layer after the virtual scene transformation, obtaining the result normal mapping of the topographic layer after the virtual scene transformation, and taking the result albedo mapping and the result normal mapping as the result mapping.
Acquiring both the albedo map and the normal map ensures the texture fidelity of the terrain blocks in the virtual scene, so that more realistic terrain can be rendered.
As an example, the initial map is the map before the virtual scene transformation, and the result map is the map after the virtual scene transformation. The virtual scene transformation may be a change from summer to winter, i.e., the initial map is a summer map and the result map is a winter map; or it may be a change from spring to autumn, i.e., the initial map is a spring map and the result map is an autumn map. Whether initial map or result map, each map includes two types: the normal map and the albedo map.
As an example, a normal map marks the normal direction at each point of the bumpy surface of the original object using the red, green, and blue color channels; it represents the original bumpy surface as another, parallel surface that is actually just a smooth plane. In terms of visual effect, this is more efficient than the original bumpy surface: if a light source is applied at a specific position, a surface with a low level of detail can produce the accurate illumination direction and reflection effect of a highly detailed surface. The albedo map is used to represent the texture and color of the model; it is itself a color and texture map.
In step 102, pixels of the resulting map are inserted on the basis of the initial map to obtain a composite map of the topographic map layer.
As an example, two pixels at the same position in the initial map and the result map are positioned adjacent in the composite map and constitute adjacent pixel pairs.
In some embodiments, in step 102, the pixels of the result map are inserted on the basis of the initial map, so as to obtain a composite map of the topographic map layer, which may be implemented by the following technical scheme: inserting pixels of the result albedo map on the basis of the initial albedo map to obtain a composite albedo map of the topographic map layer; inserting pixels of the result normal map on the basis of the initial normal map to obtain a synthesized normal map of the topographic map layer; and taking the synthesized albedo map and the synthesized normal map as the synthesized map of the topographic map layer.
As an example, since the map includes two types (normal map and albedo map), the composite map also includes two types (composite normal map and composite albedo map). The composite albedo map is derived based on the initial albedo map and the resulting albedo map, and the composite normal map is derived based on the initial normal map and the resulting normal map.
In some embodiments, referring to fig. 4B, fig. 4B is a flowchart of an image processing method of a virtual scene provided in the embodiment of the present application, in step 102, pixels of a result map are inserted on the basis of an initial map, and a composite map of a topographic map layer may be obtained through steps 1021 to 1023 in fig. 4B.
In step 1021, the following is performed for the first pixel of the initial map having an abscissa n and an ordinate m: let 2n be the abscissa of the first pixel in the composite map and m be the ordinate of the first pixel in the composite map.
In step 1022, the following is performed for the second pixel with the abscissa n and the ordinate m in the result map: 2n+1 is taken as the abscissa of the second pixel in the composite map and m is taken as the ordinate of the second pixel in the composite map.
As an example, the lengths of the initial map and the result map are both N and their widths are both M, where N and M are integers greater than or equal to 2, n ranges over 0 ≤ n ≤ N-1, and m ranges over 0 ≤ m ≤ M-1.
In step 1023, a composite map is generated based on the abscissa and ordinate of each first pixel in the initial map in the composite map and the abscissa and ordinate of each second pixel in the resulting map in the composite map.
The left-right adjacent insertion arrangement reduces the difficulty and the number of sampling operations: only one sample is needed per adjacent pixel pair, replacing the scheme of sampling the initial map and the result map separately, and thus improving sampling efficiency.
As an example, when the initial map is an initial albedo map and the resulting map is also a resulting albedo map, the composite map is a composite albedo map, and when the initial map is an initial normal map and the resulting map is also a resulting normal map, the composite map is a composite normal map.
As an example, let the sizes of the initial map and the result map both be a×a, where a is a positive integer, and let A(u, v) be the pixel of the initial map at coordinates (u, v) and B(u, v) be the pixel of the result map at coordinates (u, v). The composite map C is twice as long as the initial map and of the same width, i.e., its size is 2a×a. Referring to fig. 6, fig. 6 is a schematic pixel arrangement diagram of an image processing method for a virtual scene according to an embodiment of the application; pixel A and pixel B are arranged as shown in fig. 6, with pixel A placed at coordinates (2u, v) in composite map C and pixel B placed at coordinates (2u+1, v) in composite map C, i.e., pixel A and pixel B become two adjacent pixels in the composite map. Referring to fig. 7, fig. 7 is a schematic diagram of the pixel arrangement of a composite map provided in an embodiment of the application; the arrangement of each pixel in composite map C is shown in fig. 7. Referring to fig. 8, fig. 8 is an interface schematic diagram of an image processing method of a virtual scene provided in an embodiment of the application; two actual original maps (each of size 512×512) are combined by offline synthesis processing into one composite map (of size 1024×512). Each Terrain layer of the terrain needs to be processed once in the manner described above; for 10 Terrain layers, 10 composite maps are obtained in one-to-one correspondence.
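For illustration, the offline composition step of steps 1021 to 1023 can be sketched as follows; this is a minimal sketch assuming the maps are in-memory arrays, and the function name is an assumption:

```python
# Sketch of the left-right interleaving: initial pixel (n, m) -> column 2n,
# result pixel (n, m) -> column 2n+1 of the composite map.
import numpy as np

def compose_horizontal(initial: np.ndarray, result: np.ndarray) -> np.ndarray:
    """initial, result: arrays of shape (M, N, channels) with identical shapes."""
    assert initial.shape == result.shape
    rows, cols = initial.shape[:2]
    composite = np.empty((rows, 2 * cols) + initial.shape[2:], dtype=initial.dtype)
    composite[:, 0::2] = initial   # even columns hold the initial map
    composite[:, 1::2] = result    # odd columns hold the result map
    return composite
```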
In some embodiments, in step 102, inserting the pixels of the result map on the basis of the initial map to obtain the composite map of the topographic map layer may also be implemented by the following technical scheme: the following processing is performed for the first pixel with abscissa n and ordinate m in the initial map: taking n as the abscissa of the first pixel in the composite map and 2m as the ordinate of the first pixel in the composite map; the following processing is performed for the second pixel with abscissa n and ordinate m in the result map: taking n as the abscissa of the second pixel in the composite map and 2m+1 as the ordinate of the second pixel in the composite map; the lengths of the initial map and the result map are both N, the widths are both M, N and M are integers greater than or equal to 2, n ranges over 0 ≤ n ≤ N-1, and m ranges over 0 ≤ m ≤ M-1; the composite map is generated based on the abscissa and ordinate of each first pixel of the initial map in the composite map and the abscissa and ordinate of each second pixel of the result map in the composite map.
The top-bottom adjacent insertion arrangement likewise reduces the difficulty and the number of sampling operations: only one sample is needed per adjacent pixel pair, replacing the scheme of sampling the initial map and the result map twice, and thus improving sampling efficiency.
As an example, let the sizes of the initial map and the result map both be a×a, where a is a positive integer, and let A(u, v) be the pixel of the initial map at coordinates (u, v) and B(u, v) be the pixel of the result map at coordinates (u, v). The composite map C is twice as wide as the initial map and of the same length, i.e., its size is a×2a. Referring to fig. 11, fig. 11 is a schematic pixel arrangement diagram of an image processing method for a virtual scene according to an embodiment of the application; pixel A and pixel B are arranged as shown in fig. 11, with pixel A placed at coordinates (u, 2v) in composite map C and pixel B placed at coordinates (u, 2v+1) in composite map C, i.e., pixel A and pixel B become two adjacent pixels in the composite map. Referring to fig. 12, fig. 12 is a schematic diagram of the pixel arrangement of a composite map provided in an embodiment of the application; the arrangement of each pixel in composite map C is shown in fig. 12. Referring to fig. 13, fig. 13 is an interface schematic diagram of an image processing method of a virtual scene provided in an embodiment of the application; two actual original maps (each of size 512×512) are combined by offline synthesis processing into one composite map (of size 512×1024).
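The vertical variant can be sketched under the same assumptions as the horizontal sketch above:

```python
# Sketch of the top-bottom interleaving: initial pixel (n, m) -> row 2m,
# result pixel (n, m) -> row 2m+1 of the composite map.
import numpy as np

def compose_vertical(initial: np.ndarray, result: np.ndarray) -> np.ndarray:
    assert initial.shape == result.shape
    rows, cols = initial.shape[:2]
    composite = np.empty((2 * rows, cols) + initial.shape[2:], dtype=initial.dtype)
    composite[0::2, :] = initial   # even rows hold the initial map
    composite[1::2, :] = result    # odd rows hold the result map
    return composite
```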
In step 103, texture offset points of each adjacent pixel pair are determined according to the transformation progress of the virtual scene, and the synthesized map of the topographic map layer is sampled according to each texture offset point, so as to obtain texture sampling values of the topographic map layer.
In some embodiments, before determining the texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene in step 103, obtaining the complete transformation time consumption of the virtual scene, and obtaining the duration between the real-time moment and the starting moment of the scene transformation; and obtaining the ratio between the time length and the complete time consumption, and determining the ratio as a transformation progress.
The transformation progress of the virtual scene can be measured from the time dimension by the ratio of the time length to the complete time consumption, so that the transformation progress has stronger interpretability.
As an example, the complete transformation time consumption of the virtual scene is preconfigured; for example, the complete transformation from summer to winter takes 3 seconds. If the starting moment of the scene transformation is 2:14:25 pm and the real-time moment is 2:14:26 pm, the duration is 1 second, and the ratio of the duration to the complete time consumption is 1/3, i.e., the value of the transformation progress is 1/3.
In some embodiments, referring to fig. 4C, fig. 4C is a flowchart illustrating an image processing method of a virtual scene provided in the embodiments of the present application, and determining, in step 103, a texture offset point of each adjacent pixel pair according to a transformation progress of the virtual scene may be implemented by performing steps 1031 to 1032 in fig. 4C for each adjacent pixel pair.
In step 1031, a connection between a first center of a first pixel and a second center of a second pixel in a pair of adjacent pixels is obtained.
In step 1032, an offset distance is obtained that is positively correlated to the transformation schedule, and a point on the link that is offset from the first center is determined as a texture offset point for the adjacent pixel pair.
As an example, the first pixel is a pixel of the initial map and the second pixel is a pixel of the result map. Referring to fig. 9, fig. 9 is an interface schematic diagram of an image processing method of a virtual scene provided in an embodiment of the application; the two end points of the dotted line are the centers of the first pixel A and the second pixel B, and point M is the sampling position used as the texture offset point. The texture offset point can be described by its distance x from the first pixel A, calculated as x = t/T (with the center-to-center distance between A and B normalized to 1), where t is the duration from the starting moment of the scene transformation to the real-time moment and T is the complete transformation time consumption of the virtual scene.
The texture offset point is determined through the offset distance positively related to the transformation progress, so that the sampling position of each real-time moment can be effectively controlled, the degree that the texture sampling value of each real-time moment is biased to the result mapping and the degree that the texture sampling value is far away from the initial mapping are accurately controlled, and the accuracy and the fidelity of the rendering effect are improved.
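For illustration, the sampling position for an adjacent pair under the left-right arrangement can be sketched as follows; this is a minimal sketch assuming the reconstruction x = t/T above, and the function name and UV convention (texel centers at half-integer offsets) are assumptions:

```python
# Sketch: UV coordinates of the texture offset point for the adjacent pair
# stored at columns (2n, 2n+1), row m, of a composite map of size width x height.
def offset_point_uv(n: int, m: int, t: float, T: float,
                    width: int, height: int) -> tuple[float, float]:
    x = min(t / T, 1.0)               # offset distance from the first pixel A
    u = (2 * n + 0.5 + x) / width     # slides from the center of A toward B
    v = (m + 0.5) / height
    return u, v
```

A single hardware bilinear fetch at this UV then returns the mixture of A and B weighted by the transformation progress.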
In some embodiments, the texture sample value in step 103 is calculated in advance based on a bilinear filtering mechanism. In step 103, the texture sampling value of each adjacent pixel pair in the composite map is obtained by sampling the composite map of the topographic map according to each texture offset point, which can be achieved by the following technical scheme: the following is performed for each composite map of the terrain map layer: and sampling the synthetic map according to each texture offset point to obtain a color value of each texture offset point in the synthetic map, and taking the color value of each texture offset point as a texture sampling value of the synthetic map.
As an example, the topography layer comprises two synthesis maps, i.e. the topography layer comprises a synthesis normal map and a synthesis albedo map, for each of which a sampling process is performed. Taking the synthetic albedo map as an example, when the size of the initial albedo map is 512×512, and the size of the resultant albedo map is 512×512, and the size of the synthetic albedo map is 1024×512, the synthetic albedo map includes 512 adjacent pixel pairs, so that 512 texture sampling values can be returned when sampling the synthetic albedo map, where the texture sampling values are color values of texture offset points, and each adjacent pixel pair corresponds to one texture offset point.
In some embodiments, the sampling process is performed on the composite map according to each texture offset point, and before obtaining the color value of each texture offset point in the composite map, the following process is performed for each texture offset point: acquiring a first color value of a first pixel and a second color value of a second pixel in the composite map, wherein the first pixel and the second pixel are derived from adjacent pixel pairs corresponding to texture offset points; acquiring a third distance between the texture offset point and the first pixel and a fourth distance between the texture offset point and the second pixel; acquiring a first weight inversely related to the third distance and a second weight inversely related to the fourth distance; acquiring a first multiplication result of the first weight and the first color value and a second multiplication result of the second weight and the second color value; and adding the first multiplication result and the second multiplication result to be a color value of the texture offset point.
The first weight and the second weight are determined through the third distance and the fourth distance, the first color value and the second color value are fused through the first weight and the second weight, the interpolation equivalent to the bilinear filtering mechanism is realized, and the color value of the texture offset point is accurately represented through the interpolation.
As an example, referring to fig. 9, fig. 9 is an interface schematic diagram of an image processing method of a virtual scene provided in an embodiment of the application; the two end points of the dotted line are the center points of the first pixel A and the second pixel B, and point M is the sampling position used as the texture offset point. Performing texture sampling at point M yields the color value of point M, which is the result of mixing the first pixel A and the second pixel B with WA and WB as weights. A first color value of the first pixel A and a second color value of the second pixel B in the composite map are acquired, together with the third distance x between the texture offset point (point M) and the first pixel A and the fourth distance y between the texture offset point M and the second pixel B. A first weight inversely related to the third distance (for example, 1-x) and a second weight inversely related to the fourth distance (for example, 1-y) are acquired, and the first multiplication result of the first weight and the first color value and the second multiplication result of the second weight and the second color value are computed. The sum of the first and second multiplication results is used as the color value of the texture offset point, which is equivalent to a weighted addition of the first and second color values: the closer the texture offset point (point M) is to the first pixel A (the smaller the third distance), the larger the weight of the first color value; the closer the texture offset point (point M) is to the second pixel B (the smaller the fourth distance), the larger the weight of the second color value.
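For illustration, the weight mixing at the offset point can be sketched as follows; a minimal sketch assuming the center-to-center distance between A and B is normalized to 1, so that y = 1 - x:

```python
# Sketch: color at texture offset point M located at distance x from pixel A
# and y = 1 - x from pixel B, matching the bilinear-filtering weights above.
def blend_at_offset(color_a: float, color_b: float, x: float) -> float:
    y = 1.0 - x            # fourth distance, from M to the second pixel B
    w_a = 1.0 - x          # first weight, inversely related to x
    w_b = 1.0 - y          # second weight, inversely related to y (equals x)
    return w_a * color_a + w_b * color_b
```

Because w_a + w_b = 1, this single fetch reproduces exactly the weighted average that the related art obtained with two separate samples.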
In some embodiments, the determining the texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene in step 103 may be implemented by the following technical scheme: the following processing is performed for each adjacent pixel pair: acquiring a connecting line between a first center of a first pixel and a second center of a second pixel in adjacent pixel pairs, and determining a first distance from the midpoint of the connecting line to the left edge of the composite map; multiplying the first distance by the value of the transformation progress, and normalizing the multiplied result to obtain a first corrected distance; acquiring an offset distance positively correlated to the first correction distance; and taking a point on the connecting line, which is an offset distance from the first center, as a texture offset point.
The texture offset point is determined through an offset distance positively correlated with the transformation progress, so that the sampling position at each real-time moment can be effectively controlled; that is, the degree to which the texture sampling value at each position is biased toward the result map and away from the initial map is accurately controlled at each real-time moment, improving the accuracy and fidelity of the rendering effect.
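As a hedged illustration of this left-edge variant, the sketch below computes a texture offset point in texel units; the specific normalization (dividing by the composite width) and the cap at one texel are assumptions, since the text only requires the offset distance to be positively correlated with the first corrected distance.

```python
# Sketch of the left-edge variant; helper names and the exact
# normalization are illustrative assumptions, not fixed by the patent.

def texture_offset_point(first_center_u, v, composite_width, progress):
    """Compute the texture offset point of one adjacent pixel pair.

    first_center_u: u of the first pixel's center, in texels.
    composite_width: width of the composite map, in texels (2a).
    progress: transformation progress of the virtual scene, in [0, 1].
    """
    # Midpoint of the line joining the two pixel centers (the centers
    # are one texel apart), measured from the left edge (u = 0).
    first_distance = first_center_u + 0.5
    # Multiply by the progress and normalize to obtain the first
    # corrected distance.
    first_corrected = (first_distance * progress) / composite_width
    # Offset distance positively correlated with the corrected distance;
    # here simply proportional, capped at one texel so the sample stays
    # between the two pixel centers.
    offset_distance = min(first_corrected, 1.0)
    return (first_center_u + offset_distance, v)

# A pair near the left edge moves only slightly at 50% progress:
print(texture_offset_point(20.5, 0, 1024, 0.5))
```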
As an example, the sizes of the initial map and the result map are both a×a, where a is a positive integer, A(u, v) is the pixel of the initial map at coordinates (u, v), and B(u, v) is the pixel of the result map at coordinates (u, v). The length of the composite map C is twice the length of the initial map and its width is the same as that of the initial map, so the size of the composite map C is 2a×a. Referring to fig. 6, fig. 6 is a schematic pixel arrangement diagram of the image processing method for a virtual scene according to an embodiment of the present application; the pixel A and the pixel B are arranged in the manner shown in fig. 6, where the pixel A is placed at the coordinates (2u, v) in the composite map C and the pixel B is placed at the coordinates (2u+1, v) in the composite map C, i.e., the pixel A and the pixel B are arranged as two adjacent pixels in the composite map. Referring to fig. 7, fig. 7 is a schematic diagram of the pixel arrangement of a composite map provided in an embodiment of the present application; the arrangement of each pixel in the composite map C is shown in fig. 7.
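A minimal offline-synthesis sketch follows, assuming the maps are NumPy RGB arrays indexed as (row v, column u); the vertical variant of figs. 11 and 12 would interleave rows instead of columns.

```python
import numpy as np

# Offline construction of the composite map C from the initial map A and
# the result map B (both a×a), following the (2u, v) / (2u+1, v)
# interleaving described above.

def build_composite(initial_map: np.ndarray, result_map: np.ndarray) -> np.ndarray:
    """Interleave two a×a maps column-wise into a composite of size 2a×a."""
    assert initial_map.shape == result_map.shape
    h, w, c = initial_map.shape                 # rows v, columns u, channels
    composite = np.empty((h, 2 * w, c), dtype=initial_map.dtype)
    composite[:, 0::2, :] = initial_map         # pixel A -> column 2u
    composite[:, 1::2, :] = result_map          # pixel B -> column 2u+1
    return composite

grass = np.zeros((512, 512, 3), dtype=np.uint8)   # stand-ins for real maps
snow = np.full((512, 512, 3), 255, dtype=np.uint8)
print(build_composite(grass, snow).shape)          # (512, 1024, 3)
```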
Since different pixel pairs are located at different positions in the composite map, different texture offset points can be determined for different pixel pairs. When the texture offset point is closer to the first pixel, the sampling value is closer to the initial map; when the texture offset point is closer to the second pixel, the sampling value is closer to the result map. The texture offset points of adjacent pixel pairs at different positions can therefore represent different scene transformation effects. For example, for the same transformation progress (i.e., the same real-time moment), suppose the adjacent pixel pair a is closer to the left edge of the composite map than the adjacent pixel pair b (the first distance of the adjacent pixel pair a is smaller than the first distance of the adjacent pixel pair b). Then the distance between the texture sampling point of the adjacent pixel pair a and its first pixel is smaller than the corresponding distance for the adjacent pixel pair b, so that at any moment the texture sampling point of the adjacent pixel pair a is biased more toward the initial map than that of the adjacent pixel pair b. A block-wise uneven transformation effect thus appears in the scene transformation process: in a summer-to-winter transformation, for example, the terrain covered by the adjacent pixel pair a remains closer to the summer appearance while the terrain covered by the adjacent pixel pair b has already shifted further toward the winter appearance.
In some embodiments, the determining the texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene in step 103 may be implemented by the following technical scheme: the following processing is performed for each adjacent pixel pair: acquiring a connecting line between a first center of a first pixel and a second center of a second pixel in adjacent pixel pairs, and determining a second distance from the middle point of the connecting line to the upper edge of the composite map; multiplying the second distance by the value of the transformation progress, and normalizing the multiplied result to obtain a second corrected distance; acquiring an offset distance positively correlated to the second correction distance; and taking a point on the connecting line, which is an offset distance from the first center, as a texture offset point.
The texture offset point is determined through an offset distance positively correlated with the transformation progress, so that the sampling position at each real-time moment can be effectively controlled; that is, the degree to which the texture sampling value at each position is biased toward the result map and away from the initial map is accurately controlled at each real-time moment, improving the accuracy and fidelity of the rendering effect.
As an example, the sizes of the initial map and the result map are both a×a, where a is a positive integer, A(u, v) is the pixel of the initial map at coordinates (u, v), and B(u, v) is the pixel of the result map at coordinates (u, v). The width of the composite map C is twice the width of the initial map and its length is the same as that of the initial map, so the size of the composite map C is a×2a. Referring to fig. 11, fig. 11 is a schematic pixel arrangement diagram of the image processing method for a virtual scene according to an embodiment of the present application; the pixel A and the pixel B are arranged in the manner shown in fig. 11, where the pixel A is placed at the coordinates (u, 2v) in the composite map C and the pixel B is placed at the coordinates (u, 2v+1) in the composite map C, i.e., the pixel A and the pixel B are arranged as two adjacent pixels in the composite map. Referring to fig. 12, fig. 12 is a schematic diagram of the pixel arrangement of a composite map provided in an embodiment of the present application; the arrangement of each pixel in the composite map C is shown in fig. 12.
Since different pixel pairs are located at different positions in the composite map, different texture offset points can be determined for different pixel pairs. When the texture offset point is closer to the first pixel, the sampling value is closer to the initial map; when the texture offset point is closer to the second pixel, the sampling value is closer to the result map. The texture offset points of adjacent pixel pairs at different positions can therefore represent different scene transformation effects. For example, for the same transformation progress (i.e., the same real-time moment), suppose the adjacent pixel pair a is closer to the upper edge of the composite map than the adjacent pixel pair b (the second distance of the adjacent pixel pair a is smaller than the second distance of the adjacent pixel pair b). Then the distance between the texture sampling point of the adjacent pixel pair a and its first pixel is smaller than the corresponding distance for the adjacent pixel pair b, so that at any moment the texture sampling point of the adjacent pixel pair a is biased more toward the initial map than that of the adjacent pixel pair b. A block-wise uneven transformation effect thus appears in the scene transformation process: in a summer-to-winter transformation, for example, the terrain covered by the adjacent pixel pair a remains closer to the summer appearance while the terrain covered by the adjacent pixel pair b has already shifted further toward the winter appearance.
In step 104, a rendering process is performed based on texture sample values of the composite map, resulting in an image matching the transformation progress of the virtual scene.
In some embodiments, the rendering process performed in step 104 based on the texture sampling value of the composite map, to obtain an image matching with the transformation progress of the virtual scene may be implemented by the following technical scheme: when the number of topographical layers is plural, the following processing is performed for each topographical layer: performing rendering processing on texture sampling values of each adjacent pixel pair in the synthesized map of the topographic map layer to obtain a first image matched with the transformation progress of the virtual scene; and carrying out fusion processing on a plurality of first images corresponding to the topographic map layers one by one to obtain images matched with the transformation progress of the virtual scene.
For example, for any one of the plots in the virtual scene, each plot may have at least one topographic layer. For a certain topographic layer A, a plurality of texture sampling values of the synthesized normal map and a plurality of texture sampling values of the synthesized albedo map can be obtained, and a first image corresponding to the topographic layer A can be rendered based on these sampling values. When the plot has a topographic layer A and a topographic layer B, the first image of the topographic layer A and the first image of the topographic layer B are fused: for example, for the pixel at the upper left corner of the first image of the topographic layer A and the pixel at the upper left corner of the first image of the topographic layer B, the color values of the two pixels are fused according to the weights of the topographic layer A and the topographic layer B, yielding the color value of the pixel at the upper left corner of the fused image.
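The per-pixel fusion could look like the following sketch; the per-layer weights are an assumption (in an engine they would typically come from the terrain's blend-weight maps rather than being fixed scalars).

```python
import numpy as np

# Hedged sketch of the per-pixel fusion of the first images rendered for
# each topographic layer; the scalar layer weights are an assumption.

def fuse_layer_images(first_images, layer_weights):
    """Weighted per-pixel blend of the first images of all terrain layers."""
    weights = np.asarray(layer_weights, dtype=np.float32)
    weights /= weights.sum()                    # normalize so weights sum to 1
    fused = np.zeros_like(first_images[0], dtype=np.float32)
    for image, w in zip(first_images, weights):
        fused += w * image.astype(np.float32)   # blends every pixel, e.g. top-left
    return fused.astype(first_images[0].dtype)

layer_a = np.full((4, 4, 3), 60, dtype=np.uint8)   # toy first images
layer_b = np.full((4, 4, 3), 200, dtype=np.uint8)
print(fuse_layer_images([layer_a, layer_b], [0.7, 0.3])[0, 0])
```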
Pixels of the result map are inserted on the basis of the initial map to obtain the composite map of the topographic layer, wherein two pixels at the same position in the initial map and the result map are adjacent in position in the composite map and form an adjacent pixel pair. The composite map is sampled according to the texture offset point of each adjacent pixel pair to obtain the texture sampling values of the topographic layer, wherein the texture sampling values are calculated by a bilinear filtering mechanism and the texture offset points are obtained according to the scene transformation progress, so that the texture sampling value at each texture offset point accurately represents the weighted blend of the adjacent pixel pair adapted to the scene transformation progress. This is equivalent to replacing two samples with a single sample, improving the map sampling efficiency in the scene transformation process.
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
In some embodiments, the client is a game client, and the scene change from summer to winter is taken as an example. Before the terminal runs the client (in an offline state), the server acquires the grassland map of the topographic layer before the virtual scene change and the snowfield map of the topographic layer after the virtual scene change. The server inserts pixels of the snowfield map on the basis of the grassland map to obtain the composite map of the topographic layer, in which two pixels located at the same position in the grassland map and the snowfield map are adjacent in position and form an adjacent pixel pair. After an account logs in to the client run by the terminal (for example, an online game application), the client is in a running state (online state); the client receives a scene conversion request from the user and sends it to the server. The server determines the texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene, where the progress is determined from the duration between the real-time moment and the moment of receiving the scene conversion request, and samples the composite map of the topographic layer according to each texture offset point to obtain the texture sampling values of the topographic layer, the texture sampling values being calculated based on a bilinear filtering mechanism. Rendering is performed based on the texture sampling values of the composite map to obtain an image matching the transformation progress of the virtual scene, and the server returns the image to the terminal for presentation.
According to the embodiment of the present application, the maps of the topographic layers for different seasons are processed offline in the special pixel arrangement, so that when the terrain is rendered in real time during the seasonal transition, the characteristic of bilinear filtering halves the number of map samples per topographic layer, improving rendering efficiency.
In some embodiments, the display effect of the terrain in the virtual scene needs to shift gradually from one season to another within a set time (e.g., 3 seconds). Taking the gradual change from summer to winter as an example, referring to figs. 5A-5C, figs. 5A-5C are interface schematic diagrams of the image processing method of a virtual scene provided in an embodiment of the present application. Fig. 5A shows terrain dominated by grassland in summer; at a certain point in the seasonal change the terrain becomes a mixture of grass and snow as shown in fig. 5B, and finally it gradually becomes terrain dominated by snowfield as shown in fig. 5C.
In some embodiments, the maps of the topographic layers (Terrain layers) are processed offline. For example, for the topographic layer T1 and the topographic layer T2 of a certain terrain, the sizes of the Albedo maps of the topographic layer T1 and the topographic layer T2 are both n×n, where n is a positive integer, A(u, v) is the pixel of the Albedo map of the topographic layer T1 at coordinates (u, v), and B(u, v) is the pixel of the Albedo map of the topographic layer T2 at coordinates (u, v). The Albedo map of the topographic layer T1 and the Albedo map of the topographic layer T2 obtained through the offline processing are combined into a composite map M, whose length is twice that of the original Albedo map and whose width is the same, so the size of the composite map M is 2n×n.
Referring to fig. 6, fig. 6 is a schematic pixel arrangement diagram of the image processing method for a virtual scene according to an embodiment of the present application; the pixel A and the pixel B are arranged in the manner shown in fig. 6, where the pixel A is placed at the coordinates (2u, v) in the composite map M and the pixel B is placed at the coordinates (2u+1, v) in the composite map M, i.e., the pixel A and the pixel B are arranged as two adjacent pixels in the composite map. Referring to fig. 7, fig. 7 is a schematic diagram of the pixel arrangement of a composite map provided in an embodiment of the present application; the arrangement of each pixel in the composite map M is shown in fig. 7.
Referring to fig. 8, fig. 8 is an interface schematic diagram of the image processing method of a virtual scene provided in the embodiment of the present application; two actual original maps (each of size 512×512) are combined by the offline synthesis processing into one composite map (of size 1024×512). Each topographic layer of the terrain is processed once in the manner described above, so n composite maps are obtained for n topographic layers in one-to-one correspondence.
In some embodiments, the sampling position is calculated in real time according to the progress of the seasonal change while the game is running. Taking the two pixels A and B above as an example, at the real-time moment t the weights for mixing the pixel A and the pixel B are WA and WB, respectively. In the composite map M obtained after the offline processing, the result of mixing the pixel A and the pixel B with WA and WB as weights can be obtained by calculating an appropriate texture coordinate and sampling once, based on the bilinear filtering principle: bilinear filtering returns a weighted average of the color values of the 4 pixels adjacent to the sampling position, but because the sampling position is vertically aligned with the row of pixel centers, only the 2 horizontally adjacent pixels contribute. In the embodiment of the present application, a single sample therefore returns a weighted average of the color values of the 2 adjacent pixels near the sampling position, i.e., the result of mixing the pixel A and the pixel B with WA and WB as weights.
Referring to fig. 9, fig. 9 is an interface schematic diagram of the image processing method of a virtual scene provided in the embodiment of the present application. The two end points of the dotted line are the center points of the pixel A and the pixel B, and the M point is the sampling position. Performing texture sampling at the M point yields the color value of the M point, which is the result of mixing the pixel A and the pixel B with WA and WB as weights. The sampling position can be described by the distance x between the texture offset point and the pixel A, and the texture offset value x is calculated as x = t / T, where t is the real-time moment (the time elapsed since the transformation started) and T is the complete transformation time of the virtual scene.
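A small sketch of this runtime coordinate calculation follows, assuming the composite map interleaves columns as in fig. 6; the helper name and the clamping of x are illustrative choices.

```python
# Runtime sketch: choose the sampling u so that hardware bilinear
# filtering returns (1 - x) * A + x * B in a single texture fetch.
# The coordinate arithmetic is an assumption consistent with fig. 9.

def sample_u(u, composite_width, t, T):
    """u: column of the pair in the original a×a map (0 <= u < a).
    composite_width: 2a. t: elapsed time. T: total transformation time."""
    x = min(max(t / T, 0.0), 1.0)            # texture offset value, x = t / T
    center_a = 2 * u + 0.5                   # center of pixel A in the composite
    return (center_a + x) / composite_width  # normalized texture coordinate

# At t = 0 the sample sits on A (pure initial map); at t = T it sits on B.
print(sample_u(10, 1024, 1.5, 3.0))          # halfway: equal blend of A and B
```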
In some embodiments, for each topographic layer of the terrain, the same texture offset value is used to sample the composite map obtained after the offline processing, so as to obtain the color value at the sampling position for subsequent blending calculations.
Referring to fig. 10, fig. 10 is a flowchart of the image processing method of a virtual scene provided in an embodiment of the present application. In the offline processing stage: in step 1001, all the topographic layers are acquired; in step 1002, the Albedo map and the normal map of a topographic layer before the seasonal change are acquired; in step 1003, the Albedo map and the normal map of the topographic layer after the seasonal change are acquired; in step 1004, the Albedo maps before and after the seasonal change are combined in the set pixel arrangement to obtain a composite Albedo map; in step 1005, the normal maps before and after the seasonal change are combined in the set pixel arrangement to obtain a composite normal map. When there is an unprocessed topographic layer, step 1002 is executed again; when there is no unprocessed topographic layer, the offline processing ends. During game play: in step 1006, the texture offset value is calculated according to the seasonal change progress; in step 1007, the composite maps of all the topographic layers are acquired; in step 1008, the Albedo map is sampled according to the texture offset value; in step 1009, the normal map is sampled according to the texture offset value. When there is an unprocessed topographic layer, step 1008 is executed again; when there is no unprocessed topographic layer, step 1010 is executed, in which the blending processing and lighting processing of the plurality of topographic layers are performed based on the sampling results.
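The flow of fig. 10 can be condensed into the runnable toy sketch below, where the maps are single-pair lists and the "sampler" is a plain linear interpolation standing in for the hardware bilinear fetch; every name here is a placeholder, not an engine API.

```python
# Toy end-to-end sketch of the fig. 10 flow under heavy simplification.

def synthesize(before, after):                 # offline, steps 1002-1005
    """Interleave before/after pixels into a composite list."""
    return [p for pair in zip(before, after) for p in pair]

def sample(composite, x):                      # runtime, steps 1008-1009
    a, b = composite[0], composite[1]          # one adjacent pixel pair
    return (1 - x) * a + x * b                 # what bilinear returns at offset x

def render_frame(layers, t, T):
    x = min(t / T, 1.0)                        # step 1006
    samples = [(sample(l["albedo"], x), sample(l["normal"], x)) for l in layers]
    return [sum(s) / len(s) for s in zip(*samples)]  # step 1010: naive blend

layers = [{"albedo": synthesize([0.2], [0.9]), "normal": synthesize([0.5], [0.5])}]
print(render_frame(layers, 1.5, 3.0))          # halfway through the transition
```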
According to the embodiment of the present application, the maps of the different seasons of a topographic layer are processed offline, and the bilinear filtering characteristic is exploited when sampling the Albedo maps and normal maps during real-time rendering, so that a single sample yields the result of blending the two seasons. The number of map samples for a given topographic layer is thus halved, improving the rendering efficiency of the terrain.
Continuing with the description of an exemplary structure of the image processing device 455 of the virtual scene provided in the embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 2, the software modules of the image processing device 455 of the virtual scene stored in the memory 450 may include: the obtaining module 4551 is configured to obtain an initial map of the topographic layer before the virtual scene transformation, and obtain a result map of the topographic layer after the virtual scene transformation; the synthesis module 4552 is configured to insert pixels of the result map on the basis of the initial map to obtain a composite map of the topographic layer, where two pixels at the same position in the initial map and the result map are adjacent in position in the composite map and form an adjacent pixel pair; the sampling module 4553 is configured to determine a texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene, and sample the composite map of the topographic layer according to each texture offset point to obtain texture sampling values of the topographic layer, where the texture sampling values are calculated based on a bilinear filtering mechanism; and the rendering module 4554 is configured to perform rendering processing based on the texture sampling values of the composite map to obtain an image matching the transformation progress of the virtual scene.
In some embodiments, the acquiring module 4551 is further configured to: and acquiring an initial albedo map of the topographic map layer before the virtual scene transformation, and acquiring an initial normal map of the topographic map layer before the virtual scene transformation, wherein the initial albedo map and the initial normal map are used as initial maps.
In some embodiments, the acquiring module 4551 is further configured to: and obtaining the result albedo mapping of the topographic layer after the virtual scene transformation, obtaining the result normal mapping of the topographic layer after the virtual scene transformation, and taking the result albedo mapping and the result normal mapping as the result mapping.
In some embodiments, the synthesizing module 4552 is further configured to: perform the following for the first pixel with an abscissa n and an ordinate m in the initial map: taking 2n as the abscissa of the first pixel in the composite map and m as the ordinate of the first pixel in the composite map; perform the following for the second pixel with an abscissa n and an ordinate m in the result map: taking 2n+1 as the abscissa of the second pixel in the composite map and m as the ordinate of the second pixel in the composite map; wherein the lengths of the initial map and the result map are both N, the widths of the initial map and the result map are both M, N and M are integers greater than or equal to 2, the value range of n is greater than or equal to 0 and less than or equal to N-1, and the value range of m is greater than or equal to 0 and less than or equal to M-1; and generate the composite map based on the abscissa and ordinate of each first pixel of the initial map in the composite map and the abscissa and ordinate of each second pixel of the result map in the composite map.
In some embodiments, the synthesizing module 4552 is further configured to: the following processing is performed for each adjacent pixel pair: acquiring a connecting line between a first center of a first pixel and a second center of a second pixel in adjacent pixel pairs, and determining a first distance from the midpoint of the connecting line to the left edge of the composite map; multiplying the first distance by the value of the transformation progress, and normalizing the multiplied result to obtain a first corrected distance; acquiring an offset distance positively correlated to the first correction distance; and taking a point on the connecting line, which is an offset distance from the first center, as a texture offset point.
In some embodiments, the synthesizing module 4552 is further configured to: perform the following for the first pixel with an abscissa n and an ordinate m in the initial map: taking n as the abscissa of the first pixel in the composite map and 2m as the ordinate of the first pixel in the composite map; perform the following for the second pixel with an abscissa n and an ordinate m in the result map: taking n as the abscissa of the second pixel in the composite map and 2m+1 as the ordinate of the second pixel in the composite map; wherein the lengths of the initial map and the result map are both N, the widths of the initial map and the result map are both M, N and M are integers greater than or equal to 2, the value range of n is greater than or equal to 0 and less than or equal to N-1, and the value range of m is greater than or equal to 0 and less than or equal to M-1; and generate the composite map based on the abscissa and ordinate of each first pixel of the initial map in the composite map and the abscissa and ordinate of each second pixel of the result map in the composite map.
In some embodiments, the synthesizing module 4552 is further configured to: the following processing is performed for each adjacent pixel pair: acquiring a connecting line between a first center of a first pixel and a second center of a second pixel in adjacent pixel pairs, and determining a second distance from the middle point of the connecting line to the upper edge of the composite map; multiplying the second distance by the value of the transformation progress, and normalizing the multiplied result to obtain a second corrected distance; acquiring an offset distance positively correlated to the second correction distance; and taking a point on the connecting line, which is an offset distance from the first center, as a texture offset point.
In some embodiments, sampling module 4553 is further to: acquiring a connecting line between a first center of a first pixel and a second center of a second pixel in adjacent pixel pairs, wherein the first pixel is a pixel of an initial mapping, and the second pixel is a pixel of a result mapping; acquiring an offset distance positively correlated to the transformation progress; and determining a point on the connecting line, which is an offset distance from the first center, as a texture offset point of the adjacent pixel pair.
In some embodiments, sampling module 4553 is further to: before determining texture offset points of each adjacent pixel pair according to the transformation progress of the virtual scene, acquiring complete transformation time consumption of the virtual scene, and acquiring time length between real-time and starting time of scene transformation; and obtaining the ratio between the time length and the complete time consumption, and determining the ratio as a transformation progress.
In some embodiments, sampling module 4553 is further to: the following is performed for each composite map of the terrain map layer: and sampling the synthetic map according to each texture offset point to obtain a color value of each texture offset point in the synthetic map, and taking the color value of each texture offset point as a texture sampling value of the synthetic map.
In some embodiments, sampling module 4553 is further to: sampling the synthetic map according to each texture offset point, and executing the following processing for each texture offset point before obtaining the color value of each texture offset point in the synthetic map: acquiring a first color value of a first pixel and a second color value of a second pixel in the composite map, wherein the first pixel and the second pixel are derived from adjacent pixel pairs corresponding to texture offset points; acquiring a third distance between the texture offset point and the first pixel and a fourth distance between the texture offset point and the second pixel; acquiring a first weight inversely related to the third distance and a second weight inversely related to the fourth distance; acquiring a first multiplication result of the first weight and the first color value and a second multiplication result of the second weight and the second color value; and adding the first multiplication result and the second multiplication result to be a color value of the texture offset point.
In some embodiments, the synthesizing module 4552 is further configured to: inserting pixels of the result albedo map on the basis of the initial albedo map to obtain a composite albedo map of the topographic map layer; inserting pixels of the result normal map on the basis of the initial normal map to obtain a synthesized normal map of the topographic map layer; and taking the synthesized albedo map and the synthesized normal map as the synthesized map of the topographic map layer.
In some embodiments, rendering module 4554 is further to: when the number of topographical layers is plural, the following processing is performed for each topographical layer: performing rendering processing on texture sampling values of each adjacent pixel pair in the synthesized map of the topographic map layer to obtain a first image matched with the transformation progress of the virtual scene; and carrying out fusion processing on a plurality of first images corresponding to the topographic map layers one by one to obtain images matched with the transformation progress of the virtual scene.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the image processing method of the virtual scene according to the embodiment of the application.
The embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the image processing method of a virtual scene provided by the embodiments of the present application, for example, the image processing method of a virtual scene shown in figs. 4A to 4C.
In some embodiments, the computer-readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or it may be any of various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the present application, pixels of the result map are inserted on the basis of the initial map to obtain the composite map of the topographic layer, where two pixels at the same position in the initial map and the result map are adjacent in position in the composite map and form an adjacent pixel pair. The composite map is sampled according to the texture offset point of each adjacent pixel pair to obtain the texture sampling values of the topographic layer, where the texture sampling values are calculated by a bilinear filtering mechanism and the texture offset points are obtained according to the scene transformation progress, so that the texture sampling value at each texture offset point accurately represents the weighted blend of the adjacent pixel pair adapted to the scene transformation progress. This is equivalent to replacing two samples with a single sample, improving the map sampling efficiency in the scene transformation process.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (15)

1. A method of image processing of a virtual scene, the method comprising:
acquiring an initial mapping of a topographic layer before the virtual scene is transformed, and acquiring a result mapping of the topographic layer after the virtual scene is transformed;
inserting pixels of the result map on the basis of the initial map to obtain a composite map of the topographic map layer, wherein two pixels at the same position in the initial map and the result map are adjacent in position in the composite map and form adjacent pixel pairs;
determining a texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene, and sampling the synthesized map of the topographic map layer according to each texture offset point to obtain a texture sampling value of each adjacent pixel pair in the synthesized map, wherein the texture sampling value is calculated based on a bilinear filtering mechanism;
and performing rendering processing based on the texture sampling values of the composite map to obtain an image matched with the transformation progress of the virtual scene.
2. The method of claim 1, wherein the inserting pixels of the resulting map based on the initial map results in a composite map of the topographical layer, comprising:
The following processing is performed for a first pixel with an abscissa n and an ordinate m in the initial map: taking n as the abscissa of the first pixel in the composite map and 2m as the ordinate of the first pixel in the composite map;
the following processing is performed for a second pixel with an abscissa n and an ordinate m in the result map: taking n as the abscissa of the second pixel in the composite map and 2m+1 as the ordinate of the second pixel in the composite map;
the lengths of the initial map and the result map are both N, the widths of the initial map and the result map are both M, N and M are integers greater than or equal to 2, the value range of n is greater than or equal to 0 and less than or equal to N-1, and the value range of m is greater than or equal to 0 and less than or equal to M-1;
the composite map is generated based on the abscissa and the ordinate of each of the first pixels in the initial map in the composite map and the abscissa and the ordinate of each of the second pixels in the resulting map in the composite map.
3. The method of claim 1, wherein the inserting pixels of the resulting map based on the initial map results in a composite map of the topographical layer, comprising:
The following processing is performed for a first pixel with an abscissa n and an ordinate m in the initial map: taking 2n as the abscissa of the first pixel in the composite map and m as the ordinate of the first pixel in the composite map;
the following processing is performed for a second pixel with an abscissa n and an ordinate m in the result map: 2n+1 as the abscissa of the second pixel in the composite map and m as the ordinate of the second pixel in the composite map;
the lengths of the initial map and the result map are both N, the widths of the initial map and the result map are both M, N and M are integers greater than or equal to 2, the value range of n is greater than or equal to 0 and less than or equal to N-1, and the value range of m is greater than or equal to 0 and less than or equal to M-1;
the composite map is generated based on the abscissa and the ordinate of each of the first pixels in the initial map in the composite map and the abscissa and the ordinate of each of the second pixels in the resulting map in the composite map.
4. A method according to claim 3, wherein said determining a texture offset point for each of said adjacent pixel pairs according to a transformation schedule of said virtual scene comprises:
The following processing is performed for each of the adjacent pixel pairs:
acquiring a connecting line between a first center of the first pixel and a second center of the second pixel in the adjacent pixel pair, and determining a first distance from a midpoint of the connecting line to a left edge of the composite map;
multiplying the first distance with the numerical value of the transformation progress, and normalizing the multiplication result to obtain a first corrected distance;
acquiring an offset distance positively correlated to the first corrected distance;
and taking a point on the connecting line, the distance from the first center of which is the offset distance, as the texture offset point.
5. The method of claim 1, wherein said determining texture offset points for each of said adjacent pixel pairs according to a transformation schedule of said virtual scene comprises:
the following processing is performed for each of the adjacent pixel pairs:
acquiring a connection line between a first center of a first pixel and a second center of a second pixel in the adjacent pixel pair, wherein the first pixel is a pixel of the initial map and the second pixel is a pixel of the result map;
acquiring an offset distance positively correlated to the transformation progress;
And determining a point on the connecting line, which is the offset distance from the first center, as a texture offset point of the adjacent pixel pair.
6. The method of claim 1, 4 or 5, wherein prior to determining the texture offset point for each of the adjacent pixel pairs according to the transformation schedule of the virtual scene, the method further comprises:
acquiring the complete transformation time consumption of the virtual scene, and acquiring the duration between the real-time moment and the starting moment of the scene transformation;
and obtaining the ratio between the time length and the complete time consumption, and determining the ratio as the transformation progress.
7. The method of claim 1, wherein the sampling the composite map of the terrain layer according to each texture offset point to obtain texture sample values of each adjacent pixel pair in the composite map comprises:
the following is performed for each of the synthetic maps of the terrain map layer:
and sampling the synthesized map according to each texture offset point to obtain a color value of each texture offset point in the synthesized map, and taking the color value of each texture offset point as a texture sampling value of the synthesized map.
8. The method of claim 7, wherein before sampling the composite map according to each of the texture offset points to obtain a color value for each of the texture offset points in the composite map, the method further comprises:
the following is performed for each of the texture offset points:
acquiring a first color value of a first pixel and a second color value of a second pixel in the synthesis map, wherein the first pixel and the second pixel are derived from adjacent pixel pairs corresponding to the texture offset point;
acquiring a third distance between the texture offset point and the first pixel and a fourth distance between the texture offset point and the second pixel;
acquiring a first weight inversely related to the third distance and a second weight inversely related to the fourth distance;
acquiring a first multiplication result of the first weight and the first color value and a second multiplication result of the second weight and the second color value;
and adding the first multiplication result and the second multiplication result to be used as a color value of the texture offset point.
9. The method of claim 1, wherein:
The obtaining the initial mapping of the topographic map layer before the virtual scene transformation comprises the following steps:
acquiring an initial albedo map of the topographic layer before the virtual scene transformation, acquiring an initial normal map of the topographic layer before the virtual scene transformation, and taking the initial albedo map and the initial normal map as the initial map;
the obtaining the result map of the topographic map layer after the virtual scene transformation comprises the following steps:
and obtaining a result albedo map of the topographic layer after the virtual scene is transformed, obtaining a result normal map of the topographic layer after the virtual scene is transformed, and taking the result albedo map and the result normal map as the result map.
10. The method of claim 9, wherein the inserting pixels of the resulting map based on the initial map to obtain a composite map of the topographical layer comprises:
inserting pixels of the result albedo map on the basis of the initial albedo map to obtain a composite albedo map of the topographic map layer;
inserting pixels of the result normal map on the basis of the initial normal map to obtain a synthesized normal map of the topographic map layer;
And taking the synthesized albedo map and the synthesized normal map as synthesized maps of the topographic map layer.
11. The method of claim 1, wherein performing a rendering process based on texture sample values of the composite map to obtain an image matching a transformation progress of the virtual scene comprises:
when the number of the topographic layers is plural, the following processing is performed for each of the topographic layers:
performing rendering processing on the texture sampling value of each adjacent pixel pair in the synthesized map of the topographic map layer to obtain a first image matched with the transformation progress of the virtual scene;
and carrying out fusion processing on the plurality of first images corresponding to the topographic map layers one by one to obtain images matched with the transformation progress of the virtual scene.
12. An image processing apparatus for a virtual scene, the apparatus comprising:
the acquisition module is used for acquiring an initial mapping of the topographic layer before the virtual scene transformation and acquiring a result mapping of the topographic layer after the virtual scene transformation;
a synthesis module, configured to insert pixels of the result map on the basis of the initial map, to obtain a synthesis map of the topographic map layer, where two pixels located at the same position in the initial map and the result map are adjacent in position in the synthesis map and form an adjacent pixel pair;
The sampling module is used for determining a texture offset point of each adjacent pixel pair according to the transformation progress of the virtual scene, and sampling the synthesized map of the topographic map layer according to each texture offset point to obtain a texture sampling value of each adjacent pixel pair in the synthesized map, wherein the texture sampling value is calculated based on a bilinear filtering mechanism;
and the rendering module is used for executing rendering processing based on the texture sampling value of the composite map to obtain an image matched with the transformation progress of the virtual scene.
13. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor for implementing the image processing method of a virtual scene according to any of claims 1 to 11 when executing executable instructions stored in said memory.
14. A computer readable storage medium storing executable instructions which when executed by a processor implement the method of image processing of a virtual scene according to any one of claims 1 to 11.
15. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of image processing of a virtual scene as claimed in any one of claims 1 to 11.
CN202210805313.9A 2022-07-08 2022-07-08 Image processing method, image processing apparatus, electronic device, storage medium, and program product Pending CN117409128A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination