CN115512022A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN115512022A
CN115512022A (application CN202211193764.8A)
Authority
CN
China
Prior art keywords
target
target object
map
determining
animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211193764.8A
Other languages
Chinese (zh)
Inventor
包泽华
黎小凤
杨继昌
董琦
陈憬夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211193764.8A
Publication of CN115512022A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present disclosure provides a data processing method and apparatus, an electronic device, and a storage medium. The method includes: generating a target topographic map based on a trigger operation in a target area, where the target topographic map includes at least one grid cell; when a trigger operation on a target object to update the display content in the target topographic map is detected, determining the target grid cell corresponding to the trigger operation; and rendering the target object onto the target grid cell based on an object parameter of the target object and an environment parameter of the environment to which the target object belongs, so as to update the display content. The technical solution of the embodiment makes it possible to construct the target topographic map in the display interface in real time even when the performance parameters of the terminal device are low, improves the interactivity between the user and the display interface, constructs display content that matches the user's needs, and improves the user experience.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of virtual technologies, more and more applications can construct a virtual model, obtain a target model corresponding to an object, and render the object into a display interface based on that target model.
In the prior art, one or more objects are typically rendered into a display interface. When the performance of the terminal device is limited, a virtual scene corresponding to each object cannot be constructed; and even when such a scene can be constructed, only a pre-built virtual scene can be rendered into the display interface, and it cannot be updated based on user requirements, which may lead to a poor user experience.
Disclosure of Invention
The present disclosure provides a data processing method, an apparatus, an electronic device, and a storage medium, so as to construct a target topographic map in real time while improving the interactivity between the user and the display interface, building display content that matches the user's needs, and improving the user experience.
In a first aspect, an embodiment of the present disclosure provides a data processing method, where the method includes:
generating a target topographic map based on a trigger operation in a target area, where the target topographic map includes at least one grid cell;
when a trigger operation on a target object to update the display content in the target topographic map is detected, determining a target grid cell corresponding to the trigger operation; and
rendering the target object onto the target grid cell based on an object parameter of the target object and an environment parameter of the environment to which the target object belongs, so as to update the display content.
In a second aspect, an embodiment of the present disclosure further provides a data processing apparatus, where the apparatus includes:
a target topographic map generating module, configured to generate a target topographic map based on a trigger operation in a target area, where the target topographic map includes at least one grid cell;
a target grid cell determining module, configured to determine a target grid cell corresponding to a trigger operation when the trigger operation on a target object to update the display content in the target topographic map is detected; and
a display content updating module, configured to render the target object onto the target grid cell based on an object parameter of the target object and an environment parameter of the environment to which the target object belongs, so as to update the display content.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device configured to store one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the data processing method according to any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a storage medium containing computer-executable instructions that, when executed by a computer processor, perform the data processing method according to any embodiment of the present disclosure.
According to the technical solution of the embodiments of the present disclosure, a target topographic map is generated based on a trigger operation in a target area; then, when a trigger operation on a target object to update the display content in the target topographic map is detected, the target grid cell corresponding to that trigger operation is determined; finally, the target object is rendered onto the target grid cell based on an object parameter of the target object and an environment parameter of its environment, so as to update the display content.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a data processing method provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a target topographic map provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a target topographic map provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a three-dimensional coordinate system provided by an embodiment of the disclosure;
FIG. 5 is a diagram illustrating an index value determination method according to an embodiment of the disclosure;
FIG. 6 is a flow chart of a data processing method provided by an embodiment of the disclosure;
FIG. 7 is a flow chart of a data processing method provided by an embodiment of the disclosure;
FIG. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including but not limited to". The term "based on" means "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly inform them that the requested operation requires acquiring and using their personal information. The user can then autonomously decide, based on the prompt, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent via a pop-up window, in which the prompt may be presented as text. The pop-up window may also carry a selection control with which the user chooses to "agree" or "disagree" to providing personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
It will be appreciated that the data involved in this technical solution, including but not limited to the data itself and its acquisition or use, should comply with the requirements of the corresponding laws, regulations, and related provisions.
Before the technical solution is introduced, an example application scenario may be described. The technical solution of the embodiments of the present disclosure can be applied to any scenario in which a virtual space or a virtual planet is created in a display interface based on a pre-created special-effect prop.
It should be noted that the technical solution of the embodiments of the present disclosure can be implemented based on at least one special-effect prop pre-generated in an application, without requiring a high-performance engine. For example, when a virtual space or a virtual planet is created in a related application, a special-effect prop for creating the virtual planet may be generated in advance. When a trigger on the special-effect prop is detected, a topographic map containing at least one grid cell is generated in real time in the current display interface; when a trigger operation on a target object is then detected, the target object is rendered into the topographic map to update its display content, thereby creating a virtual space or planet that matches the user's needs.
Fig. 1 is a flowchart of a data processing method provided by an embodiment of the present disclosure. The embodiment is suitable for generating a target topographic map based on a trigger operation in a target interface and rendering a target object into that map based on a trigger operation on the object. The method may be executed by a data processing apparatus, which may be implemented in software and/or hardware and, optionally, by an electronic device such as a mobile terminal, a PC terminal, or a server.
The apparatus executing the data processing method provided by the embodiments of the present disclosure may be integrated into application software that supports special-effect video processing, and that software may be installed in an electronic device such as a mobile terminal or a PC terminal. The application software may be any image/video-processing software; its specific form is not detailed here, as long as it can process images or videos. The method may also be implemented by a dedicated application for adding and displaying special effects, or be integrated into a corresponding page, so that a user can process special-effect video through a page integrated in a PC terminal.
In the embodiments of the present disclosure, a benefit of integrating the solution in this manner is that, even when the performance of the terminal device is limited, that is, without a dedicated game engine, the target terrain can be created and the corresponding target objects can be placed on it.
As shown in fig. 1, the method includes:
and S110, generating a target topographic map based on the trigger operation in the target area.
In this embodiment, the target area is the area of the display interface in which a virtual planet is to be created; it may be any area of the interface in which a virtual planet can be created. For example, the target area may be the center of the display interface. In practice, a trigger control can be placed in advance in any area of the display interface; when a trigger operation on the control in the target area is detected, the operation can be responded to by creating a target topographic map centered on the target area with an arbitrary radius. The target topographic map is an image representing the basic contour of a planet's ground. In practice, the terrain displayed in the target topographic map may be a basic terrain, and a virtual planet meeting the user's requirements can be obtained by further processing the content displayed in the map.
The target topographic map includes at least one grid cell. Optionally, grid cells fall into two types. In the first, a grid cell taken as the center is at the same distance from all of its adjacent grid cells; this may be called a symmetric grid cell, for example a hexagonal grid cell. In the second, a grid cell taken as the center is at different distances from its adjacent grid cells, for example a quadrilateral grid cell.
Different types of grid cells, and the corresponding benefits when creating the target topographic map, are described next in connection with fig. 2 and 3 respectively.
For example, as shown in fig. 2, an asymmetric grid cell is illustrated using quadrilateral mesh cells. The target topographic map may include 9 quadrilateral mesh cells, numbered 1 through 9. The distances from quadrilateral mesh cell 6, at the center, to its adjacent quadrilateral mesh cells differ, so the tiling positions of the adjacent cells are determined in different ways: when determining the tiling position of each adjacent cell, the corresponding distance information must first be determined, and the appropriate position-determination method selected based on that distance, after which the position of every quadrilateral mesh cell can be determined. The benefit of this arrangement is that, provided the terminal device's performance meets certain conditions, whether a grid cell can be added at a given position can be decided based on the distance and direction between grid cells.
Illustratively, as shown in fig. 3, a symmetric grid cell is illustrated using hexagonal mesh cells. The target topographic map may include 7 hexagonal mesh cells, numbered 1 through 7. With hexagonal mesh cell 1 at the center, every other hexagonal mesh cell is adjacent to it, and the distance from the center point of cell 1 to the center point of each adjacent cell is the same. In this case, when building the target topographic map from at least one hexagonal mesh cell, there is no need to determine the distance between cells; only the direction information of each cell during tiling is required, and the tiling position of each hexagonal mesh cell in the target topographic map can be determined from that direction information alone. The advantage of this arrangement is that the tiling position of each grid cell can be determined from direction information only, which saves computation and enables the target topographic map to be generated quickly.
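The direction-only tiling described above can be sketched in Python. This is an illustrative reconstruction, not the patent's implementation: the six fixed direction vectors and the pointy-top layout function are assumptions. The sketch computes the tiling positions of the six cells adjacent to a central hexagonal grid cell and confirms that each neighbor sits at the same distance from the center, so no per-pair distance computation is needed.

```python
import math

# Axial-coordinate directions for a pointy-top hexagonal grid. Because
# every neighbour is equidistant from the centre, a neighbour's tiling
# position follows from its direction alone.
HEX_DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def hex_to_pixel(q, r, size=1.0):
    """Map an axial hex coordinate to a 2D tiling position."""
    x = size * (math.sqrt(3) * q + math.sqrt(3) / 2 * r)
    y = size * (1.5 * r)
    return (x, y)

def neighbor_positions(q, r, size=1.0):
    """Tiling positions of the six cells adjacent to (q, r)."""
    return [hex_to_pixel(q + dq, r + dr, size) for dq, dr in HEX_DIRECTIONS]

# All six neighbours of the centre cell lie at the same distance.
center = hex_to_pixel(0, 0)
dists = [math.dist(center, p) for p in neighbor_positions(0, 0)]
```

For a unit-size pointy-top layout, every neighbor distance equals sqrt(3), which is why direction information alone suffices to place adjacent cells.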
In a specific implementation, when a trigger operation on the target area displayed in the display interface is detected, the operation is responded to as follows: with the target area as the center, the tiling position of a first grid cell is determined; the direction information of the grid cells adjacent to the first grid cell is then determined, so that the tiling position of each adjacent cell can be derived from its direction information; and the operation of determining the tiling positions of adjacent cells is repeated, taking any placed grid cell as the new first grid cell, until the target topographic map is generated.
It should be noted that a threshold on the number of grid cells in the target topographic map may be preset. During construction, when the number of grid cells displayed in the display interface is detected to reach the preset threshold, the topographic map created at that point is taken as the target topographic map.
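The repeat-until-threshold loop can be sketched as a breadth-first expansion from the first grid cell. All names here are hypothetical; the patent does not specify a traversal order, so breadth-first (ring by ring) is an assumed choice.

```python
from collections import deque

HEX_DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def generate_terrain(cell_count_threshold):
    """Tile outward from the first grid cell (placed at the target
    area) until the preset cell-count threshold is reached."""
    placed = {(0, 0)}            # first grid cell, centred on the target area
    frontier = deque([(0, 0)])
    while frontier and len(placed) < cell_count_threshold:
        q, r = frontier.popleft()
        # Each placed cell becomes the new "first" cell whose
        # neighbours are tiled from direction information alone.
        for dq, dr in HEX_DIRECTIONS:
            n = (q + dq, r + dr)
            if n not in placed and len(placed) < cell_count_threshold:
                placed.add(n)
                frontier.append(n)
    return placed
```

With a threshold of 7, this yields exactly the center-plus-six-neighbors layout of fig. 3.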
S120: when a trigger operation on a target object to update the display content in the target topographic map is detected, determining the target grid cell corresponding to the trigger operation.
In this embodiment, the target object may be an object already displayed in the display interface or an object added to it. In the application development stage, a number of objects can be preset as target objects, so that each target object can be displayed at its corresponding position in the display interface at run time; alternatively, in actual use, objects can be added one after another based on user requirements and displayed in the display interface as target objects. Optionally, the target object includes at least one of trees, flowers, grass, tiles, water drops, users, houses, rocks, birds, and beasts.
Here, a user may be the user avatar associated with the target topographic map, or a virtual model constructed from the user, and so on. A house may be an image representing the basic shape of a house. In the application development stage, after the type of a target object is determined, several target objects may be generated in advance based on the object's real-world shape, so that each target object can be displayed in the display interface at run time. The advantage of this arrangement is that it enriches the variety of the target topographic map while allowing the map to be built according to user requirements, improving the user experience.
In this embodiment, the display content may be content already displayed in the target topographic map or content subsequently added to it. When the target topographic map is generated, the corresponding display content may be the grid cells it contains; when the map is processed later, the display content includes not only the grid cells from the generation stage but also content newly added to the map. Optionally, updating the display content includes updating the display height of at least one grid cell in the target topographic map, or adding the target object to at least one grid cell of the map.
Updating the display height of at least one grid cell may be implemented by adding or removing one or more grid cells at that cell, or by increasing or decreasing the cell's display height by a preset ratio; embodiments of the present disclosure do not specifically limit this. Adding the target object to at least one grid cell of the target topographic map means displaying the target object in that grid cell.
In practice, if the display content to be updated is the display height of at least one grid cell, the corresponding trigger operation may be to input a further trigger on the grid cells whose height is to be updated after triggering the target object; if the display content is the addition of a target object to at least one grid cell, the corresponding trigger operation may be to drag the target object onto the grid cells to be updated while triggering it. The benefit of this arrangement is that different trigger operations map to different display-content updates, which increases the variety and flexibility of the display content and improves the display effect in the target topographic map.
For example, when the target object corresponding to the trigger operation is a floor tile, the corresponding display content may be to update the display height of at least one grid cell; when the target object corresponding to the trigger operation is a tree, the corresponding display content can be added with the tree in at least one grid unit.
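The two height-update mechanisms mentioned above (stacking or removing unit cells, and scaling by a preset ratio) can be sketched as follows. The function name, parameters, and default ratio are illustrative assumptions, not taken from the patent.

```python
def update_display_height(heights, cell, mode, amount=1, ratio=1.2):
    """Update a grid cell's display height in one of the two described
    ways: stacking/removing unit cells, or scaling by a preset ratio."""
    if mode == "stack":
        # Add or subtract whole grid cells; height never goes negative.
        heights[cell] = max(0, heights.get(cell, 0) + amount)
    elif mode == "scale":
        # Increase or decrease the display height by a preset ratio.
        heights[cell] = heights.get(cell, 1.0) * ratio
    else:
        raise ValueError(f"unknown mode: {mode}")
    return heights[cell]
```

A tile trigger would map to the "stack" branch, while a ratio-based adjustment would use "scale".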
It should be noted that, once the target topographic map is obtained, it contains at least one grid cell, and the rendering parameters of the grid cells may all be identical; in that case, in subsequent use, the grid cell the user needs cannot be located.
To address this, after the target topographic map is generated, the method further includes: determining an index value for each grid cell in the target topographic map based on a pre-established cube coordinate system, so that the target grid cell corresponding to a trigger operation can later be determined from its index value.
It should be noted that, because the target topographic map is formed from at least one grid cell and each grid cell faces six directions, the position of a grid cell cannot be described with a Cartesian coordinate system, that is, a coordinate system with only an X axis and a Y axis. A cube coordinate system is therefore introduced, and the position information of each grid cell is determined based on it.
In this embodiment, when a trigger operation on the target area is detected, each axis of a three-dimensional coordinate system is mapped into the two-dimensional plane, with the grid cell corresponding to the target area as the origin, to obtain the cube coordinate system. The cube coordinate system includes an X axis, a Y axis, and a Z axis. After the target topographic map is generated, the coordinate information of each grid cell can be determined from this pre-established cube coordinate system. For example, fig. 4 shows a pre-established cube coordinate system; the coordinate values of each grid cell can be determined from it, so that the position of each grid cell is represented by its coordinate values. For example, the coordinates of the grid cell at the top right corner are (0, -3, -3).
Further, after the coordinate information of each grid cell is determined, the index value of each grid cell can be determined from the arrangement rule of the coordinate information, so that when a trigger operation on the target object is detected, the target grid cell corresponding to the operation can be determined from its index value. An index value is a number characterizing the position of a grid cell in the sequence of cells in the target topographic map. The target grid cell is the grid cell that is updated based on the target object corresponding to the trigger operation.
For example, as shown in figs. 4 and 5, the index value of the grid cell at the coordinate origin (0, 0) may be set to 0, and the index values of the other grid cells may be determined in turn from the arrangement rule and from the direction information of adjacent grid cells. As shown in fig. 5, taking the cell at the origin as the starting point, the grid cells to its right receive index values 1, 2, and 3, and the remaining index values are assigned in the order of the cells' coordinate information, yielding an index value for every grid cell. The advantage of this arrangement is that both the sequence and the direction information of each grid cell in the target topographic map are determined, so the target grid cell can be located quickly and the display content in the map can be updated quickly.
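One possible realization of cube coordinates and index assignment is sketched below. The cube-coordinate identity (x + y + z = 0) is standard for hex grids; the specific arrangement rule (ring by ring outward from the origin) is an assumption for illustration and may differ from the rule in fig. 5.

```python
def cube_coordinates(q, r):
    """Cube coordinate (x, y, z) of an axial hex cell; x + y + z == 0."""
    return (q, -q - r, r)

def assign_index_values(cells):
    """Assign sequential index values to grid cells: the origin cell
    gets index 0, and remaining cells are ordered ring by ring outward
    (one possible arrangement rule; the patent's exact rule may differ)."""
    def ring(cell):
        x, y, z = cube_coordinates(*cell)
        return max(abs(x), abs(y), abs(z))   # hex distance from the origin
    ordered = sorted(cells, key=lambda c: (ring(c), c))
    return {cell: i for i, cell in enumerate(ordered)}
```

Because the three cube components always sum to zero, any one component is redundant, which is what lets the three-axis system live in a two-dimensional plane.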
In practical applications, after the index value of each grid cell is determined, the target grid cell can be determined from an index value; the determination process is described next.
Optionally, determining the target grid cell corresponding to the trigger operation includes: determining target space information from the screen coordinate information corresponding to the trigger operation; and determining the target grid cell from the index value corresponding to the target space information.
In this embodiment, the target objects and the target topographic map are displayed in the display interface, and a screen coordinate system corresponding to the current display interface can be established. When a trigger operation on a target object to update the display content in the target topographic map is detected, the screen coordinate information of the trigger point in the screen coordinate system is determined. That screen coordinate information is then converted into spatial coordinate information, i.e., the target space information, by a spatial coordinate transformation; the corresponding index value is determined from the coordinate values in the target space information; and the grid cell corresponding to that index value is found in the target topographic map and taken as the target grid cell. The advantage of this arrangement is that the target grid cell corresponding to the trigger operation can be located quickly, so the target object can be rendered onto it quickly.
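A minimal sketch of the screen-coordinate-to-target-cell lookup, under stated assumptions: the patent does not specify the coordinate transformation, so a standard pointy-top hex layout inversion with cube-coordinate rounding is used here, and the full screen-to-map-space transform is omitted. All names are illustrative.

```python
import math

def hex_round(q, r):
    """Round fractional axial coordinates to the nearest hex cell by
    rounding in cube coordinates and fixing the largest rounding error."""
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return (rx, rz)

def screen_to_target_cell(sx, sy, index_by_cell, size=1.0):
    """Map a trigger point's screen coordinates to the index value of
    the grid cell under it (hypothetical pointy-top layout inversion;
    a real pipeline would first transform screen space into map space)."""
    q = (math.sqrt(3) / 3 * sx - sy / 3) / size
    r = (2 / 3 * sy) / size
    return index_by_cell.get(hex_round(q, r))
```

The rounding step is needed because a trigger point generally lands inside a cell, not at its exact center, so the fractional coordinates must snap to the nearest valid cell before the index lookup.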
And S130, rendering the target object to the target grid unit based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs so as to update the display content.
In this embodiment, the object parameters may be parameters applied to the target object in the rendering process. Optionally, the object parameters include the normal line, tangent line, and sub-tangent line constituting each surface of the target object. For example, when the target object is a floor tile corresponding to a grid cell, the object parameters of the floor tile may be the normal line, tangent line, and sub-tangent line constituting the surface of the floor tile. The environment to which the target object belongs may be the environment corresponding to the target object under different illumination. For example, the environment may include the external environment when the sun rises, the external environment when the sun sets, the external environment when there is no sunlight, and the like. Optionally, the environment parameters include illumination parameters. The illumination parameters may be the illumination intensity or the illumination angle.
It should be noted that, in the real world, when at least one object needs to be newly added on any ground, the surface roughness and brightness of the newly added object differ as the sun reaches different irradiation angles. In order to simulate, in the display interface, the rendering effect of the target object with different surface roughness and brightness under different illumination parameters, the object parameters of the target object and the environment parameters of the environment to which the target object belongs may be introduced, so that the rendering process of the target object is implemented based on these parameters.
Optionally, rendering the target object onto the target grid cell based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs, includes: determining rendering parameters of each surface according to a normal line, a tangent line and a secondary tangent line corresponding to each surface of the target object and environment parameters of the environment to which the target object belongs; the target object is rendered onto the target grid cell based on the rendering parameters.
In this embodiment, the rendering parameter may be a parameter for setting and adjusting a rendering condition of the target object in the rendering process. Optionally, the rendering parameters include surface roughness and brightness parameters. The surface roughness may be a parameter for characterizing the degree of unevenness of each surface of the target object.
In practical application, the object parameters, such as the normal line, tangent line, and sub-tangent line of each surface of the target object, may be predetermined and stored in correspondence with the target object. When a trigger operation on the target object is detected, the corresponding object parameters may be called, and the environment parameters of the environment to which the target object belongs may be determined at the same time. Further, the object parameters of each surface and the environment parameters are processed by the central processing unit of the terminal device to obtain the rendering parameters of each surface, and each rendering parameter is input into the graphics processor, so that the process of rendering the target object onto the target grid cell based on the rendering parameters can be realized. The advantages of such an arrangement are: the rendering effect of the target object can be closer to the real effect, thereby improving the display effect of the display content of the target topographic map on the display interface and improving the use experience of the user.
Optionally, determining the rendering parameters of each surface according to the normal line, the tangent line, the secondary tangent line corresponding to each surface of the target object, and the environment parameters of the environment to which the target object belongs, includes: for each surface, determining a target matrix based on the normal line, the tangent line and the secondary tangent line of the current surface; determining the adjusted normal information based on the target matrix and the normal map; based on the normal information and the environment parameter, a current rendering parameter is determined.
It should be noted that, when the target object includes a plurality of surfaces, the rendering parameter determination process for each surface is the same, and therefore, one of the surfaces may be taken as an example for description.
In this embodiment, after the normal line, tangent line, and sub-tangent line of the current surface are obtained, a corresponding matrix may be constructed based on the coordinate information of the normal line, tangent line, and sub-tangent line, and this matrix may be used as the target matrix. The normal map may be a map constructed in advance for characterizing the normal information of each concave-convex surface of the target object. In an actual application process, to render the target object into the display interface, a target model corresponding to the target object may first be constructed, and a corresponding normal map may be generated based on the target model, so that the target object is rendered based on the pixel values included in the normal map. In this embodiment, the normal map may be a map in which the normal of each model surface is marked by RGB colors. The normal map stores the normal direction of each model surface in the tangent space, i.e., the Z-axis is the normal direction of the model surface, the X-axis is the tangent direction of the model surface, and the Y-axis is the sub-tangent direction of the model surface.
In specific implementation, for each surface, the normal line, tangent line, and sub-tangent line of the current surface may be determined, and a target matrix may be constructed based on the tangent line, normal line, and sub-tangent line. Meanwhile, the normal map corresponding to the target object may be called, the target matrix and the normal map may be compared, and the difference information between them may be extracted, so as to obtain the adjusted normal information. Accordingly, the rendering parameters of the current surface can be determined based on the normal information and the environment parameters. When the rendering parameters of each surface of the target object are obtained, the target object can be rendered into the target grid cell based on the rendering parameters, so that the display content of the target topographic map can be updated. The advantages of such an arrangement are: the rendering parameters of each surface of the target object can be accurately determined, so that the target object is rendered based on each rendering parameter and the environment parameter, and the rendered target object is close to the display effect in the real world.
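Determining adjusted normal information from a target (tangent/sub-tangent/normal) matrix and a normal-map pixel is commonly implemented as a tangent-space transform. The following NumPy sketch assumes the standard [0, 1] to [-1, 1] normal-map pixel encoding, which the embodiment does not spell out; the function name and encoding are assumptions:

```python
import numpy as np

def adjusted_normal(normal, tangent, sub_tangent, map_rgb):
    """Transform a tangent-space normal sampled from the normal map
    into the surface's coordinate frame.

    `map_rgb` is an RGB pixel value in [0, 1]; it is first remapped
    to a direction in [-1, 1] (the usual normal-map encoding), then
    multiplied by the target matrix built from the current surface's
    tangent, sub-tangent, and normal.
    """
    target_matrix = np.column_stack([tangent, sub_tangent, normal])
    n_tangent_space = np.asarray(map_rgb) * 2.0 - 1.0   # decode map pixel
    n = target_matrix @ n_tangent_space                  # to surface frame
    return n / np.linalg.norm(n)                         # renormalize
```

The resulting adjusted normal, combined with the illumination parameters, would then feed the surface-roughness and brightness computation for the current surface.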
According to the technical scheme of the embodiment of the disclosure, the target topographic map is generated based on the trigger operation in the target area, further, when the trigger operation on the target object is detected to update the display content in the target topographic map, the target grid unit corresponding to the trigger operation is determined, and finally, the target object is rendered on the target grid unit based on the object parameter of the target object and the environment parameter of the environment to update the display content.
Fig. 6 is a schematic flow chart of a data processing method provided by an embodiment of the present disclosure, and based on the foregoing embodiment, a target animation video corresponding to a target object may also be displayed in a target topographic map. The specific implementation manner can be referred to the technical scheme of the embodiment. The technical terms that are the same as or corresponding to the above embodiments are not repeated herein.
As shown in fig. 6, the method specifically includes the following steps:
and S210, generating a target topographic map based on the trigger operation in the target area.
And S220, when the trigger operation on the target object is detected to update the display content in the target topographic map, determining the target grid cell corresponding to the trigger operation.
And S230, rendering the target object to the target grid unit based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs so as to update the display content.
S240, if the target animation video corresponding to the target object is displayed in the target topographic map, the target animation video is determined based on the predetermined animation map.
In this embodiment, the target animation video may be a video for representing the animation display effect of the target object at different timestamps. For example, when the target object is a tree, the corresponding target animation video may be a video of the growth process of the tree from a small seedling to a large sapling. The animation map may be an image containing the rendering information corresponding to the target object. The animation map may include a plurality of pixel points: the rows in the animation map may be used to represent the corresponding video frames, the columns in the animation map may be used to represent the model vertices on the target model corresponding to the target object, and the pixel values of the pixel points in the animation map may be used to represent the spatial position information of the model vertices in the corresponding video frames.
In practical application, the animation map corresponding to each target object may be created in advance, so that when any target object in the target topographic map satisfies the animation playing condition, the animation map corresponding to the target object may be called, and the target animation video corresponding to the target object may be quickly generated based on the animation map and displayed in the target topographic map. Optionally, the animation map is determined based on a preset animation video. The preset animation video may be a preset video used for reflecting the expected animation effect of the target object at different timestamps. The preset animation video includes at least two preset video frames. For example, when the target object is a tree, the corresponding preset animation video may be composed of video frames constructed based on the growth information of the tree at each time point, for example, video frames constructed from the growth process of a small sapling growing into a large sapling and then into a towering tree, as the preset video frames. The benefit of this arrangement is: by storing the animation data of each preset video frame in the preset animation video into one animation map, the effect of quickly generating the target animation video corresponding to the target object can be realized on the premise of reducing the amount of stored data.
The following can explain a specific creation process of the animation map.
Optionally, determining the animation map based on a preset animation video includes: for each preset video frame in the preset animation video, determining the display position information of the model vertex of each grid sub-model on the target model corresponding to the target object in the current preset video frame; determining the target line number of the current preset video frame in the animation map; and converting the display position information into corresponding pixel values to be filled into the animation map based on the target row number and the corresponding column number of the model vertex in the animation map.
It should be noted that, the total number of model vertices of the target model may be predetermined, and the model vertices may be sequentially arranged according to a preset sequence, and a corresponding serial number is set, at this time, the total number of columns of the animation map may correspond to the total number of model vertices of the target model, and the arrangement sequence of each column in the animation map may be determined according to the serial numbers corresponding to the model vertices.
In this embodiment, the target model may be a 3D model constructed based on the target object. The target model may be composed of at least one mesh sub-model, and each mesh sub-model may be composed of at least three vertices, which may serve as the model vertices. In the actual application process, the number of model vertices of the target model corresponding to the target object may be considered consistent at each timestamp; at this time, different model structures may be determined by adjusting the display position of each model vertex, so as to obtain the target model of the target object at different timestamps.
In this embodiment, the display position information may be the spatial position coordinates of each model vertex in the corresponding preset video frame. The target line number may be the number of the row in the animation map to which the current preset video frame corresponds. Specifically, the playing position of the current preset video frame in the preset animation video may be determined, so that the row in the animation map to which the current preset video frame corresponds is determined based on the playing position, that is, the target line number may be obtained.
In practical application, for each preset video frame, the spatial position coordinates of the model vertices of each mesh sub-model on the target model in the current preset video frame may be determined to obtain the display position information of each model vertex. Then, the target line number of the current preset video frame in the animation map is determined according to the playing position of the current preset video frame in the preset animation video. Further, the column number corresponding to each model vertex in the animation map is determined based on the serial number identifier of each model vertex contained in the current preset video frame; at least one pixel point can then be determined based on the target line number and each column number; and finally, each piece of display position information is converted into a pixel value in a linear mapping manner, and each pixel value is filled into the corresponding pixel point, so that the animation map can be obtained. For example, if the current preset video frame is the 3rd frame in the preset animation video and display position information is determined for 30 model vertices in that frame, the target row may be the 3rd row, and the columns may be the 30 columns corresponding to those vertices in the animation map. The benefit of this arrangement is: the amount of stored data can be reduced, so that the response speed of the terminal device in the process of rendering the target animation video can be improved.
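The baking process above (rows = frames, columns = vertices, pixel value = linearly mapped vertex position) can be sketched as follows. The function names and the use of global position bounds `pos_min`/`pos_max` for the linear mapping are illustrative assumptions:

```python
import numpy as np

def build_animation_map(frames, pos_min, pos_max):
    """Bake per-frame vertex display positions into an animation map.

    `frames` is a sequence of per-frame vertex position arrays, each
    of shape (num_vertices, 3). Row i of the map stores frame i,
    column j stores model vertex j, and each XYZ position is linearly
    mapped into the [0, 1] pixel-value range using the overall bounds.
    """
    data = np.asarray(frames, dtype=np.float64)      # (frames, verts, 3)
    return (data - pos_min) / (pos_max - pos_min)    # linear map to [0, 1]


def read_vertex_position(anim_map, frame, vertex, pos_min, pos_max):
    """Recover a vertex's display position from its map pixel."""
    return anim_map[frame, vertex] * (pos_max - pos_min) + pos_min
```

Because the whole animation is a single small texture, the renderer can reconstruct any frame with one row read instead of storing full per-frame mesh data, which is the storage saving the embodiment describes.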
On the basis of the above embodiments, in order to render a target animation video more matched with a target object in a real scene, a model vertex color map and/or a model vertex normal map can be further included in the animation map. The model vertex color map may be a map representing color information of each model vertex. The model vertex normal map may be a map of normal information used to characterize each model vertex.
In practical application, when a user wants to display the target animation video corresponding to a target object in the target topographic map, the target object displayed in the target topographic map may be triggered to satisfy the playing condition. When it is detected that any target object satisfies the animation playing condition, the animation map corresponding to the target object may be called from a map library, so that the target animation video corresponding to the target object is determined based on the animation map and the animation playing parameters corresponding to the target object. The animation playing parameters may include the frame number of the target animation video.
It should be noted that the frame number of the preset animation video is less than or equal to the frame number of the target animation video. When the frame number of the preset animation video is equal to the frame number of the target animation video, the pixel values of each row in the corresponding animation map can be read directly, so that the target object is rendered based on each pixel value and the target animation video corresponding to the target object is obtained; when the frame number of the preset animation video is less than that of the target animation video, the generation process of the target animation video can be realized by interpolating video frames between two adjacent preset video frames. The embodiments of the disclosure are not described in detail herein. The advantages of such an arrangement are: the diversity of the generation modes of the target animation video is enhanced, so that the target animation video corresponding to the target object can be generated quickly when different conditions are satisfied.
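The frame-interpolation case can be illustrated with a simple sketch that linearly blends the two adjacent preset video frames (rows of the animation map). Linear blending and the normalized-time interface are assumptions; the embodiment leaves the interpolation method unspecified:

```python
def interpolated_frame(anim_map_rows, t):
    """Sample a video frame at normalized time t in [0, 1].

    When t lands exactly on a stored row, that row's pixel values are
    read directly; otherwise the two adjacent preset video frames are
    blended linearly to synthesize the intermediate target frame.
    """
    last = len(anim_map_rows) - 1
    pos = t * last                      # fractional row position
    lo = int(pos)
    hi = min(lo + 1, last)
    frac = pos - lo
    row_lo, row_hi = anim_map_rows[lo], anim_map_rows[hi]
    return [a + (b - a) * frac for a, b in zip(row_lo, row_hi)]
```

Sampling with a denser set of t values than there are stored rows yields a target animation video with more frames than the preset animation video, matching the case described above.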
According to the technical scheme of the embodiment of the disclosure, a target topographic map is generated based on a trigger operation in a target area, then when the trigger operation on a target object is detected to update display content in the target topographic map, a target grid unit corresponding to the trigger operation is determined, and the target object is rendered onto the target grid unit based on an object parameter of the target object and an environment parameter of a belonging environment to update the display content.
Fig. 7 is a schematic flow chart of a data processing method provided in the embodiment of the present disclosure, and based on the foregoing embodiment, the target topographic map may also be rotated to obtain a corresponding target rotation parameter, so as to adjust display information of the target topographic map based on the target rotation parameter. The specific implementation manner can be referred to the technical scheme of the embodiment. The technical terms that are the same as or corresponding to the above embodiments are not repeated herein.
As shown in fig. 7, the method specifically includes the following steps:
and S310, generating a target topographic map based on the trigger operation in the target area.
And S320, when the trigger operation on the target object is detected to be updating the display content in the target topographic map, determining the target grid cell corresponding to the trigger operation.
S330, rendering the target object to the target grid unit based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs so as to update the display content.
And S340, when the rotation operation on the target topographic map is detected, determining a target rotation parameter corresponding to the rotation operation so as to adjust the display information of the target topographic map based on the target rotation parameter.
In this embodiment, a rotation trigger control may be set in advance in any area of the target topographic map, and when a trigger operation of the user on the rotation trigger control is detected, the target topographic map may be rotated. The target rotation parameter may be a parameter for characterizing the rotation of the target topography. Alternatively, the target rotation parameter may be a rotation angle or the like.
In practical applications, when a rotation operation on the target topographic map is detected, a rotation parameter of the target topographic map after the rotation operation may be determined, the rotation parameter may be compared with a rotation parameter of the target topographic map before the rotation operation to obtain a target rotation parameter, and further, display information in the target topographic map may be adjusted based on the target rotation parameter, so that each display information may be matched with the target topographic map after the rotation operation.
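The comparison of the rotation parameters before and after the rotation operation can be sketched as follows, treating the target rotation parameter as a signed angle. Normalizing the difference to (-180, 180] degrees is an assumption for illustration:

```python
def target_rotation_parameter(angle_before, angle_after):
    """Compare the target topographic map's rotation angle before and
    after the rotation operation to obtain the target rotation
    parameter, normalized to the range (-180, 180] degrees so the
    display information is adjusted along the shorter direction."""
    delta = (angle_after - angle_before) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta
```

The resulting signed angle would then be applied to each piece of display information so that it matches the rotated target topographic map.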
According to the technical scheme of the embodiment of the disclosure, the target topographic map is generated based on the trigger operation in the target area, then when the trigger operation on the target object is detected to update the display content in the target topographic map, the target grid unit corresponding to the trigger operation is determined, the target object is rendered onto the target grid unit based on the object parameter of the target object and the environment parameter of the environment to update the display content, further, when the rotation operation on the target topographic map is detected, the target rotation parameter corresponding to the rotation operation is determined, the display information of the target topographic map is adjusted based on the target rotation parameter, the interactivity between a user and the display interface is enhanced, meanwhile, the effect that the display content in the target topographic map is dynamically adjusted along with the rotation of the target topographic map is achieved, and further, the flexibility of the display content in the display interface is improved.
Fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure, and as shown in fig. 8, the apparatus includes: a target topography map generation module 410, a target grid cell determination module 420, and a display content update module 430.
The target topographic map generating module 410 is configured to generate a target topographic map based on the triggering operation in the target area; wherein the target topography map comprises at least one grid cell;
a target grid cell determining module 420, configured to, when a trigger operation on a target object is detected to update display content in the target topographic map, determine a target grid cell corresponding to the trigger operation;
a display content updating module 430, configured to render the target object onto the target grid cell based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs, so as to update the display content.
On the basis of the above technical solutions, the apparatus further includes: and an index value determination module.
And the index value determining module is used for determining the index value corresponding to each grid unit in the target topographic map based on a pre-established cubic coordinate system after the target topographic map is generated so as to determine the target grid unit corresponding to the trigger operation based on the index value.
On the basis of the technical scheme, the target object comprises at least one of trees, flowers and plants, floor tiles, water drops, users, houses, stones, birds and beasts.
On the basis of the above technical solutions, updating the display content includes updating the display height of at least one grid cell in the target topographic map or adding the target object in at least one hexagonal grid of the target topographic map.
On the basis of the above technical solutions, the target grid cell determining module 420 includes: a target spatial information determination unit and a target mesh unit determination unit.
The target space information determining unit is used for determining target space information according to the screen coordinate information corresponding to the trigger operation;
and the target grid unit determining unit is used for determining the target grid unit according to the index value corresponding to the target space information.
On the basis of the above technical solutions, the object parameters include a normal line, a tangent line and a secondary tangent line forming each surface of the target object, and correspondingly, the display content updating module 430 includes: a rendering parameter determination submodule and a target object rendering submodule.
The rendering parameter determining submodule is used for determining rendering parameters of all the surfaces according to the normal line, the tangent line and the secondary tangent line corresponding to all the surfaces of the target object and the environment parameters of the environment to which the target object belongs;
a target object rendering sub-module to render the target object onto the target grid cell based on the rendering parameters.
On the basis of the above technical solutions, the rendering parameter determination sub-module includes: the device comprises an object matrix determining unit, a normal information determining unit and a rendering parameter determining unit.
A target matrix determination unit for determining a target matrix for each surface based on the normal line, the tangent line and the secondary tangent line of the current surface;
a normal information determining unit, configured to determine normal information after adjustment based on the target matrix and the normal map;
a rendering parameter determining unit, configured to determine a rendering parameter of the current surface based on the normal information and the environment parameter;
wherein the environment parameters comprise illumination parameters and the rendering parameters comprise surface roughness and brightness parameters.
On the basis of the above technical solutions, the apparatus further includes: and a target animation video determination module.
And the target animation video determination module is used for determining the target animation video based on a predetermined animation map if the target animation video corresponding to the target object is displayed in the target topographic map.
On the basis of the technical schemes, the animation map is determined based on a preset animation video, and the frame number of the preset animation video is less than or equal to that of the target animation video.
On the basis of the technical schemes, the target animation video determining module comprises: a display position information determining unit, a target line number determining unit, and a display position information converting unit.
A display position information determining unit, configured to determine, for each preset video frame in the preset animation video, display position information of a model vertex of each mesh sub-model on a target model corresponding to the target object in a current preset video frame;
the target line number determining unit is used for determining the target line number of the current preset video frame in the animation map;
and the display position information conversion unit is used for converting the display position information into corresponding pixel values and filling the pixel values into the animation map based on the target line number and the corresponding column number of the model vertex in the animation map.
On the basis of the technical solutions, the animation map further includes a model vertex color map and/or a model vertex normal map.
On the basis of the above technical solutions, the apparatus further includes: and a target rotation parameter determination module.
And the target rotation parameter determining module is used for determining a target rotation parameter corresponding to the rotation operation when the rotation operation on the target topographic map is detected so as to adjust the display information of the target topographic map based on the target rotation parameter.
According to the technical scheme of the embodiment of the disclosure, the target topographic map is generated based on the trigger operation in the target area, further, when the trigger operation on the target object is detected to update the display content in the target topographic map, the target grid unit corresponding to the trigger operation is determined, and finally, the target object is rendered on the target grid unit based on the object parameter of the target object and the environment parameter of the environment to update the display content.
The data processing device provided by the embodiment of the disclosure can execute the data processing method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the embodiments of the present disclosure.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now to fig. 9, a schematic diagram of an electronic device (e.g., the terminal device or the server in fig. 9) 500 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 9 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The electronic device provided by this embodiment of the present disclosure and the data processing method provided by the above embodiments belong to the same inventive concept; for technical details not described in detail in this embodiment, reference may be made to the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
An embodiment of the present disclosure further provides a computer storage medium having a computer program stored thereon which, when executed by a processor, implements the data processing method provided by the above embodiments.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
generating a target topographic map based on the triggering operation in the target area; wherein the target topography map comprises at least one grid cell;
when a triggering operation on a target object to update the display content in the target topographic map is detected, determining a target grid cell corresponding to the triggering operation;
rendering the target object onto the target grid cell based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs, so as to update the display content.
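These three steps can be outlined in a short sketch; the class and method names below are illustrative assumptions, and the cell lookup and rendering internals are stubbed out, since the later examples elaborate on them:

```python
# A minimal sketch of the three steps: generate the map, resolve the target
# grid cell for a trigger, and render the target object into that cell.

class TerrainMap:
    def __init__(self, cells):
        self.cells = cells              # at least one grid cell
        self.contents = {}              # cell -> rendered object

    def cell_for(self, trigger):
        # Stand-in for the index-based lookup (elaborated in example two).
        return trigger["cell"]

    def render(self, obj, cell, env):
        # Stand-in for parameter-based rendering (examples six and seven).
        self.contents[cell] = (obj, env)

def handle_trigger(terrain, trigger, environment):
    """Steps two and three: resolve the target cell, then render into it."""
    cell = terrain.cell_for(trigger)
    terrain.render(trigger["object"], cell, environment)
    return cell

terrain = TerrainMap(cells=[(0, 0), (0, 1)])    # step one: generated map
cell = handle_trigger(terrain, {"object": "tree", "cell": (0, 1)},
                      {"light": "noon"})
print(terrain.contents[cell])   # ('tree', {'light': 'noon'})
```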
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
generating a target topographic map based on the triggering operation in the target area; wherein the target topography map comprises at least one grid cell;
when a triggering operation on a target object to update the display content in the target topographic map is detected, determining a target grid cell corresponding to the triggering operation;
rendering the target object onto the target grid cell based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs, so as to update the display content.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or any combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a data processing method, the method comprising:
generating a target topographic map based on the triggering operation in the target area; wherein the target topography map comprises at least one grid cell;
when a triggering operation on a target object to update the display content in the target topographic map is detected, determining a target grid cell corresponding to the triggering operation;
rendering the target object onto the target grid cell based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs, so as to update the display content.
According to one or more embodiments of the present disclosure, [ example two ] there is provided a data processing method, which, after generating the target topographic map, further comprises:
optionally, determining an index value corresponding to each grid cell in the target topographic map based on a pre-established cube coordinate system, so as to determine the target grid cell corresponding to the triggering operation based on the index value.
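In a cube coordinate system, each hexagonal cell carries three coordinates (q, r, s) constrained by q + r + s = 0, which makes neighbor and rounding arithmetic uniform. A minimal sketch of assigning index values to the cells of a hexagonal map and rounding a fractional hit position to the nearest cell follows; the dictionary-based index is an illustrative choice, not mandated by the patent:

```python
# Cube coordinates for a hexagonal grid: each cell has (q, r, s) with q + r + s == 0.
# Index values are stored in a dict keyed by the cube coordinate (illustrative choice).

def make_cube_index(radius):
    """Assign an integer index to every cell within `radius` of the origin."""
    index = {}
    i = 0
    for q in range(-radius, radius + 1):
        for r in range(max(-radius, -q - radius), min(radius, -q + radius) + 1):
            s = -q - r                      # cube constraint: q + r + s == 0
            index[(q, r, s)] = i
            i += 1
    return index

def cube_round(fq, fr, fs):
    """Round fractional cube coordinates to the nearest valid cell."""
    q, r, s = round(fq), round(fr), round(fs)
    dq, dr, ds = abs(q - fq), abs(r - fr), abs(s - fs)
    # Reset the component with the largest rounding error so q + r + s == 0.
    if dq > dr and dq > ds:
        q = -r - s
    elif dr > ds:
        r = -q - s
    else:
        s = -q - r
    return q, r, s

index = make_cube_index(2)           # 19 cells in a radius-2 hexagon
target = cube_round(0.9, -1.1, 0.2)  # fractional hit point -> (1, -1, 0)
print(index[target])
```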
According to one or more embodiments of the present disclosure, [ example three ] there is provided a data processing method, further comprising:
optionally, the target object includes at least one of trees, flowers, floor tiles, water drops, users, houses, rocks, birds, and beasts.
According to one or more embodiments of the present disclosure, [ example four ] there is provided a data processing method, further comprising:
optionally, updating the display content includes updating a display height of at least one grid cell in the target topographic map or adding the target object to at least one hexagonal grid cell of the target topographic map.
According to one or more embodiments of the present disclosure, [ example five ] there is provided a data processing method, further comprising:
optionally, determining target space information according to screen coordinate information corresponding to the trigger operation;
and determining the target grid unit according to the index value corresponding to the target space information.
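Assuming the screen coordinates of the triggering operation have already been projected into the plane of the topographic map (the patent does not specify the projection), the resulting point can be converted to fractional cube coordinates and rounded to the containing cell; the pointy-top hexagon layout and cell size below are illustrative assumptions:

```python
import math

HEX_SIZE = 1.0  # illustrative cell size; not specified by the patent

def point_to_cell(x, y, size=HEX_SIZE):
    """Map a point in the terrain plane to the cube coordinates of the
    hexagonal cell containing it (pointy-top layout assumed)."""
    fq = (math.sqrt(3) / 3 * x - y / 3) / size
    fr = (2 / 3 * y) / size
    fs = -fq - fr
    # Round the fractional cube coordinates to the nearest cell,
    # fixing up the component with the largest rounding error.
    q, r, s = round(fq), round(fr), round(fs)
    dq, dr, ds = abs(q - fq), abs(r - fr), abs(s - fs)
    if dq > dr and dq > ds:
        q = -r - s
    elif dr > ds:
        r = -q - s
    else:
        s = -q - r
    return q, r, s

print(point_to_cell(0.0, 0.0))   # origin cell
```

The returned cube coordinate can then be used as the key into the index established for the map, yielding the target grid cell.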
According to one or more embodiments of the present disclosure, [ example six ] there is provided a data processing method, the object parameters including a normal, a tangent, and a bitangent constituting each face of the target object, the method further including:
optionally, determining rendering parameters of each face according to the normal, tangent, and bitangent corresponding to each face of the target object and the environment parameters of the environment to which the target object belongs;
rendering the target object onto the target grid cell based on the rendering parameters.
According to one or more embodiments of the present disclosure, [ example seven ] there is provided a data processing method, the method further comprising:
optionally, for each face, determining a target matrix based on the normal, tangent, and bitangent of the current face;
determining normal information after adjustment based on the target matrix and the normal map;
determining a rendering parameter of the current surface based on the normal information and the environment parameter;
wherein the environment parameters comprise illumination parameters and the rendering parameters comprise surface roughness and brightness parameters.
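One plausible reading of these steps: the normal, tangent, and bitangent of a face form a TBN basis (the target matrix); a tangent-space sample from the normal map is transformed through that basis into world space, and a brightness term then follows from the illumination direction. The vectors and the Lambert diffuse term below are illustrative, not taken from the patent:

```python
# Per-face TBN matrix and a Lambert brightness term, as a plain-Python sketch.
# The tangent-space sample and light direction are illustrative values.

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def tbn_transform(tangent, bitangent, normal, ts_normal):
    """Transform a tangent-space normal-map sample into world space
    using the face's (tangent, bitangent, normal) basis."""
    return normalize(tuple(
        ts_normal[0] * tangent[i] + ts_normal[1] * bitangent[i] + ts_normal[2] * normal[i]
        for i in range(3)
    ))

def lambert_brightness(world_normal, light_dir):
    """Diffuse brightness: clamped dot product of normal and light direction."""
    d = sum(a * b for a, b in zip(world_normal, normalize(light_dir)))
    return max(0.0, d)

# A face lying in the XZ plane: tangent +X, bitangent +Z, normal +Y.
t, b, n = (1, 0, 0), (0, 0, 1), (0, 1, 0)
flat_sample = (0.0, 0.0, 1.0)     # "flat" normal-map texel (after 0..1 -> -1..1 decode)
world_n = tbn_transform(t, b, n, flat_sample)   # -> (0.0, 1.0, 0.0)
print(lambert_brightness(world_n, (0, 1, 0)))   # light from straight above -> 1.0
```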
According to one or more embodiments of the present disclosure [ example eight ] there is provided a data processing method, further comprising:
optionally, if a target animation video corresponding to the target object is displayed in the target topographic map, the target animation video is determined based on a predetermined animation map.
According to one or more embodiments of the present disclosure, [ example nine ] there is provided a data processing method, further comprising:
optionally, the animation map is determined based on a preset animation video, and the number of frames of the preset animation video is less than or equal to the number of frames of the target animation video.
According to one or more embodiments of the present disclosure, [ example ten ] there is provided a data processing method, the method further comprising:
optionally, for each preset video frame in the preset animation video, determining the display position information, in the current preset video frame, of the model vertices of each mesh sub-model on the target model corresponding to the target object;
determining the target row number of the current preset video frame in the animation map;
and converting the display position information into corresponding pixel values to be filled into the animation map based on the target row number and the column number corresponding to each model vertex in the animation map.
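This row-per-frame, column-per-vertex layout is the familiar vertex-animation-texture bake. A sketch under the assumption that display positions are converted to pixel values by normalizing them into a 0..1 range with a bounding box (the encoding is an illustrative choice; the patent only says positions are converted to pixel values):

```python
# Baking vertex positions into an "animation map": one row per preset video
# frame, one column per model vertex. The 0..1 encoding range is illustrative.

def bake_animation_map(frames, bounds_min, bounds_max):
    """frames: per-frame lists of vertex positions [(x, y, z), ...].
    Row index = frame number (target row), column index = vertex number."""
    span = [hi - lo or 1.0 for lo, hi in zip(bounds_min, bounds_max)]
    texture = []
    for vertices in frames:                      # one texture row per frame
        row = [tuple((p - lo) / s for p, lo, s in zip(pos, bounds_min, span))
               for pos in vertices]              # one RGB pixel per vertex
        texture.append(row)
    return texture

frames = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],   # frame 0: two vertices
    [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)],   # frame 1: both moved up
]
tex = bake_animation_map(frames, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(tex[1][0])   # frame 1, vertex 0 -> (0.0, 1.0, 0.0)
```

At playback, a vertex shader could sample row t, column v to reconstruct the vertex position for frame t, which is why the preset video may have fewer frames than the target animation video: intermediate frames can be interpolated between rows.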
According to one or more embodiments of the present disclosure, [ example eleven ] there is provided a data processing method, further comprising:
optionally, the animation map further includes a model vertex color map and/or a model vertex normal map.
According to one or more embodiments of the present disclosure, [ example twelve ] there is provided a data processing method, the method further comprising:
optionally, when a rotation operation on the target topographic map is detected, a target rotation parameter corresponding to the rotation operation is determined, so as to adjust the display information of the target topographic map based on the target rotation parameter.
According to one or more embodiments of the present disclosure, [ example thirteen ] there is provided a data processing apparatus, the apparatus comprising:
the target topographic map generating module is used for generating a target topographic map based on the triggering operation in the target area; wherein the target topography map comprises at least one grid cell;
the target grid cell determining module is used for determining a target grid cell corresponding to a triggering operation when the triggering operation on a target object to update the display content in the target topographic map is detected;
and the display content updating module is used for rendering the target object onto the target grid cell based on the object parameters of the target object and the environment parameters of the environment to which the target object belongs, so as to update the display content.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (15)

1. A method of data processing, comprising:
generating a target topographic map based on the triggering operation in the target area; wherein the target topography map comprises at least one grid cell;
when a triggering operation on a target object to update the display content in the target topographic map is detected, determining a target grid cell corresponding to the triggering operation;
rendering the target object onto the target grid cell based on the object parameter of the target object and the environment parameter of the environment to which the target object belongs, so as to update the display content.
2. The method of claim 1, further comprising, after said generating the target topography map:
and determining an index value corresponding to each grid cell in the target topographic map based on a pre-established cube coordinate system, so as to determine the target grid cell corresponding to the triggering operation based on the index value.
3. The method of claim 1, wherein the target object comprises at least one of a tree, a flower, a floor tile, a water droplet, a user, a house, a stone, a bird, and a beast.
4. The method of claim 1, wherein updating the display content comprises updating a display height of at least one grid cell in the target topographic map or adding the target object to at least one hexagonal grid cell of the target topographic map.
5. The method of claim 1, wherein determining the target grid cell corresponding to the triggering operation comprises:
determining target space information according to the screen coordinate information corresponding to the trigger operation;
and determining the target grid unit according to the index value corresponding to the target space information.
6. The method of claim 1, wherein the object parameters comprise a normal, a tangent, and a bitangent constituting each face of the target object, and wherein rendering the target object onto the target grid cell based on the object parameters of the target object and the environment parameters of the environment to which the target object belongs comprises:
determining rendering parameters of each face according to the normal, tangent, and bitangent corresponding to each face of the target object and the environment parameters of the environment to which the target object belongs;
rendering the target object onto the target grid cell based on the rendering parameters.
7. The method of claim 6, wherein determining the rendering parameters of each face according to the normal, tangent, and bitangent corresponding to each face of the target object and the environment parameters of the environment to which the target object belongs comprises:
for each face, determining a target matrix based on the normal, tangent, and bitangent of the current face;
determining normal information after adjustment based on the target matrix and the normal map;
determining a rendering parameter of the current surface based on the normal information and the environment parameter;
wherein the environment parameters comprise illumination parameters and the rendering parameters comprise surface roughness and brightness parameters.
8. The method of claim 1, further comprising:
and if a target animation video corresponding to the target object is displayed in the target topographic map, determining the target animation video based on a predetermined animation map.
9. The method of claim 8, wherein the animation map is determined based on a preset animation video having a number of frames less than or equal to a number of frames of the target animation video.
10. The method of claim 8, wherein determining the animation map based on a preset animation video comprises:
for each preset video frame in the preset animation video, determining the display position information, in the current preset video frame, of the model vertices of each mesh sub-model on the target model corresponding to the target object;
determining the target row number of the current preset video frame in the animation map;
and converting the display position information into corresponding pixel values to be filled into the animation map based on the target row number and the column number corresponding to each model vertex in the animation map.
11. The method of claim 9, wherein the animation map further comprises a model vertex color map and/or a model vertex normal map.
12. The method of claim 1, further comprising:
when the rotation operation on the target topographic map is detected, determining a target rotation parameter corresponding to the rotation operation so as to adjust the display information of the target topographic map based on the target rotation parameter.
13. A data processing apparatus, characterized by comprising:
the target topographic map generating module is used for generating a target topographic map based on the triggering operation in the target area; wherein the target topography map comprises at least one grid cell;
the target grid cell determining module is used for determining a target grid cell corresponding to a triggering operation when the triggering operation on a target object to update the display content in the target topographic map is detected;
and the display content updating module is used for rendering the target object onto the target grid cell based on the object parameters of the target object and the environment parameters of the environment to which the target object belongs, so as to update the display content.
14. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data processing method as claimed in any one of claims 1-12.
15. A storage medium containing computer-executable instructions for performing the data processing method of any one of claims 1-12 when executed by a computer processor.
CN202211193764.8A 2022-09-28 2022-09-28 Data processing method and device, electronic equipment and storage medium Pending CN115512022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211193764.8A CN115512022A (en) 2022-09-28 2022-09-28 Data processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115512022A true CN115512022A (en) 2022-12-23

Family

ID=84508686



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination