CN117315114A - Scene data processing method, device, electronic equipment and storage medium - Google Patents

Scene data processing method, device, electronic equipment and storage medium

Info

Publication number
CN117315114A
CN117315114A (application CN202311248750.6A)
Authority
CN
China
Prior art keywords
data
scene
object instance
block
target
Prior art date
Legal status
Pending
Application number
CN202311248750.6A
Other languages
Chinese (zh)
Inventor
周小星
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311248750.6A
Publication of CN117315114A
Status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/02 - Non-photorealistic rendering
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping

Abstract

The application provides a scene data processing method, apparatus, electronic device and storage medium for processing data in a virtual scene. The method includes: acquiring an object data structure, where the object data structure indicates an object type and object data corresponding to each of at least one object, the object type includes at least one of a point object, a region object and a voxel object, and the object data indicates the distribution range of the object; acquiring a scene data structure corresponding to a target scene, where the target scene is divided into N primary blocks, each primary block is divided into M secondary blocks, and the scene data structure indicates at least one of object instance data in the target scene, primary block data of a target primary block in the target scene, and secondary block data of secondary blocks in the target primary block; and managing the target scene according to the scene data structure and the object data structure.

Description

Scene data processing method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing, and more particularly, to a scene data processing method, apparatus, electronic device, and storage medium.
Background
In the related art, a physical scene is generally represented by a vertex-array data structure. The vertex data mainly describes visual rendering information of the physical model, such as texture, illumination and shadow, and scene management based on such a data structure is inefficient.
Disclosure of Invention
The application provides a scene data processing method, a device, electronic equipment and a storage medium, which can realize efficient management of scenes.
In a first aspect, a scene data processing method is provided, for processing data in a virtual scene, the method including:
acquiring an object data structure, wherein the object data structure is used for indicating at least one object type and object data respectively corresponding to at least one object, the object type comprises at least one of a point object, a region object and a voxel object, and the object data is used for indicating the distribution range of the object; acquiring a scene data structure corresponding to a target scene, wherein the target scene is divided into N primary blocks, each primary block is divided into M secondary blocks, and the scene data structure is used for indicating at least one of object instance data in the target scene, primary block data in the target primary block in the target scene and secondary block data in the secondary block in the target primary block; and managing the target scene according to the scene data structure and the object data structure.
In a second aspect, there is provided a scene data processing method for processing data in a virtual scene, the method comprising:
acquiring an object instance data structure corresponding to a physical scene, wherein the object instance data structure comprises an object instance identifier, an object identifier of an object to which the object belongs and object instance data, which are respectively corresponding to at least one object instance in the physical scene, and the object instance data are used for describing position information and distribution range of the object instance;
processing the object instance data structure to obtain an object data structure, wherein the object data structure comprises at least one object type and object data corresponding to at least one object respectively, the object type comprises at least one of a point object, a region object and a voxel object, and the object data is used for indicating the distribution range of the object;
adding the at least one object instance to a target scene according to the object instance data structure;
performing secondary block division on the target scene, and determining a scene data structure corresponding to the target scene according to the distribution of object instances in the target scene after secondary block division, wherein the scene data structure is used for indicating at least one of object instance data in the target scene, primary block data in a target primary block in the target scene and secondary block data in a secondary block of the target primary block;
And carrying out coding processing on the scene data structure to obtain a scene file, and carrying out coding processing on the object data structure to obtain an object file.
In a third aspect, there is provided a scene data processing apparatus for processing data in a virtual scene, the apparatus comprising:
a first obtaining unit, configured to obtain an object data structure, where the object data structure is configured to indicate an object type and object data corresponding to at least one object, where the object type includes at least one of a point object, a region object, and a voxel object, and the object data is configured to indicate a distribution range of the object;
a second obtaining unit, configured to obtain a scene data structure corresponding to a target scene, where the target scene is divided into N primary partitions, each primary partition is divided into M secondary partitions, and the scene data structure is configured to indicate at least one of object instance data in the target scene, object instance data in a target primary partition in the target scene, and object instance data in a secondary partition in the target primary partition, where the target primary partition includes some or all of the N primary partitions;
And the scene management unit is used for managing the target scene according to the scene data structure and the object data structure.
In a fourth aspect, there is provided a scene data processing apparatus for processing data in a virtual scene, the apparatus comprising:
an acquisition unit, configured to acquire an object instance data structure corresponding to a physical scene, where the object instance data structure includes an object instance identifier, an object identifier of an object to which the object instance belongs, and object instance data, corresponding to each of at least one object instance in the physical scene, and the object instance data is used for describing the position information and distribution range of the object instance;
the data processing unit is used for processing the object instance data structure to obtain an object data structure, wherein the object data structure comprises at least one object type and object data corresponding to at least one object respectively, the object type comprises at least one of a point object, a region object and a voxel object, and the object data is used for indicating the distribution range of the object;
an adding unit, configured to add the at least one object instance to a target scene according to the object instance data structure;
A scene management unit, configured to divide the target scene, and determine a scene data structure corresponding to the target scene according to a distribution of object instances in the divided target scene, where the scene data structure is configured to indicate at least one of object instance data in the target scene, primary partition data in a target primary partition in the target scene, and secondary partition data in a secondary partition of the target primary partition;
and the encoding unit is used for encoding the scene data structure and the object data structure.
In a fifth aspect, there is provided a scene data processing device comprising a communication bus, a processor, a communication interface and a memory, the processor, the communication interface and the memory being interconnected by the communication bus, wherein the memory is for storing program code, the processor being configured to invoke the program code to perform a method as in the first aspect described above.
In a sixth aspect, there is provided a scene data processing device comprising a communication bus, a processor, a communication interface and a memory, the processor, the communication interface and the memory being interconnected by the communication bus, wherein the memory is for storing program code, the processor being configured to invoke the program code to perform a method as in the second aspect described above.
In a seventh aspect, there is provided a computer storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first or second aspect as described above.
According to the scene data processing method, apparatus, electronic device and storage medium provided by the application, three object types and corresponding object data structures are defined to represent the distribution ranges of different objects in a scene; the scene is further divided into two levels of blocks, and a scene data structure is defined to describe the distribution of object instances in the two-level-blocked scene; the scene is then managed according to the scene data structure and the object data structure, which improves the efficiency of scene management.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a system architecture suitable for use in embodiments of the present application.
Fig. 2 is a schematic diagram of an application scenario suitable for use in embodiments of the present application.
Fig. 3 is a schematic diagram of object types provided by an embodiment of the present application.
Fig. 4 is a schematic diagram of a conversion relationship between object types according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a two-stage segmentation of a scene provided in an embodiment of the present application.
Fig. 6 is a schematic flowchart of a scene data processing method provided in an embodiment of the present application.
Fig. 7 is a schematic diagram of a scene data structure according to an embodiment of the present application.
Fig. 8 is a schematic diagram of association between a scene file and an object file according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an object file according to an embodiment of the present application.
Fig. 10 is a schematic storage structure of object data in an object file according to an embodiment of the present application.
Fig. 11 is a schematic diagram of a storage structure of area object data according to an embodiment of the present application.
Fig. 12 is a schematic diagram of a storage structure of another area object data according to an embodiment of the present application.
Fig. 13 is a schematic diagram of a storage structure of voxel object data according to an embodiment of the present application.
Fig. 14 is a schematic structural diagram of a scene file according to an embodiment of the present application.
Fig. 15 is a schematic hierarchical diagram of a target scene according to an embodiment of the present application.
Fig. 16 is a schematic block diagram of a target scene according to an embodiment of the present application.
Fig. 17 is a schematic diagram of a storage structure of one-level block data according to an embodiment of the present application.
Fig. 18 is a schematic diagram of a storage structure of two-level block data according to an embodiment of the present application.
Fig. 19 is a schematic flowchart of another scenario data processing method provided in an embodiment of the present application.
Fig. 20 is a schematic flowchart of determining the offset of the block data of a primary block within the block data information according to an embodiment of the present application.
Fig. 21 is a schematic diagram of a scene data processing apparatus provided in an embodiment of the present application.
Fig. 22 is a schematic diagram of another scene data processing apparatus provided in an embodiment of the present application.
Fig. 23 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden for the embodiments herein, are intended to be within the scope of the present application.
To facilitate an understanding of the embodiments of the present application, the symbols used in the present application are described.
1. Variable definition ':=': defines a variable of a given type.
2. Data assignment '=': modifies the data of a variable instance.
3. Braces { }:
3.1 Structure definition: the variables within { } are members of different types. For example, A := { X, Y, Z } means A is a structure including three members X, Y and Z of different types.
3.2 Structure member access: a specific member may be accessed through { }; for example, A{X} refers to accessing the data corresponding to X in the A structure.
3.3 Data set: may be used to represent a set of data of the same structure type; for example, {A(k)} represents the set { A(0), A(1), …, A(k), … }.
4. Parentheses ( ):
4.1 Array definition: defines an array of multiple data of the same type. For example, Position = (x, y, z) assigns the three values x, y and z to Position.
4.2 Structure array access: may be used to reference an element of a structure array; for example, given the structure type A := { x, y, z }, A(0) refers to the 0th element of structure array A, namely member x.
5. Array access [ ]: an element is accessed through its index in the array. For example, if array B = (1, 2, 3), then B[0] is the 0th element of array B.
In computer graphics, a physical model is typically represented by a series of 3D coordinates (commonly referred to as vertices). These 3D coordinates may define the shape and structure of the object. The following are some common terms involved in the data structure used to describe an object:
(1) Vertices (Vertex): these are the basis of the 3D model and can be imagined as points in space.
(2) Edge (Edges): an edge is a connecting line between two vertices. A large number of edges may form complex shapes.
(3) Face (Faces): a face is a closed polygon formed by three or more boundary vertices. In computer graphics, triangles are typically used for representation.
(4) Vertex Data (Vertex Data) includes information such as the spatial position (X, Y, Z coordinates) of the Vertex, color, normal direction (for illumination calculation), texture coordinates (for mapping), and the like.
(5) Model matrix: describes the local spatial position and orientation of an object. The model matrix is used to transform the object from model space to world space, which is the reference space for the final rendering.
(6) Mesh (Mesh): the mesh is a package containing 3D model vertex information and index information. Typically, vertex arrays, index arrays, texture information, and the like are included.
(7) Material (Material): the material defines the surface appearance of an object, including color, texture, gloss, etc.
(8) Texture map (Texture): texture is a technique used to add visual detail, typically in the form of 2D images.
(9) Bone, joint: the object model for animation may also contain bones and joints that determine how the model moves.
In the related art, a physical scene is generally represented by a vertex-array data structure. The vertex data mainly describes visual rendering information of the physical model, such as texture, illumination and shadow, and scene management based on such a data structure is inefficient.
In view of this, the present application provides a data structure for describing location information and distribution ranges of objects, enabling efficient management of a scene, while enabling fast determination of whether there are object instances at specific locations in the scene, and high-speed lookup of object instances included within a certain range.
Fig. 1 is a schematic diagram of a system architecture to which the scene data processing method of the embodiments of the present application is applicable. As shown in fig. 1, the method may be applied to a system architecture consisting of a terminal 110 and a server 120, where the terminal 110 and the server 120 are connected through a network.
in one possible implementation, the terminal 110 may be any electronic product that can interact with a user through one or more of a keyboard, a touchpad, a touch screen, a remote control, a voice interaction, or a handwriting device, such as a personal computer (Personal Computer, PC), a mobile phone, a smart phone, a personal digital assistant (Personal Digital Assistant, PDA), a wearable device, a palm top computer PPC (Pocket PC), a tablet computer, a smart car set, a smart television, a smart sound box, etc.
In one possible implementation, the server 120 may be a stand-alone physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc.
In some embodiments, the terminal 110 and the server 120 may be directly or indirectly connected through a wired or wireless manner, which is not limited herein.
The scene data processing method in the embodiment of the present application may be executed by the terminal 110, or may also be executed by the server 120, or may also be executed by both the server and the terminal. When executed by terminal 110, may be executed by a client installed on terminal 110.
It should be understood that the above-described terminal 110 and server 120 are only examples; other existing terminals or servers, or terminals or servers that may appear in the future, are also included within the scope of protection of the present application if they are applicable to the present application.
Fig. 2 is a schematic diagram of an application scenario of a scenario data processing apparatus (shown in a dashed box in the figure) provided in an embodiment of the present application, where the scenario data processing apparatus may include a scenario object editing module, an object data processing module, a scenario management module, and an encoding module.
In some embodiments, the scene object editing module is configured to output object instance data of at least one object instance in the physical scene. For example, a target object instance in the physical scene is selected and point cloud data describing the position information and distribution range of the target object instance is output; or three-dimensional or two-dimensional gridding is performed on the vertex data of the object instance model to obtain grid data; or, if there is no suitable object instance or range, a region range is edited with a volume or region drawing tool to obtain region data within the region range. That is, the scene object editing module may output three types of file formats: point cloud data, region data, and grid data.
In some embodiments, the object data processing module may process the object instance data output by the scene object editing module to obtain object data, where the object data is used to describe a distribution range of the object.
For example, the object data processing module initializes the three object types defined in the embodiments of the present application, namely the point object, the region object and the voxel object, from the above three types of data respectively, to obtain object data of the three object types. The point cloud data may be used to initialize point object data, the region data to initialize region object data, and the grid data to initialize voxel object data. In some embodiments, grid data may be used to describe the distribution range of a voxel object. Optionally, the grid data may be extended to carry information such as the weight and distance of the voxel object, i.e. the grid data may also be used to describe the weight, distance, etc. of the voxel object.
In other embodiments, the object data processing module may also uniformly initialize the above types of data into voxel object data. For example, it may perform gridding on the point cloud data or the region data to convert them into grid data, and then initialize voxel object data based on the grid data. Three-dimensional region data may be processed by three-dimensional gridding, and two-dimensional region data by two-dimensional gridding.
In some embodiments, the scene management module is configured to add the at least one object instance to the target scene according to the location information and the distribution range of the at least one object instance. Or, the object data (such as a distribution range) of at least one object output by the object data processing module is subjected to transformation processing (such as translation, scaling and rotation), and the transformed object instance is added into the target scene.
In some embodiments, the scene management module may further perform two-level block management on the target scene to obtain a scene data structure of the target scene. The scene data structure may be used to describe two levels of blocking information of the target scene, or, in other words, the distribution of object instances in the target scene, the first level of blocking, and the second level of blocking, e.g., the scene data structure is used to indicate object instances included in the target scene, object instances included in the first level of blocking, object instances included in the second level of blocking, and so on.
In some embodiments, the encoding module may be configured to encode the scene data structure of the target scene to obtain a scene file, and encode the object data output by the object data processing module to obtain the object file.
Before describing the scene data processing method of the embodiment of the present application, three types of objects, object instances, and scenes defined in the present application will be described first.
In the present embodiment, the following three different object types are defined:
1. Point object: an object representing a single point.
Point object definition: Point := { ObjectID }, where ObjectID is the unique identification of the object.
Each object instance of a point object has spatial position coordinates, and its spatial extent is a single point.
2. Region object: represents a finite continuous region enclosed by a plurality of convex polygons (for three-dimensional region objects) or a plurality of line segments (for two-dimensional region objects).
Three-dimensional region object definition: Region := { ObjectID, Faces := { Face(k) := { Position(n) := (x, y, z) } } }, where ObjectID is the unique identification of the object, Faces is the set of all faces, Face(k) represents the k-th convex polygon, each convex polygon is described by a closed line formed by several points, and Position(n) represents the coordinates of the n-th point of the convex polygon. For two-dimensional space, the face is a planar convex polygon and Faces contains only one face element.
Two-dimensional region object definition: Region := { ObjectID, Lines := { Line(k) := { Position(n) := (x, y, z) } } }, where ObjectID is the unique identification of the object, Lines is the set of line segments, Line(k) represents the k-th line segment, and each Line(k) may be represented by the two endpoints of the segment.
A region object may generate multiple object instances through translation, scaling and rotation.
3. Voxel object: an object formed by gridding a cuboid region containing the object into voxel blocks.
Voxel object definition: Volume := { ObjectID, Size := (x, y, z), Count := (x, y, z), Voxels := { Voxel(k) } }, where ObjectID is the unique identification of the object, Size is the size of the object bounding box, Count is the number of voxel blocks included in the voxel object, that is, the number of voxel blocks into which the bounding box of the voxel object is divided, and Voxels is the set of voxel blocks included in the voxel object, i.e. the set of voxel blocks within the bounding box of the voxel object. In two-dimensional space, the voxel object is represented as a planar grid.
A voxel object may generate multiple object instances through translation, scaling and rotation.
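For illustration only, the three object types above might be mirrored by the following Python sketch; the field names follow the definitions above, and anything not defined there (such as the concrete container types) is an assumption rather than part of the application.

from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class PointObject:            # Point := { ObjectID }
    object_id: int

@dataclass
class RegionObject:           # Region := { ObjectID, Faces } (3D) or { ObjectID, Lines } (2D)
    object_id: int
    faces: List[List[Vec3]] = field(default_factory=list)          # 3D: each face is a list of vertex positions
    lines: List[Tuple[Vec3, Vec3]] = field(default_factory=list)   # 2D: each line is two endpoints

@dataclass
class VoxelObject:            # Volume := { ObjectID, Size, Count, Voxels }
    object_id: int
    size: Vec3 = (0.0, 0.0, 0.0)              # bounding-box size of the object
    count: Tuple[int, int, int] = (0, 0, 0)   # number of voxel blocks along x, y, z
    voxels: List[float] = field(default_factory=list)  # per-voxel-block data, e.g. 0/1 occupancy or weights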
Fig. 3 is a schematic diagram of three types of objects, it being understood that fig. 3 is merely an example of a three-dimensional region object and a three-dimensional voxel object, but the present application is not limited thereto.
It should be understood that the application is not limited to a specific application scenario of the scenario data processing method, and may be applied to a game scenario, for example, in which case the object may be a sound source object in the game scenario, for example, a point sound source object is a sound source object distributed in a discrete point form, such as an insect, a bird or the like. Regional sound source objects refer to sound source objects of limited regional distribution, such as rivers, farms, etc. Voxel sound source objects may refer to continuously distributed environmental background sound source objects, such as terrain, vegetation, etc.
In some embodiments of the present application, the three types of objects may have the conversion relationships shown in fig. 4, that is, both the point object and the region object may be converted into voxel objects through gridding. A point object may be converted into a voxel object with a single voxel block. For a region object, the bounding box of the region may first be calculated, and the bounding box is then gridded into a plurality of voxel blocks; the voxel blocks within the region range are the voxel blocks of the region object.
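A minimal sketch of this gridding step is shown below; it assumes an axis-aligned bounding box and that a caller-supplied test reports whether a point lies inside the region, and the helper names are hypothetical.

# Hypothetical sketch: rasterize an axis-aligned bounding box into voxel blocks.
# `contains(center)` is assumed to report whether a point lies inside the region object.
def grid_region(bbox_min, bbox_max, voxel_size, contains):
    counts = [max(1, int((bbox_max[d] - bbox_min[d]) / voxel_size)) for d in range(3)]
    occupancy = []
    for k in range(counts[2]):
        for j in range(counts[1]):
            for i in range(counts[0]):
                center = tuple(bbox_min[d] + (idx + 0.5) * voxel_size
                               for d, idx in enumerate((i, j, k)))
                occupancy.append(1 if contains(center) else 0)  # 1: voxel block belongs to the object
    return counts, occupancy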
In some embodiments, an object may create different object instances, and the data information of the object is shared between the object instances. Different object instances of an object have different transformation data with respect to the object, the transformation data describing the position offset, the scaling and the rotation of the object instance relative to the object to which it belongs. For example, an object instance may be defined as: ObjectInst := { ObjectInstID, ObjectID, Location, Scale, Rotation }, where ObjectInstID is the object instance identifier, ObjectID identifies the object to which the object instance belongs, Location is the spatial position of the object instance (or the position offset of the object instance relative to the object), Scale is the scaling factor of the object instance relative to the object, and Rotation is the rotation vector of the object instance relative to the object.
In embodiments of the present application, the ObjectID may implicitly indicate the object type. That is, the object to which the object instance belongs and the object type of the object to which the object instance belongs can be known from the ObjectID.
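As a rough sketch of how an instance record can be combined with the shared object data, the following keeps the field names of the definition above; rotation is stored but deliberately not applied here, and all other names are illustrative assumptions.

from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ObjectInst:             # ObjectInst := { ObjectInstID, ObjectID, Location, Scale, Rotation }
    inst_id: int
    object_id: int            # also implies the object type in this scheme
    location: Vec3            # position offset of the instance relative to the object
    scale: Vec3               # scaling factor relative to the object
    rotation: Vec3            # rotation vector relative to the object

def instance_point(inst: ObjectInst, local_point: Vec3) -> Vec3:
    # Scale a point defined in the object's local space, then translate it by the
    # instance location; applying the rotation is omitted from this sketch.
    return tuple(local_point[d] * inst.scale[d] + inst.location[d] for d in range(3))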
In the embodiment of the application, a scene including object instances has a start position coordinate and a size, for example a start position coordinate SceneStart = (x, y, z) and a scene size SceneSize = (x, y, z), where for SceneStart, x, y and z respectively denote the start position coordinates on the x-axis, y-axis and z-axis, and for SceneSize, x, y and z respectively denote the sizes on the x-axis, y-axis and z-axis.
In this embodiment of the present application, a two-level blocking manner may be used to divide and manage a scene. For example, the scene may be first divided into N primary blocks according to a preset primary block size BlockSize = (BlockSizeX, BlockSizeY, BlockSizeZ), where BlockSize represents the primary block size, and BlockSizeX, BlockSizeY and BlockSizeZ represent the primary block size on the x-axis, y-axis and z-axis respectively.
Further, secondary blocking may be performed independently on each primary block, for example by dividing each dimension into 2 to the power Divide equal parts, where Divide is the division level of the secondary blocking and represents the number of power-of-two divisions in each direction within the primary block. Each secondary block index is defined as SubIndex := (SubIndexX, SubIndexY, SubIndexZ), where SubIndexX, SubIndexY and SubIndexZ represent the index of the secondary block on the x-axis, y-axis and z-axis respectively. Optionally, the division level Divide of the secondary blocking may be determined according to the computation amount and the search accuracy; the search accuracy requirements of different object types differ and may be controlled by a threshold on the number of object instances included in one secondary block. For example, if the number of object instances included in a primary block is greater than the threshold, the primary block may be divided into secondary blocks so that the number of object instances included in each secondary block is less than the threshold; if the number of object instances included in a primary block is less than the threshold, secondary blocking may be skipped for that primary block; and if the number of object instances included in a secondary block is still greater than the threshold after division, that secondary block may be further divided so that the number of object instances included in each resulting block is less than the threshold. Fig. 5 is a schematic diagram of two-level blocking with Divide = 2 according to an embodiment of the present application.
In some embodiments, only the primary blocks that contain objects in the scene are represented and stored, as far as possible. Scene data definition: Scene := { SceneStart := (x, y, z), SceneSize := (w, h, d), BlockSize := (BlockSizeX, BlockSizeY, BlockSizeZ), BlockMap := { Block := { Index, BlockData } } }, where Index represents the primary block index and BlockData represents the block data included in the primary block.
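For example, the primary block index and the secondary block index of a given position could be derived from SceneStart, BlockSize and the division level Divide roughly as follows; this is a sketch assuming axis-aligned blocks, and the function and parameter names are illustrative.

def block_indices(position, scene_start, block_size, divide):
    # Primary block index along each axis.
    primary = tuple(int((position[d] - scene_start[d]) // block_size[d]) for d in range(3))
    # Offset of the position inside its primary block.
    local = tuple(position[d] - scene_start[d] - primary[d] * block_size[d] for d in range(3))
    # Each axis of the primary block is split into 2**divide secondary blocks.
    sub_size = tuple(block_size[d] / (2 ** divide) for d in range(3))
    sub = tuple(min(int(local[d] // sub_size[d]), 2 ** divide - 1) for d in range(3))
    return primary, sub

# Usage: block_indices((35.0, 12.0, 4.0), (0.0, 0.0, 0.0), (32.0, 32.0, 32.0), 2)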
The scene data processing method provided in the present application will be described below with reference to fig. 6 to 20.
Fig. 6 is a schematic flowchart of a scenario data processing method 200 provided in an embodiment of the present application. The method 200 may be performed by an encoding end, which may be, for example, a scene data processing device as shown in fig. 2, which may be a terminal or a server as shown in fig. 1. The method 200 is described below in terms of a scene data processing device. The method 200 may be used to process data in a virtual scene.
As shown in fig. 6, the method 200 includes at least some of the following:
s210, obtaining an object instance data structure corresponding to the physical scene.
In some embodiments, the object instance data structure includes an object instance identification corresponding to each object instance in at least one object instance in the physical scene, an object identification of an object to which each object instance belongs, and object instance data for each object instance, the object instance data being used to describe location information and a distribution range of the object instance. That is, the object instance data in the object instance data structure is the true distribution data of the object instance. The at least one object instance may include some or all of the object instances in the physical scene.
Therefore, in the embodiment of the application, the object instance in the physical scene can be described by adopting a data structure of the object instance identifier, the object identifier (ObjectID) of the object to which the object instance belongs, and the object instance data, wherein the object instance data is used for describing the position information and the distribution range of the object instance, which is beneficial to realizing efficient management of the scene.
It should be understood that the present application is not limited to a specific physical scene, for example, may be a game scene, and the present application is not limited to a dimension of the physical scene, for example, may be two-dimensional or three-dimensional.
In some embodiments, the location information of the object instance may refer to a starting location coordinate of the object instance, and the distribution range of the object instance may refer to a distribution range of bounding boxes of the object instance, such as a width (denoted as w), a height (denoted as h), and a depth (denoted as d) of the bounding boxes.
In some embodiments, the target object instance may include at least one of a point object instance, a region object instance, and a voxel object instance.
In some embodiments, for a point object instance, the object instance data may be location coordinates of the point object instance.
In some embodiments, for a region object instance, in the case where the region object instance is a three-dimensional region object instance, the object instance data may include information of the plurality of faces that make up the three-dimensional region object instance. Optionally, each face may be represented by a point on the face, the normal vector of the plane, and a direction value of the region corresponding to the plane, where the direction value is used to determine whether a position lies inside with respect to that plane, the inside and outside being defined relative to the bounding box of the three-dimensional region object instance. In the case where the region object instance is a two-dimensional region object instance, the object instance data may include information of the plurality of line segments that make up the two-dimensional region object instance. Optionally, each line segment may be represented by the two endpoints of the line segment and a direction value of the region corresponding to the line segment, where the direction value is used to determine whether a position lies inside with respect to that line segment, the inside and outside being defined relative to the bounding box of the two-dimensional region object instance.
Optionally, the direction value of the region corresponding to a plane or line segment may take three values (e.g., 0, 1, 2), which are respectively used to indicate being inside the plane or line segment, outside the plane or line segment, or on the plane or line segment.
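Under the assumption that a point counts as inside the region only if it lies on the indicated side of every face, a point-in-region test for a three-dimensional region object instance might be sketched as follows; mapping the signed distance to the 0/1/2 direction values is an assumption about the convention above, and all names are illustrative.

def side_of_face(point, face_point, face_normal, eps=1e-6):
    # Signed distance of `point` to the plane defined by (face_point, face_normal):
    # 0 -> negative half-space, 1 -> positive half-space, 2 -> on the plane.
    d = sum((point[i] - face_point[i]) * face_normal[i] for i in range(3))
    if abs(d) <= eps:
        return 2
    return 0 if d < 0 else 1

def inside_region(point, faces):
    # `faces` is a list of (face_point, face_normal, direction_value) triples; the point is
    # treated as inside if every face reports its stored direction value or the point lies on the face.
    return all(side_of_face(point, p, n) in (dir_value, 2) for p, n, dir_value in faces)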
In some embodiments, for a voxel object instance, the object instance data may be X voxel block data respectively corresponding to Z consecutive voxel block indices, where X is less than or equal to Z and X and Z are positive integers. For example, the X voxel block data may be a single voxel block data, i.e. the Z voxel block indices correspond to the same voxel block data, or the X voxel block data may comprise Z voxel block data, each voxel block index corresponding to an independent voxel block data. Optionally, the voxel block data comprises distribution data of the voxel object instance, e.g. for determining the distribution range of the voxel object instance, for example using a first value (e.g. 0) and a second value (e.g. 1) to indicate whether the corresponding voxel block belongs to the voxel object instance: a voxel block with value 0 does not belong to it, and a voxel block with value 1 belongs to it. In other embodiments, the voxel block data may also include weight data of the voxel block, for example a floating point number whose greater value indicates a greater weight of the voxel block; for instance, if the voxel object is fog, the value may be the fog concentration, a greater value indicating a higher concentration. The voxel block data may also carry other meanings determined according to actual requirements, which is not limited in this application.
In some embodiments, the X voxel block data form a three-dimensional array in the case where the voxel object instance is a three-dimensional voxel object instance, and a two-dimensional array in the case where the voxel object instance is a two-dimensional voxel object instance.
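As a sketch of how such voxel block data could be queried for a world-space position, the following assumes a flat array with the x index varying fastest (the storage layout and the parameter names are assumptions).

def voxel_value(point, inst_start, voxel_size, counts, voxel_data):
    # Map a world-space point onto the voxel grid of an instance and return the stored
    # voxel block data (e.g. 0/1 occupancy or a weight), or None if the point is outside.
    idx = []
    for d in range(3):
        i = int((point[d] - inst_start[d]) // voxel_size[d])
        if i < 0 or i >= counts[d]:
            return None
        idx.append(i)
    flat = idx[0] + counts[0] * (idx[1] + counts[1] * idx[2])  # assumed x-fastest layout
    return voxel_data[flat]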
In some embodiments, the scene data processing apparatus may select a target object instance in the physical scene and derive point cloud data describing the position distribution of the target object instance. Alternatively, the vertex data of the physical scene model may be gridded to obtain grid data, or a target region in the physical scene may be selected (for example, with a volume or region drawing tool) and region data within the target region derived. Further, the scene data processing apparatus may process the point cloud data to obtain point object instance data, process the grid data to obtain voxel object instance data, and process the region data to obtain region object instance data.
In other embodiments, the scene data processing apparatus may grid the point cloud data and the region data to obtain grid data, and then process the grid data to obtain voxel object instance data. That is, the scene data processing apparatus may uniformly convert the point cloud data and the region data into grid data and further into voxel object instance data, which facilitates counting the proportion of object instances contained within a certain range of the scene.
Optionally, the text format of the point object instance is as follows:
Type:Tree 2
Name:Tree01 1
Name:Tree02 2
ID:1 10
Location:10,10,…
……
Location:50,50,…
ID:2 5
Location:20,20,…
……
where Type represents the type name of the scene in which the point object instances are located, Name represents the name of a point object instance, ID represents the object identifier of the object to which the point object instances belong, and Location represents the position coordinates of a point object instance.
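A tolerant reader for this text format might be sketched as follows; only the fields explained above are interpreted, the trailing counts visible in the example are left untouched, and the exact format details remain an assumption.

def read_point_instances(lines):
    # Collect Location entries under the most recently seen object ID.
    instances, current_id = [], None
    for line in lines:
        key, _, value = line.partition(":")
        if key == "ID":
            current_id = int(value.split()[0])                 # e.g. "ID:1 10" -> object 1
        elif key == "Location" and current_id is not None:
            coords = tuple(float(v) for v in value.strip().split(","))
            instances.append((current_id, coords))
    return instances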
Optionally, the text format of the region object instance is as follows:
Name:River ID:20
Type:2D
Count:4
Point00:x00,y00,z00 Point01:x01,y01,z01 dir:1
…,….
…,….
Point30:x30,y30,z30 Point31:x31,y31,z31 dir:1
wherein Name represents the Name of the region object instance, ID represents the object identifier of the object to which the region object instance belongs, type represents whether the region object is two-dimensional or three-dimensional, count is the number of line segments included in the region object instance, point00 and Point01 are two endpoints of the first line segment of the region object instance, … …, point30 and Point31 are two endpoints of the fourth line segment of the region object instance, and dir represents the direction of the region corresponding to the line segment.
Optionally, the text format of the voxel object instance is as follows:
Type:Material 10
Name:Grass01 ID:0
Name:Grass02 ID:2
Name:Dirt01 ID:3
……
Name:Concrete01 ID:9
Min:0.0,0.0,0.0
Max:99.0,99.0,0.0
Scale:1.0,1.0,1.0
Count:100,100,1
0 2 2 2 1 1 1 3 4 5…
0 2 2 1 1 1 1 3 4 4…
…………
2 3 3 6 6 1 1 1 1 7…
3 3 3 6 0 1 1 7 7 7…
wherein, type represents the scene Type Name where the voxel object instance is located, name represents the voxel object instance Name, ID represents the object identification of the object to which the voxel object instance belongs, min represents the minimum coordinate of the voxel object instance distribution, max represents the maximum coordinate of the voxel object instance distribution, scale represents the precision of the voxel object instance distribution, and Count represents the number of samples of the voxel object distribution.
S220, processing the object instance data structure to obtain an object data structure;
the object data structure includes at least one object type and object data corresponding to the at least one object, wherein the type of the object is a point object, a region object or a voxel object, and the object data (ObjectData) is used for indicating a distribution range of the object. That is, the object data in the object data structure is distribution data of the object.
Thus, in embodiments of the present application, the object may be described in terms of a data structure of object types and object data compositions.
In some embodiments, the distribution range of the object may refer to the distribution range of the bounding box of the object, such as the width (denoted w), height (denoted h), and depth (denoted d) of the bounding box.
In some embodiments, the scene data processing device may determine point object data from the point object instance data, region object data from the region object instance data, and voxel object data from the voxel object instance data. That is, point object instance data is used to initialize a point object, region object instance data is used to initialize a region object, and voxel object instance data is used to initialize a voxel object.
In some embodiments, the scene data processing apparatus may also uniformly initialize the point object instance data, the region object instance data, and the voxel object instance data into voxel object data. For example, the point object instance data and the region object instance data are gridded to obtain voxel object data.
In some embodiments, for a point object, the object data may be null, i.e., the position coordinates of the default point object are the origin of coordinates.
In some embodiments, for a region object, where the region object is a three-dimensional region object, the object data may include information of a plurality of faces constituting the three-dimensional region object. Alternatively, each plane may be represented by a point on the plane, the normal vector of the plane, and the direction value of the area corresponding to the plane. Alternatively, in the case where the area object is a two-dimensional area object, the object data may include information of a plurality of line segments constituting the two-dimensional area object. Alternatively, each line segment may be represented by two endpoints of the line segment and a direction value of a corresponding region of the line segment.
In some embodiments, for a voxel object, the object data may be X voxel block data corresponding to Z voxel block indices; for example, the X voxel block data may be a single voxel block data, i.e. the Z voxel block indices correspond to the same voxel block data, or the X voxel block data may include Z voxel block data, each voxel block index corresponding to an independent voxel block data. In some implementations, the voxel block data includes distribution data of the voxel object for indicating the distribution range of the voxel object, e.g. using a first value (e.g. 0) and a second value (e.g. 1) to indicate whether the corresponding voxel block belongs to the voxel object: a voxel block with value 0 does not belong to it, and a voxel block with value 1 belongs to it. In other implementations, the voxel block data may also include weight data of the voxel block, for example a floating point number whose greater value indicates a greater weight, or the voxel block data may also include data of other meanings of the voxel block, determined according to actual requirements, which is not limited in this application.
In some embodiments, the X voxel block data forms a three-dimensional array in the case where the voxel object is a three-dimensional voxel object, and the X voxel block data forms a two-dimensional array in the case where the voxel object is a two-dimensional voxel object.
S230, adding at least one object instance to the target scene according to the object instance data structure or the object data structure.
The target scene may be regarded as a management scene corresponding to the physical scene, and when the object instance in the physical scene needs to be managed, management may be performed based on the target scene.
In some embodiments, the target scene has a start position coordinate and a size, and the scene data processing apparatus may add the object instances in the object instance data structure to the target scene according to the start position coordinate and size of the target scene and the position information and distribution range of each object instance in the object instance data structure. Alternatively, the scene data processing apparatus may transform (e.g. translate, scale, rotate) the object data in the object data structure and add the transformed object instances to the target scene.
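For instance, the primary blocks overlapped by an instance's transformed bounding box could be determined along the following lines; this sketch assumes axis-aligned bounding boxes, a scene origin at (0, 0, 0), and ignores rotation, and the names are illustrative.

def overlapped_primary_blocks(inst_location, inst_scale, object_size, block_size):
    # Scale the object's bounding box, place it at the instance location, and list
    # the primary block indices (ix, iy, iz) covered by the resulting box.
    box_min = inst_location
    box_max = tuple(inst_location[d] + object_size[d] * inst_scale[d] for d in range(3))
    lo = [int(box_min[d] // block_size[d]) for d in range(3)]
    hi = [int(box_max[d] // block_size[d]) for d in range(3)]
    return [(ix, iy, iz)
            for ix in range(lo[0], hi[0] + 1)
            for iy in range(lo[1], hi[1] + 1)
            for iz in range(lo[2], hi[2] + 1)]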
S240, performing two-level block management on the target scene, and determining a scene data structure corresponding to the target scene according to the distribution of object instances in the target scene after two-level blocking.
For example, the target scene is first divided into primary blocks according to a primary block size (denoted BlockSize = (BlockSizeX, BlockSizeY, BlockSizeZ)), where BlockSize represents the primary block size, and BlockSizeX, BlockSizeY and BlockSizeZ represent the primary block size on the x-axis, y-axis and z-axis respectively. Further, secondary blocking may be performed independently on each primary block; for example, a primary block may be further divided into M secondary blocks according to the division level Divide of the secondary blocking. The specific division manner refers to the related description of the foregoing embodiments and is omitted here for brevity.
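As a rough sketch of the threshold-driven choice described earlier, the division level of a primary block could be increased until each secondary block is expected to hold fewer instances than a threshold; the uniform-distribution assumption, the doubling strategy and the cap below are all assumptions.

def choose_divide(instance_count, threshold, max_divide=4):
    # Increase the division level until each secondary block is expected to hold fewer
    # instances than the threshold (a uniform distribution is assumed in this sketch).
    divide = 0
    while divide < max_divide and instance_count / (8 ** divide) > threshold:
        divide += 1   # each level splits every axis in two, i.e. 8x more blocks in 3D
    return divide     # 0 means the primary block is not subdivided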
In some embodiments, each primary block in the target scene corresponds to a primary block index, denoted Index, and each secondary block corresponds to a secondary block index, denoted SubIndex.
In some embodiments, the scene data structure corresponding to the target scene is used to store at least one of the following information:
an object instance table corresponding to the target scene, used for indicating information of the object instances included in the target scene;
primary block data corresponding to each primary block in the target primary blocks of the target scene, for example an object instance array (denoted ObjectInstArray) corresponding to each primary block, used for describing information of the object instances included in the primary block;
secondary block data corresponding to the secondary blocks in the target primary blocks, used for describing information of the object instances included in the secondary blocks, for example a sequence number array (denoted SeqArray) corresponding to each secondary block in a primary block, used for indicating the sequence numbers, in the object instance array corresponding to the primary block, of the object instance data included in the secondary block.
It should be understood that the storage formats such as the table and the array used for the object instance information are merely examples, but the application is not limited thereto, and other equivalent modification manners fall within the protection scope of the application, for example, the object instance array corresponding to the first-level partition may also be stored in a table manner, the object instance table may also be stored in an array manner, and the sequence number array may also explicitly indicate the information of the object instance included in the second-level partition.
In some embodiments, the object instance table includes at least one of the following information:
Object identification (ObjectID) of an object to which each object instance included in the target scene belongs, object instance data of each object instance, the object instance data being transformation data of the object instance with respect to the object to which the object belongs, for example, including at least one of a start position coordinate of the object instance (or, a start position offset of the object instance with respect to the object to which the object belongs), a scaling factor of each object instance with respect to the object to which the object belongs, and a rotation vector of said each object instance with respect to the object to which the object belongs. Optionally, the object instance table may also include an object instance identifier corresponding to the object instance.
That is, the object instance table stores the transformation data of each object instance relative to the object to which it belongs. With this storage mode, only the transformation data of each object instance relative to its object needs to be stored, and the real distribution data of each object instance can be obtained by combining the transformation data with the object data; the real distribution data of each object instance does not need to be stored, so the storage cost can be reduced.
In some embodiments, a target primary block is a primary block in the target scene that includes object instance data, i.e. only the primary blocks that include object instance data are represented and stored.
In some embodiments, the object instance array corresponding to a primary block includes the following information: an object instance identifier (ObjectInstID) corresponding to each of at least one object instance included in the primary block, and object instance data (ObjectInstData) corresponding to each of these object instances.
In some embodiments, an object instance identifier in the object instance array points to a set of object instance information (including the object identifier and transformation data) in the object instance table. The object instance identifier may indicate the sequence number of the set of object instance information in the object instance table; for example, object instance identifier 1 points to the first set of object instance information in the object instance table, and object instance identifier 2 points to the second set of object instance information in the object instance table.
In some embodiments, in the object instance array corresponding to a primary block, the object instance data is null for a point object instance.
In some embodiments, in the object instance array corresponding to a primary block, for a region object instance the object instance data is used to indicate the data of the region object instance located in the primary block. If at least one face or line segment of the region object instance is located in the primary block or in a secondary block of the primary block, the object instance array corresponding to the primary block may include at least one set of object instance data, where the at least one set of object instance data corresponds to the same region object instance identifier, and each set of object instance data includes one region object instance identifier and one face index or line segment index indicating the face or line segment of the region object instance that lies in the primary block.
In some embodiments, in the object instance array corresponding to a primary block, for a voxel object instance, if at least one voxel block of the voxel object instance is located in the primary block or in a secondary block of the primary block, either of the following manners may be adopted for its representation:
Mode 1: the object instance array corresponding to the primary block includes one set of object instance data for the voxel object instance, specifically a voxel object instance identifier and a start index (StartIndex) of the at least one voxel block;
Mode 2: the object instance array corresponding to the primary block includes Y sets of object instance data for the voxel object instance, corresponding to the Z voxel blocks of the voxel object instance, where each set of object instance data includes a voxel object instance identifier and voxel block data, and the voxel block data may include distribution data of the voxel object instance, for example for indicating whether the corresponding voxel block belongs to the voxel object instance, where Y is less than or equal to Z and Y and Z are positive integers. Optionally, in the case where the voxel block data corresponding to the Z voxel blocks are the same, the Y sets of object instance data may be a single set of object instance data; in the case where the voxel block data corresponding to the Z voxel blocks differ, the Y sets of object instance data may include Z sets of object instance data in one-to-one correspondence with the Z voxel blocks.
In the embodiment of the present application, a set of object instance data may be considered to correspond to a sequence number, i.e., a set of object instance data may be referenced by its sequence number.
In some embodiments, the sequence number array corresponding to the second-level partition is used to indicate at least one instance sequence number (denoted as Seq), where each instance sequence number indicates the sequence number, in the object instance array corresponding to the first-level partition, of object instance data included in the second-level partition. For example, if the sequence number array includes instance sequence number 1, the second-level partition includes the first set of object instance data in the object instance array corresponding to the first-level partition. For another example, if the sequence number array includes instance sequence numbers 1 and 2, the second-level partition includes the first and second sets of object instance data in the object instance array corresponding to the first-level partition.
In some embodiments, when voxel object instance data in a primary partition is represented in mode 1, the sequence number arrays corresponding to the secondary partitions may all indicate the same sequence number, i.e., the sequence number corresponding to the start index. Alternatively, the sequence number arrays corresponding to the secondary partitions of the primary partition may be omitted from the scene data structure and the scene file. In this case, the object file needs to include the object data of the voxel object to which the instance belongs, where the object data includes the distribution data of the voxel object, and which voxel blocks a secondary partition specifically includes may be determined from the voxel block corresponding to StartIndex and the object data (e.g., voxel block size, distribution data) of the voxel object.
In some embodiments, when voxel object instance data in a primary partition is represented in mode 2, the sequence number array corresponding to a secondary partition may point to one or more sequence numbers in the object instance array, and the voxel block data indicated by those sequence numbers determines whether the corresponding voxel blocks are included in the secondary partition. In this case, the distribution data of the voxel object need not be stored in the object file; optionally, other data of the voxel object, such as weight data, may be stored in the object file.
Fig. 7 is a schematic diagram of a scene data structure provided in an embodiment of the present application. As shown in fig. 7, the scene data structure may include the following information:
An object instance table corresponding to the target scene, used for indicating information of at least one object instance included in the target scene, for example the object identification of the object to which each object instance belongs and the transformation data of each object instance relative to that object. Each object instance points to the object to which it belongs through the object identifier, and the real distribution data corresponding to each object instance can be determined from the object data of that object and the transformation data of the object instance relative to it;
An object instance array (i.e., ObjectInstArray) corresponding to each target primary partition in the target scene, wherein the object instance array is used for indicating information of the object instances included in the primary partition, and an object instance identifier in the object instance array points to one object instance in the object instance table;
and a sequence number array (namely SeqArray) corresponding to the second-level block in the first-level block, wherein the sequence number array points to a member in the object instance array, and the sequence number array can comprise at least one instance sequence number (Seq) which represents the sequence number of the object instance data included in the second-level block in the object instance array.
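As an illustration only, the relationship among the object instance table, the object instance arrays, and the sequence number arrays described above may be sketched in code as follows. The Python class and field names are hypothetical stand-ins for the names used in this embodiment (ObjectInstID, ObjectInstData, Seq) and are not part of the described format.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class ObjectInstance:                 # one row of the object instance table
    object_id: int                    # ObjectID of the object this instance belongs to
    transform: Tuple[float, ...]      # transformation data (location, scale, rotation)

@dataclass
class ObjectInstEntry:                # one member of an ObjectInstArray
    object_inst_id: int               # sequence number of a row in the object instance table
    object_inst_data: Optional[object] = None   # null for point objects; face/line index or voxel data otherwise

@dataclass
class PrimaryBlock:
    index: Tuple[int, int, int]       # first-level block index
    object_inst_array: List[ObjectInstEntry] = field(default_factory=list)
    seq_arrays: Dict[Tuple[int, int, int], List[int]] = field(default_factory=dict)  # SubIndex -> SeqArray

@dataclass
class SceneData:
    object_inst_table: List[ObjectInstance] = field(default_factory=list)
    primary_blocks: Dict[Tuple[int, int, int], PrimaryBlock] = field(default_factory=dict)  # only blocks with instances

In this sketch a SeqArray entry is simply an index into the owning block's object_inst_array, mirroring the Seq values described above.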
In some embodiments, the scene data processing device may determine the first-level partition to which an object instance belongs according to the location information and distribution range of the object instance, and may further determine, according to the partition level of the second-level partitions, the second-level partition to which the object instance belongs within that first-level partition, so as to obtain the distribution of the object instance over the first-level and second-level partitions, thereby obtaining a scene data structure describing the distribution of object instances in the target scene and its first-level and second-level partitions. It should be understood that in the embodiments of the present application, one object instance may belong to one or more first-level partitions, and likewise may belong to one or more second-level partitions, which is not limited in the present application.
In some embodiments, for a point object instance, the scene data processing apparatus may determine the first-level partition where the point object instance is located (denoted as first-level partition n) according to the position coordinates of the point object instance, and determine the second-level partition where it is located (denoted as second-level partition m) according to the partition level of the second-level partitions, so that the point object instance is allocated to second-level partition m in first-level partition n. Further, the point object instance may be added to the object instance array corresponding to first-level partition n; for example, if the sequence number of the point object instance in the object instance array is 1, i.e., the point object instance is the first object instance in the object instance array, then sequence number 1 (i.e., Seq=1) is included in the sequence number array corresponding to second-level partition m, indicating that second-level partition m includes the first object instance in the object instance array corresponding to first-level partition n. The partition data of first-level partition n may be expressed as:
BlockData = { Index, ObjectArray = { PointObjectInstID }, Devide, { SubIndex, Seq=1 } }. Here ObjectArray represents the object instance array corresponding to first-level partition n, PointObjectInstID represents the point object instance identifier, and Seq=1 indicates that the object instance included in the second-level partition m corresponding to SubIndex is the first object instance in the object instance array. In some embodiments, the partition level of the second-level partitions is a threshold on the number of object instances included in one second-level partition; the scene data processing apparatus may control the division of second-level partitions through this threshold, so that the number of object instances in one second-level partition does not exceed the threshold, facilitating fast lookup of object instances within a region later. Optionally, the scene data processing apparatus may further subdivide a second-level partition when the number of object instances it includes exceeds the threshold.
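Following the point-object example above, a simplified sketch of assigning a point object instance to its first-level and second-level partition is given below. The axis-aligned, equally sized blocks, the dict layout, and the helper name add_point_instance are assumptions made for illustration; sequence numbers are counted from 1 to match the Seq=1 convention in the text.

def add_point_instance(scene_blocks, inst_id, position,
                       scene_start=(0.0, 0.0, 0.0),
                       block_size=(64.0, 64.0, 64.0),
                       sub_block_size=(16.0, 16.0, 16.0)):
    """Assign a point object instance to the first-level / second-level partition containing it."""
    # first-level block index along each axis
    bx, by, bz = (int((p - s) // b) for p, s, b in zip(position, scene_start, block_size))
    block = scene_blocks.setdefault((bx, by, bz), {"ObjectArray": [], "Devide": {}})
    # a point object instance carries no ObjectInstData
    block["ObjectArray"].append({"ObjectInstID": inst_id, "ObjectInstData": None})
    seq = len(block["ObjectArray"])          # 1-based sequence number of the new entry
    # second-level block (sub-block) index inside the first-level block
    sx, sy, sz = (int(((p - s) % b) // sb)
                  for p, s, b, sb in zip(position, scene_start, block_size, sub_block_size))
    block["Devide"].setdefault((sx, sy, sz), []).append(seq)
    return (bx, by, bz), (sx, sy, sz), seq

# Example: a point at (70, 10, 5) lands in first-level block (1, 0, 0), sub-block (0, 0, 0), with Seq=1.
blocks = {}
print(add_point_instance(blocks, inst_id=0, position=(70.0, 10.0, 5.0)))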
In some embodiments, for a region object instance, the scene data processing apparatus may determine, according to the positions and distribution ranges of the faces or line segments of the region object instance, the overlap between each first-level partition in the target scene and each face or line segment. If a first-level partition overlaps at least one face or line segment of the region object instance, that first-level partition may be considered to include the at least one face or line segment. Further, the second-level partitions may be divided according to the partition level of the second-level partitions, the overlap between each second-level partition and each face or line segment may be calculated, and if a second-level partition overlaps at least one face or line segment, that second-level partition is considered to include the at least one face or line segment of the region object instance. After determining the first-level and second-level partitions to which each face or line segment of the region object instance belongs, each face or line segment is allocated to the corresponding first-level and second-level partitions, and the object instance data of the corresponding first-level partition and the sequence number arrays of the second-level partitions are updated. If a first-level partition includes k faces of a region object instance, with face k located in a first second-level partition and face k-1 located in a second second-level partition, the partition data of the first-level partition may be expressed as:
BlockData = { Index, ObjectArray = { { RegionObjectInstID, { Face(k) } }, { RegionObjectInstID, { Face(k-1) } }, … }, Devide, { SubIndex0, Seq=1 }, { SubIndex1, Seq=2 }, … }. Here ObjectArray represents the object instance array corresponding to the first-level partition, RegionObjectInstID represents the region object instance identifier, Seq=1 indicates that the second-level partition corresponding to SubIndex0 includes the first object instance in the object instance array, i.e., face k of the region object instance, and Seq=2 indicates that the second-level partition corresponding to SubIndex1 includes the second object instance in the object instance array, i.e., face k-1 of the region object instance.
In some embodiments, for a voxel object instance, the scene data processing device may determine, according to the distribution range of the voxel object instance, the overlap between each first-level partition in the target scene and each voxel block of the voxel object instance. If a first-level partition overlaps at least one voxel block of the voxel object instance, that first-level partition may be considered to include the at least one voxel block. Further, according to the partition level of the second-level partitions, the overlap between each second-level partition and each voxel block may be calculated, and if a second-level partition overlaps at least one voxel block of the voxel object instance, that second-level partition is considered to include the at least one voxel block. The voxel block data in each first-level and second-level partition may then be determined according to the distribution of the voxel blocks of the voxel object instance over the first-level and second-level partitions, and the object instance data arrays of the corresponding first-level and second-level partitions are updated. Optionally, when performing the second-level division, the second-level partitions may be aligned with the voxel blocks, so as to facilitate lookup of the voxel object. In some embodiments, the partition data of a first-level partition that includes a voxel object instance may be expressed as:
BlockData = { Index, ObjectArray = { VoxelObjectInstID, StartIndex }, Devide, { SubIndex0, Seq=1 }, … , { SubIndexn, Seq=1 } }. Here ObjectArray represents the object instance array corresponding to the first-level partition and VoxelObjectInstID represents the voxel object instance identifier; in this case, the second-level partitions corresponding to SubIndex0, …, SubIndexn all point to the same sequence number, i.e., Seq=1.
In some embodiments of the present application, the scene data processing device may delete an object instance in the target scene according to the scene data structure corresponding to the target scene. For example, when deleting an object instance, the object identifier of the object to which the object instance belongs may be deleted from the object instance table, or set to an invalid value, so that when the object to which the object instance belongs is queried, the missing object reference indicates that the object instance has been deleted.
In some embodiments, the scene data processing apparatus may also query the object instance in the target scene according to the scene data structure corresponding to the target scene. For example, the method 200 further comprises:
acquiring a range to be queried;
determining a target second-level partition in the target scene according to the range to be queried, wherein the range to be queried covers at least one first-level partition, and each first-level partition comprises at least one second-level partition;
acquiring, according to the scene data structure corresponding to the target scene, a target object instance included in the target second-level partition and the object instance data corresponding to the target object instance, for example, determining the target object instance and its object instance data (namely, the transformation data of the target object instance relative to the object to which it belongs) according to the object instance array corresponding to the first-level partition, the sequence number array corresponding to the second-level partition, and the object instance table;
acquiring object data of an object to which the target object instance belongs according to the object data structure;
determining distribution data of the target object instance according to object instance data corresponding to the target object instance and object data of an object to which the target object instance belongs, wherein the distribution data is used for describing position information and a distribution range of the target object instance;
and determining whether the range to be queried comprises the target object instance according to the distribution data of the target object instance.
In some embodiments, the range to be queried is a point coordinate Position=(x, y, z), and the scene data processing apparatus may query the point object instances included at that point coordinate. For example, the scene data processing apparatus may first determine the first-level partition in which the point coordinate is located, then determine the second-level partition in which the point coordinate is located according to the partition level of the second-level partitions, then acquire the sequence number array corresponding to that second-level partition, and acquire the object instances pointed to by the sequence number array from the object instance array corresponding to the first-level partition. For each such object instance, the object identifier of the object to which it belongs and its object instance data (i.e., transformation data) are acquired from the object instance table, the object data of that object is acquired from the object data structure according to the object identifier, and the real distribution data of the object instance is then determined from its object instance data (i.e., transformation data) and the object data of the object to which it belongs, thereby determining whether the point coordinate includes the object instance. For example, if the real distribution data of the object instance coincides with the point coordinate, the point coordinate is determined to include the object instance; otherwise it does not.
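The lookup chain described above (block, sub-block, SeqArray, ObjectInstArray, object instance table, object data) may be sketched as follows. The container layouts and the helper name query_point_objects are assumptions for illustration; ObjectInstID is treated here as a zero-based row index into the object instance table, while sequence numbers are 1-based as in the text.

def query_point_objects(position, scene_blocks, object_inst_table, object_table,
                        scene_start=(0.0, 0.0, 0.0),
                        block_size=(64.0, 64.0, 64.0),
                        sub_block_size=(16.0, 16.0, 16.0)):
    """Return the IDs of point object instances whose distribution coincides with the queried point."""
    bx, by, bz = (int((p - s) // b) for p, s, b in zip(position, scene_start, block_size))
    block = scene_blocks.get((bx, by, bz))
    if block is None:
        return []
    sx, sy, sz = (int(((p - s) % b) // sb)
                  for p, s, b, sb in zip(position, scene_start, block_size, sub_block_size))
    hits = []
    for seq in block["Devide"].get((sx, sy, sz), []):      # SeqArray of the second-level partition
        entry = block["ObjectArray"][seq - 1]              # 1-based sequence numbers
        inst = object_inst_table[entry["ObjectInstID"]]    # object identifier + transformation data
        obj = object_table[inst["ObjectID"]]               # object data of the object it belongs to
        # for a point object, the real distribution is just the transformed start location
        if obj["Type"] == "point" and tuple(inst["Location"]) == tuple(position):
            hits.append(entry["ObjectInstID"])
    return hits

# Example: one point object instance located exactly at (70, 10, 5)
table = [{"ObjectID": 0, "Location": (70.0, 10.0, 5.0)}]
objects = {0: {"Type": "point"}}
blk = {(1, 0, 0): {"ObjectArray": [{"ObjectInstID": 0, "ObjectInstData": None}],
                   "Devide": {(0, 0, 0): [1]}}}
print(query_point_objects((70.0, 10.0, 5.0), blk, table, objects))   # [0]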
In some embodiments, the scene data processing apparatus may also query the region type corresponding to the point coordinates according to the scene data structure corresponding to the target scene.
For example, the scene data processing apparatus may first determine the first-level partition where the point coordinate is located, and further determine the second-level partition where the point coordinate is located according to the partition level of the second-level partitions. It may then obtain the sequence number array corresponding to that second-level partition, obtain the object instance pointed to by the sequence number array from the object instance array corresponding to the first-level partition, obtain the object identifier of the object to which the object instance belongs and its object instance data (i.e., transformation data) from the object instance table, and obtain the object data of that object from the object data structure according to the object identifier. The real distribution data of the object instance is then determined from the object instance data (i.e., transformation data) of the object instance and the object data of the object to which it belongs, region judgment is performed according to the real distribution data, and the region in which the point coordinate is located is determined. It should be understood that the algorithm used for the region judgment is not limited in this application.
In some embodiments, the range to be queried may be a sphere of radius R centered at point coordinate Position=(x, y, z). To determine the point object instances or voxel object instances included in the sphere, the scene data processing apparatus may first determine the first-level partition in which the point coordinate is located, then determine the other first-level partitions covered by the sphere of radius R to obtain a first-level partition set, and acquire from the scene data structure the sequence number arrays corresponding to the second-level partitions in each first-level partition of the set, thereby determining the object instances included in each second-level partition; the object instances included in these second-level partitions are then collected to determine the point object instances or voxel object instances included in the sphere.
S250, performing coding processing on the scene data structure corresponding to the target scene to obtain a scene file, and performing coding processing on the object data structure to obtain an object file.
Further, the scene file and the object file are sent to a decoding end.
It should be understood that the encoding algorithm adopted for encoding the scene data structure and the object data structure is not particularly limited in this application. For example, the sequence number arrays corresponding to the second-level partitions and the {Voxel(k)} data corresponding to voxel objects are mostly repetitive data with very high redundancy, and lossless compression algorithms such as run-length encoding (RLE), Huffman coding, or arithmetic coding may be adopted for compression encoding.
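As one example of the lossless compression mentioned above, run-length encoding of a highly repetitive sequence number array could look like the generic RLE sketch below; this is an illustration only, not the specific codec used by the application.

def rle_encode(values):
    """Encode a list as (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

# A grid-represented SeqArray in which every sub-block points to the same sequence number
seq_array = [1] * 64
assert rle_decode(rle_encode(seq_array)) == seq_array
print(rle_encode(seq_array))   # [(1, 64)]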
In some embodiments, the scene data and the object data may be stored independently, and the object data may be shared by different scenes, as shown in fig. 8. In this way, the real distribution data of each object instance need not be stored in the scene data of the different scenes; only the transformation data of the object instance relative to its object is stored, and the real distribution data of the object instance can be obtained by further querying the object file and combining the result with the transformation data, which reduces storage cost.
Hereinafter, the contents of the object file are described in connection with the specific embodiment.
In some embodiments, the object file includes description information of at least one object and object data of each of the at least one object, where the object data is the distribution data of the object, for example describing the distribution range of the object, and more specifically, for example, the distribution range of the bounding box of the object. Optionally, the object data may further include location information of the object, for example, a start location coordinate. The at least one object includes at least one of the following types of object: a point object, a region object, or a voxel object. Fig. 9 is a schematic structural diagram of an object file according to an embodiment of the present application.
In some embodiments, the descriptive information includes at least one of the following information:
the object identification, the object type, and the object data start offset of each of the at least one object, where the object data start offset is used to indicate the start offset of that object's data in the object file. This can be understood as the object file including an index table of objects, in which the index key is the ObjectID and the index value is the object type and the start offset of the object data.
In some embodiments, the object data may include distribution data of the object, for example, to describe a distribution range of the object.
Optionally, user data of each object may also be stored in the object file to support expansion of the object data by the user.
Fig. 10 is a schematic diagram of a storage structure of object data according to an embodiment of the present application. The start position of the object data is determined according to the offset of the object data; an object data length field is read from this start position to obtain the object data length L1, and data of length L1 following the object data length field is read as the object data. Optionally, a user data length field follows the object data, indicating a user data length L2, and data of length L2 following the user data length field is read as the user data.
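The layout of fig. 10 can be read with simple length-prefixed parsing; below is a sketch assuming little-endian 32-bit length fields, since the exact field widths are not specified above and are an assumption here.

import struct

def read_object_record(buf, offset):
    """Read (object_data, user_data) starting at the object data offset in the object file."""
    (l1,) = struct.unpack_from("<I", buf, offset)       # object data length L1
    offset += 4
    object_data = buf[offset:offset + l1]
    offset += l1
    user_data = b""
    if offset + 4 <= len(buf):                          # optional user data length L2
        (l2,) = struct.unpack_from("<I", buf, offset)
        offset += 4
        user_data = buf[offset:offset + l2]
    return object_data, user_data

# Example record: L1=3 bytes of object data followed by L2=2 bytes of user data
record = struct.pack("<I", 3) + b"abc" + struct.pack("<I", 2) + b"xy"
print(read_object_record(record, 0))   # (b'abc', b'xy')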
The following describes the representation of object data in combination with different object types.
In some embodiments, if the object type is a point object, the object data of the point object is null, and the start position coordinate of the point object may be considered to default to the origin of coordinates.
In some embodiments, if the object type is a region object: in the case that the region object is a three-dimensional region object (denoted as type 1), the object data of the three-dimensional region object includes, for each of the faces constituting the region object, the position of one point on the face, the normal vector of the face, and the direction of the region corresponding to the face (used for determining whether a position lies within the region, or in other words within the bounding box of the three-dimensional region object), as shown in fig. 11; in the case that the region object is a two-dimensional region object (denoted as type 0), the object data of the two-dimensional region object includes, for each of the line segments constituting the region object, the position of one point on the line segment and the direction of the region corresponding to the line segment (used for determining whether a position lies within the region, or in other words within the bounding box of the two-dimensional region object), as shown in fig. 12.
For a three-dimensional region object, it may be defined as: Region = { ObjectID, Faces = { Face(k) = { Position = (x, y, z), Normal = (u, v, w) } } }, that is, the object data is the set Faces of faces constituting the three-dimensional region object, and each Face(k) can be represented by a point (x, y, z) on the plane and the plane normal vector (u, v, w). Further, to facilitate region judgment, the direction value dir of the region corresponding to the face may be added.
For a two-dimensional region object, it may be defined as: Region = { ObjectID, Lines = { Line(k) = { (x1, y1, z1), (x2, y2, z2) } } }, that is, the object data is the set Lines of line segments constituting the two-dimensional region object, and each Line(k) can be represented by the two endpoints (x1, y1, z1) and (x2, y2, z2) of the line segment. Further, to facilitate region judgment, the direction value dir of the region corresponding to the line segment may be added.
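Under the face representation above, a point can be tested against a three-dimensional region by checking it against every face's plane. The sketch below assumes dir = +1 means the region lies on the side the normal points to (the sign convention is an assumption), and is only valid for convex regions bounded by planes.

def point_in_region(point, faces):
    """faces: list of dicts {"Position": (x, y, z), "Normal": (u, v, w), "Dir": +1 or -1}."""
    for face in faces:
        px, py, pz = face["Position"]
        nx, ny, nz = face["Normal"]
        # signed distance of the queried point from the face's plane
        d = (point[0] - px) * nx + (point[1] - py) * ny + (point[2] - pz) * nz
        if d * face["Dir"] < 0:          # point lies on the wrong side of this face
            return False
    return True

# Unit cube [0, 1]^3 described by six faces whose region lies along the normal direction
cube = [
    {"Position": (0, 0, 0), "Normal": (1, 0, 0), "Dir": 1},   # x >= 0
    {"Position": (1, 0, 0), "Normal": (-1, 0, 0), "Dir": 1},  # x <= 1
    {"Position": (0, 0, 0), "Normal": (0, 1, 0), "Dir": 1},
    {"Position": (0, 1, 0), "Normal": (0, -1, 0), "Dir": 1},
    {"Position": (0, 0, 0), "Normal": (0, 0, 1), "Dir": 1},
    {"Position": (0, 0, 1), "Normal": (0, 0, -1), "Dir": 1},
]
print(point_in_region((0.5, 0.5, 0.5), cube), point_in_region((2, 0, 0), cube))  # True False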
In some embodiments, if the object type is a voxel object, the object data of the voxel object includes at least one of a size (Size) of the voxel object, a number X of voxel blocks, and X voxel block data, where the number of voxel block data may be one or plural. For example, the voxel block data may include distribution data of a voxel block, e.g., indicating whether the voxel block belongs to the voxel object, or may include weight data of a voxel block, e.g., indicating the weight of the voxel block.
Fig. 13 is a schematic diagram of a storage structure of object data of a voxel object according to an embodiment of the present application. The X voxel block data correspond to Z voxel block indices, where the Z voxel block indices are consecutive. For example, the X voxel block data may be a single voxel block data, i.e., the Z voxel block indices correspond to the same voxel block data, or the X voxel block data may comprise Z voxel block data, with each voxel block index corresponding to an independent voxel block data. It should be appreciated that fig. 13 illustrates only 16 voxel blocks as an example, but the application is not limited thereto, and a voxel object may include other numbers of voxel blocks.
In some embodiments, voxel block data may be used to indicate whether the current voxel block belongs to the voxel object, e.g., a value of 0 indicates that it does not belong to the voxel object and a value of 1 indicates that it does, or it may indicate the weight or degree of a voxel block, e.g., a floating-point value x, where a larger value indicates a larger weight.
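A voxel object's data as described above (size, number of voxel blocks, per-block data) might be held as in the sketch below, which assumes a regular grid of voxel blocks inside the object's bounding box and an x-fastest linear ordering of block indices; both assumptions are for illustration only.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VoxelObject:
    size: Tuple[float, float, float]       # Size of the voxel object (bounding box extent)
    grid: Tuple[int, int, int]             # voxel blocks per axis; total block count X = gx * gy * gz
    block_data: List[float]                # per-block data: 0/1 membership or a weight

    def block_value(self, ix, iy, iz):
        """Value for voxel block index (ix, iy, iz)."""
        gx, gy, _ = self.grid
        if len(self.block_data) == 1:      # a single value shared by all voxel blocks
            return self.block_data[0]
        return self.block_data[ix + iy * gx + iz * gx * gy]

obj = VoxelObject(size=(4.0, 4.0, 1.0), grid=(4, 4, 1), block_data=[1.0])
print(obj.block_value(3, 2, 0))   # 1.0 -- every block belongs to the object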
In some embodiments of the present application, the scene file includes at least one of:
a file header for indicating description information of the target scene and offset of other parts in the scene file;
an object instance table corresponding to the target scene, used for indicating information of the object instances included in the target scene;
block description information of the target scene, used for indicating index data of the first-level blocks in the scene file;
and block data information of the target scene, used for indicating the block data of the first-level blocks in the target scene.
In some embodiments, the description information of the target scene is used to describe the starting position of the scene, the size of the scene, the first-level block size, the number of first-level blocks, the number of segments of first-level blocks, and the like.
In some embodiments, as shown in fig. 14, the header of the scene file includes at least one of:
the start position SceneStart = (x, y, z) of the target scene;
size SceneSize of target scene = (w, h, d);
size BlockSize of the first level block = (BlockSizeX, blockSizeY, blockSizeZ);
the number of the first-level blocks is BlockCount= (BlockCountX, blockCountY, blockCountZ), wherein BlockCountX represents the number of the first-level blocks on the X axis, blockCountY represents the number of the first-level blocks on the Y axis, and BlockCountZ represents the number of the first-level blocks on the Z axis;
the number SegCount of first-level blocks included in one segment (also called the segment size);
a first start offset (denoted OffsetObjectInst), used to indicate the start offset of the object instance table in the scene file;
a second start offset (denoted OffsetBlockIndex), used to indicate the start offset of the block description information of the target scene in the scene file;
a third start offset (denoted OffsetBlockData), used to indicate the start offset of the block data information of the target scene in the scene file.
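The header fields listed above could be serialized with fixed-width fields, for example as in the sketch below, which assumes little-endian float/uint32 encodings; the actual field widths and ordering are not mandated by the description and are assumptions here.

import struct

HEADER_FMT = "<3f3f3f3iI3I"   # SceneStart, SceneSize, BlockSize, BlockCount, SegCount, 3 start offsets

def pack_scene_header(start, size, block_size, block_count, seg_count,
                      offset_object_inst, offset_block_index, offset_block_data):
    return struct.pack(HEADER_FMT, *start, *size, *block_size, *block_count,
                       seg_count, offset_object_inst, offset_block_index, offset_block_data)

def unpack_scene_header(buf):
    v = struct.unpack_from(HEADER_FMT, buf, 0)
    return {"SceneStart": v[0:3], "SceneSize": v[3:6], "BlockSize": v[6:9],
            "BlockCount": v[9:12], "SegCount": v[12],
            "OffsetObjectInst": v[13], "OffsetBlockIndex": v[14], "OffsetBlockData": v[15]}

hdr = pack_scene_header((0, 0, 0), (1024, 1024, 256), (64, 64, 64), (16, 16, 4),
                        seg_count=4, offset_object_inst=64,
                        offset_block_index=4096, offset_block_data=65536)
print(unpack_scene_header(hdr)["BlockCount"])   # (16, 16, 4)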
In some embodiments, as shown in FIG. 14, the object instance table includes at least one of the following information:
the object identification of the object to which each object instance included in the target scene belongs, and the transformation data of each object instance relative to the object to which it belongs, for example, the start position coordinate of the object instance (denoted Location = (x, y, z)), the scaling factor of the object instance relative to its object (denoted Scale = (ScaleX, ScaleY, ScaleZ)), and the rotation vector of the object instance relative to its object (denoted Rotation = (RotateX, RotateY, RotateZ)). Optionally, an object instance identification may also be included.
In some embodiments, the tile description information of the target scene includes at least one of the following information:
hierarchical data corresponding to each of the L hierarchies in the target scene, segmented data corresponding to each of the segments in the hierarchy, and index data corresponding to each of the first-level segments in the segments;
The target scene is divided into L layers, each layer comprises at least one segment, each segment comprises at least one primary segment, and L is a positive integer.
For example, as shown in fig. 15, the target scene may be divided into at least one layer along the Z-axis direction, each plane formed by x-y being one layer, so that the layer index ranges from 0 to BlockCountZ-1, and the index of a first-level block (x, y, z) within each layer is calculated separately. For a first-level block (x, y, z), the layer index z in which the first-level block is located may be calculated according to the first-level block size BlockSizeZ along the Z axis, and the intra-layer index of the first-level block within layer z is then calculated.
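The layering described above can be computed directly from a first-level block's coordinates; the sketch below takes the intra-layer index as y * BlockCountX + x, which is an assumed ordering for illustration since the exact intra-layer ordering is not specified.

def block_layer_index(block_xyz, block_count):
    """Return (layer index z, index of the first-level block inside that layer)."""
    x, y, z = block_xyz
    count_x, count_y, count_z = block_count
    assert 0 <= z < count_z, "layer index ranges over 0 .. BlockCountZ - 1"
    intra_layer = y * count_x + x          # assumed row-major ordering inside an x-y layer
    return z, intra_layer

print(block_layer_index((3, 2, 1), (16, 16, 4)))   # (1, 35)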
In some embodiments, as shown in fig. 14, the layer data corresponding to each layer includes at least one of:
a layer index, the minimum first-level block index in the layer (denoted MinIndex), the maximum first-level block index in the layer (denoted MaxIndex), the number of first-level blocks in the layer (denoted Count), the number S of segments in the layer, a first index data offset corresponding to the layer (denoted IndexOffsetL, or layer index offset), and a first block data offset corresponding to the layer (denoted DataOffset, or layer data offset), wherein the first index data offset is used for indicating the start offset of the segment data corresponding to the first segment in the layer, and the first block data offset is used for indicating the start offset of the block data included in the layer.
For example, if the layer data corresponding to layer n is denoted Layer[n], Layer[n] may include [MinIndex, MaxIndex, Count, S, IndexOffsetL, DataOffset].
In some embodiments, the first index data offset may be an offset relative to a location pointed to by the second start offset (i.e., offsetlockindex), or alternatively, relative to a start location of the chunk description information.
In some embodiments, the first chunk data offset may be considered an offset relative to the location pointed to by the third starting offset (i.e., offsetlockdata), or alternatively, an offset relative to the starting location of the chunk data information.
If the index data of each first-level block in a layer were represented by the first-level block index together with the start offset of that block's data in the block data information, the occupied data length would be relatively large. In the embodiments of the present application, the first-level blocks may therefore be segmented, for example every SegCount first-level blocks in each layer forming one segment. The segment data corresponding to one segment can then be represented by the starting first-level block index of the segment, the start offset of the index data included in the segment, and the start offset of the block data included in the segment, so that the data length occupied by the index data of the first-level blocks can be reduced and the storage cost lowered.
Fig. 16 is a schematic diagram of a segmentation method with SegCount=4 according to an embodiment of the present application, but the present application is not limited thereto. It should be understood that the first-level block indices in a segment may be continuous or discontinuous, which is not limited in this application.
In some embodiments, as shown in fig. 14, the segment data corresponding to the segment includes at least one of:
the starting first-level block index included in the segment (denoted StartIndex), a second index data offset corresponding to the segment (denoted IndexOffsetS, or segment index offset), and a second block data offset corresponding to the segment (denoted BaseOffset, or segment data offset), wherein the second index data offset corresponding to the segment is used for indicating the start offset of the index data corresponding to the first-level blocks in the segment, and the second block data offset corresponding to the segment is used for indicating the start offset of the block data included in the first-level blocks of the segment.
For example, if the segment data corresponding to segment m in layer n is denoted Seg[n][m], Seg[n][m] may include [StartIndex, IndexOffsetS, BaseOffset].
In some embodiments, the second index data offset may be an offset relative to a location pointed to by the first index data offset (IndexOffsetL), and the second chunk data offset may be an offset relative to a location pointed to by the first chunk data offset (i.e., dataOffset).
In some embodiments, as shown in fig. 14, the index data corresponding to each level one of the segments includes at least one of:
the intra-segment first-level block index (denoted IndexXY') corresponding to each first-level block included in the segment;
a third block data offset (referred to as offset, or called data offset) corresponding to the first-level block index in each segment, for indicating a start offset of the block data included in the first-level block corresponding to the first-level block index in the segment;
the first-level chunk index in each segment corresponds to the size of the chunk data included in the first-level chunk, for example, the number of Bytes (Bytes) occupied by the chunk data included in the first-level chunk.
For example, the block index data BlockIndex[n][m][k] of the k-th first-level block in segment m of layer n may include [IndexXY', offset, Bytes].
In some embodiments, the third chunk data offset may be considered an offset relative to the location to which the second chunk data offset (i.e., baseOffset) points.
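Putting the layer, segment, and per-block offsets together, the byte position of a first-level block's data can be resolved by accumulating the relative offsets described above. The sketch below shows that chain; the dict layouts and the helper name locate_block_data are assumptions for illustration.

def locate_block_data(header, layer, seg, block_index_entry):
    """Resolve the absolute start and end of a first-level block's data in the scene file.

    header:            {"OffsetBlockData": ...}       (file header, third start offset)
    layer:             {"DataOffset": ...}             (first block data offset of the layer)
    seg:               {"BaseOffset": ...}             (second block data offset of the segment)
    block_index_entry: {"Offset": ..., "Bytes": ...}   (third block data offset and data size)
    """
    start = (header["OffsetBlockData"] + layer["DataOffset"]
             + seg["BaseOffset"] + block_index_entry["Offset"])
    return start, start + block_index_entry["Bytes"]

print(locate_block_data({"OffsetBlockData": 65536},
                        {"DataOffset": 1024},
                        {"BaseOffset": 128},
                        {"Offset": 32, "Bytes": 200}))   # (66720, 66920)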
The contents of the block data information in the scene file will be described below.
In some embodiments, the blocking data of the primary block includes primary blocking data of the primary block and secondary blocking data (or sub-block data, secondary sub-block data), wherein the primary blocking data is used for indicating object instance data included in the primary block, and the secondary blocking data of the primary block is used for indicating object instance data included in the secondary block of the primary block.
In some embodiments, the chunk data information of the target scene may be uniformly expressed as:
BlockData = { Index, ObjectArray = { ObjectInstID, ObjectInstData }, Devide, { SubIndex, SeqArray } }. Here BlockData represents the block data corresponding to a first-level block, Index is the first-level block index, ObjectArray represents the first-level block data corresponding to the first-level block, i.e., the object instance array, and Devide, { SubIndex, SeqArray } is the secondary sub-block data corresponding to each second-level block, i.e., the second-level block data corresponding to the first-level block. In some embodiments, the first-level block index may not be stored in the block data information, and the first-level block to which the block data corresponds may be determined from the information in the block description information.
In some embodiments, the first level chunk data corresponding to the first level chunk includes at least one of the following information:
the number P of object instances included in the primary partition;
the data length in the first-level block data;
description information of each object instance in the P object instances included in the first-level partition, wherein the description information comprises an object instance identification (ObjectInstID) and an offset of object instance data in the first-level partition data;
object instance data (ObjectInstData) corresponding to each object instance.
In some embodiments, the ObjectInstID is used to indicate a sequence number in the object instance table, i.e., to indicate which object instance in the object instance table to point to.
In some embodiments, in the first-level partition data corresponding to the first-level partition, the object instance data corresponding to the object instance may refer to data of the object instance in the current first-level partition, or include data of the object instance in the second-level partition of the current first-level partition. For example, if the primary partition includes a region object instance, the object instance data includes a set of faces or line segments located in a secondary partition of the primary partition from among a plurality of faces or line segments included in the region object instance. For another example, if a point object instance is included in a level one partition, then the object instance data is null.
If the first-level block includes a voxel object instance, in the first-level block data corresponding to the first-level block, the object instance data corresponding to the voxel object instance may have the following implementation manner:
mode 1: the object instance data corresponding to the voxel object instance includes a start index startindex= (x, y, z) of a voxel block located in a first level block in a set of voxel blocks included in the voxel object instance, that is, an index of a first voxel block located in the first level block.
In this case, the voxel blocks included in each of the two-level blocks in the one-level block may be determined based on the start index in combination with voxel object data (e.g., a Size (Size) of a voxel object and the number K of voxel blocks) in the object file. Thus, for this voxel instance object, the voxel blocks comprised in the two-level partition may not be indicated by the sequence number array.
Mode 2: the object instance data corresponding to the voxel object instance comprises: voxel block data located in a secondary partition of the primary partition among a set of voxel blocks comprised by the voxel object instance, wherein the voxel block data may comprise distribution data of the voxel object instance, i.e. for indicating whether the voxel block belongs to the voxel object instance. In some embodiments, if the voxel block is aligned with the secondary partition, it may be determined whether the secondary partition includes the corresponding voxel block by using the voxel block data pointed to by the corresponding sequence number array of the secondary partition.
Since the object data lengths of different types of object instances are different, in order to facilitate searching for the object instance data corresponding to a second-level partition, the storage manner of fig. 17 may be adopted: the instance count represents the number of object instances included in the first-level partition, the byte count is the data length of the first-level partition data, and the instance count is followed by P data entries each composed of an ObjectInstID and an offset, where the offset represents the offset of the ObjectInstData corresponding to that ObjectInstID within the first-level partition data. Optionally, the ObjectInstID and offset are of fixed length. Optionally, for a voxel object instance, the ObjectInstData may further include type indication information for indicating whether the type of the voxel object instance data is the first type or the second type (i.e., represented in mode 1 or mode 2), where object instance data of the first type includes the start index of the voxel block located in the first-level partition among the set of voxel blocks included in the voxel object instance, and object instance data of the second type includes the voxel block data located in the second-level partitions of the first-level partition among the set of voxel blocks included in the voxel object instance.
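The layout of fig. 17 (instance count, byte length, then P fixed-length {ObjectInstID, offset} pairs followed by the variable-length ObjectInstData records) might be parsed as in the sketch below, which assumes 32-bit little-endian fields; the real field widths are not specified above.

import struct

def parse_primary_block_data(buf, base=0):
    """Return a list of (ObjectInstID, offset of its ObjectInstData within the block data)."""
    count, total_bytes = struct.unpack_from("<II", buf, base)
    entries = []
    pos = base + 8
    for _ in range(count):
        inst_id, data_offset = struct.unpack_from("<II", buf, pos)   # fixed-length pair
        entries.append((inst_id, data_offset))
        pos += 8
    return entries, total_bytes

# Two instances whose ObjectInstData starts at offsets 24 and 32 inside the block data
blob = struct.pack("<II", 2, 40) + struct.pack("<II", 7, 24) + struct.pack("<II", 9, 32)
print(parse_primary_block_data(blob))   # ([(7, 24), (9, 32)], 40)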
In some embodiments, the second-level data corresponding to a first-level chunk is used to indicate a sequence number array corresponding to a second-level chunk in the first-level chunk.
In some embodiments, the sequence number array corresponding to a second-level partition may be represented in one of the following manners:
mode 1: sparse representation.
For example, the secondary partition data corresponding to a primary partition may include { SubIndex, seqArray } corresponding to each secondary partition in the target secondary partition in the primary partition. For example, the two-level hierarchical data including a point object instance or a region object instance may be stored in manner 1. The target secondary partition is a secondary partition that includes object instance data in the primary partition.
In some embodiments, this approach 1 is advantageous in reducing storage overhead when the number of secondary partitions is small.
Mode 2: and (5) grid representation.
For example, the secondary partition data corresponding to a primary partition includes { SeqArray } corresponding to each secondary partition of all secondary partitions in the primary partition. For example, the two-level hierarchical data including voxel object instances may be stored in manner 2.
In some specific implementations, the whole first-level partition is divided into a grid according to the partition level of the second-level partitions; the SeqArray corresponding to a second-level partition that includes object instance data points to a specific sequence number in the object instance array, while the SeqArray corresponding to a second-level partition that includes no object instance data contains sequence number 0 or another invalid value, indicating that the second-level partition includes no object instance data.
For example, if a first-level partition includes a region object instance and a voxel object instance, the partition data of the first-level partition may be represented as follows:
for example, the distribution of the region object instance over the second-level partitions in the first-level partition is represented in mode 1, that is: BlockData = { Index, ObjectArray = { RegionObjectInstID, ObjectInstData }, Devide, { SubIndex, SeqArray } };
for example, the distribution of the voxel object instance over the second-level partitions in the first-level partition is represented in mode 2, that is: BlockData = { Index, ObjectArray = { VoxelObjectInstID, ObjectInstData }, Devide, { SeqArray } }. The block data of the first-level partition is the superposition of these two block data.
In a specific embodiment, the second-level data corresponding to the second-level partitions in the first-level block includes at least one of the following information:
the number Q of second-level partitions included in the first-level block;
type indication information of the second-level data corresponding to each of the Q second-level partitions, used for indicating whether the type of the second-level data is a third type or a fourth type, wherein second-level data of the third type comprises the second-level partition index corresponding to the second-level partition and the sequence number array corresponding to the second-level partition, second-level data of the fourth type comprises the sequence number array corresponding to the second-level partition, and the sequence number array corresponding to a second-level partition is used for indicating the sequence numbers, in the object instance array included in the first-level block, of the object instances included in that second-level partition;
The offset of the secondary partition data corresponding to each of the Q secondary partitions;
and the secondary partition data corresponding to each of the Q secondary partitions.
A second-level partition whose second-level data is of the third type may be called a sparse sub-block, and a second-level partition whose second-level data is of the fourth type may be called a grid sub-block.
Fig. 18 is a schematic diagram of a storage format of second-level data according to an embodiment of the present application. As shown in fig. 18, the secondary partition data includes a data header portion and a data portion. The data header portion may include a number field for indicating the number Q of secondary partitions, and Q groups each containing a type field for indicating whether the secondary partition is a sparse sub-block or a grid sub-block, an offset field for indicating the offset of the secondary partition data corresponding to the secondary partition, and a length field for indicating the length of that secondary partition data. The data portion includes the secondary partition data corresponding to each secondary partition, e.g., a sub-block index and at least one sequence number for a sparse sub-block, and at least one sequence number for a grid sub-block.
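The header/data split of fig. 18 can be walked as in the sketch below, which assumes 32-bit little-endian fields and a type value of 0 for sparse sub-blocks and 1 for grid sub-blocks; these encodings are assumptions made for illustration.

import struct

def parse_secondary_data(buf, base=0):
    """Parse the secondary partition data of one first-level block (header part + data part)."""
    (q,) = struct.unpack_from("<I", buf, base)               # number of secondary entries
    entries, pos = [], base + 4
    for _ in range(q):
        sub_type, offset, length = struct.unpack_from("<III", buf, pos)
        entries.append({"type": "sparse" if sub_type == 0 else "grid",
                        "offset": offset, "length": length})
        pos += 12
    data_start = pos                                          # data part follows the header part
    for e in entries:
        payload = buf[data_start + e["offset"]:data_start + e["offset"] + e["length"]]
        values = list(struct.unpack("<%dI" % (len(payload) // 4), payload))
        if e["type"] == "sparse":      # sparse sub-block: SubIndex followed by sequence numbers
            e["sub_index"], e["seq"] = values[0], values[1:]
        else:                          # grid sub-block: sequence numbers only
            e["seq"] = values
    return entries

# One sparse sub-block {SubIndex=5, Seq=1,2} followed by one grid sub-block {Seq=3}
header = struct.pack("<I", 2) + struct.pack("<III", 0, 0, 12) + struct.pack("<III", 1, 12, 4)
data = struct.pack("<3I", 5, 1, 2) + struct.pack("<I", 3)
print(parse_secondary_data(header + data))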
In some embodiments, for one type of object instance, when the secondary partition data of the object instance is represented in mode 1, the data portion includes the secondary partition index corresponding to each of K secondary partitions in the primary partition, the K secondary partitions being the secondary partitions of the primary partition in which object instance data of that object instance exists; alternatively, when the secondary partition data of the object instance is represented in mode 2, the data portion includes the sequence number array corresponding to each of all the secondary partitions in the primary partition.
In some embodiments, the two-level partition data corresponding to different types of object instances in the first-level partition may be represented in the same manner, for example, in manner 2, or may be represented in different manners, for example, in manner 1 for a point object instance and a region object instance, and in manner 2 for a voxel object instance.
In some embodiments, each sparse sub-block may store only one type of object instance data, or may store multiple types of object instance data, e.g., multiple types of object instance data in a preset order, such as in the order of point object instances, region object instances, and voxel object instances. The sequence numbers corresponding to different sparse sub-blocks may be the same or may be different, and are determined by the number of sub-blocks in which the object instance is located.
Suppose a primary partition includes a region object instance and a voxel object instance, and the primary partition includes 4 secondary partitions (denoted SubIndex0-3), where face 0 and face 1 of the region object instance are located in SubIndex0 and SubIndex1 respectively, and the 4 voxel blocks of the voxel object instance are located in SubIndex0-3 respectively.
Case 1: the object instance array content corresponding to the first level partition is as follows (corresponding to mode 1 above):
RegionObjectInstID,{Face(0)};
RegionObjectInstID,{Face(1)};
VoxelObjectInstID, StartIndex.
That is, the object instance data corresponding to the voxel object instance is the index of the starting voxel block of the voxel object instance in the first-level partition.
Case 2: the content of the object instance array corresponding to the first level partition is as follows (corresponding to the foregoing mode 2):
RegionObjectInstID,{Face(0)};
RegionObjectInstID,{Face(1)};
VoxelObjectInstID,{Voxeldata(0)};
VoxelObjectInstID,{Voxeldata(1)};
VoxelObjectInstID,{Voxeldata(2)};
VoxelObjectInstID, {Voxeldata(3)}.
that is, the object instance data corresponding to the first level of the partition includes 4 voxel block data, respectively corresponding to 4 voxel blocks, where the 4 voxel block data includes distribution data of the voxel object, and is used to determine the distribution of the voxel blocks of the voxel object instance in the first level of the partition, for example, a Voxeldata value of 1 indicates that the corresponding voxel block belongs to the voxel object instance.
Then, for the region object instance, the secondary partition data may be represented as sparse sub-blocks, i.e., the secondary partition data { SubIndex, SeqArray } corresponding to secondary partition 0 and secondary partition 1 are, respectively, {SubIndex0, Seq=0} and {SubIndex1, Seq=1}.
For case 1: for the region object instance, the secondary partition data may be represented by a grid block, that is, secondary partition data { SeqArray }, including secondary partitions 0 to 3, respectively, are each { seq=3 }. The data portion of the second-level hierarchical data of the first-level partition may include two sparse sub-blocks and 4 grid sub-blocks, the data in the 2 sparse sub-blocks being: { SubIndex0, seq=0 }, { SubIndex1, seq=1 }, the data in the 4 grid sub-blocks are { seq=3 }, i.e. the sequence numbers can be repeated.
For case 2: for the region object instance, the secondary partition data may be represented by a grid block, that is, secondary partition data { SeqArray }, corresponding to secondary partition 0 to 3, respectively, { seq=0 }, { seq=1 }, { seq=2 }, and { seq=3 }, that is, secondary partition 0 corresponds to Voxeldata (0), and whether secondary partition 0 includes a corresponding voxel block may be determined according to the value of Voxeldata (0); the second-level block 1 corresponds to the Voxeldata (1), and whether the second-level block 1 comprises a corresponding voxel block can be determined according to the value of the Voxeldata (1); the second-level block 2 corresponds to the Voxeldata (2), and whether the second-level block 2 comprises a corresponding voxel block can be determined according to the value of the Voxeldata (2); the second-level block 3 corresponds to Voxeldata (3), and whether the second-level block 3 includes a corresponding voxel block can be determined according to the value of the Voxeldata (3). The data portion of the second-level hierarchical data of the first-level partition may include two sparse sub-blocks and 4 grid sub-blocks, the data in the 2 sparse sub-blocks being: { SubIndex0, seq=0 }, { SubIndex1, seq=1 }, the data in the 4 grid sub-blocks are { seq=0 }, { seq=1 }, { seq=2 } and { seq=3 }, respectively.
In summary, the embodiment of the application defines three object types and corresponding data structures, which are used for representing the positions and distribution ranges of different object instances in a scene, further manages the scene in a two-level block mode, defines the unified coding and storage formats of scene data and object data, and improves the scene management efficiency.
Moreover, since the scene is managed in a two-level block manner, object queries at a given position in a game scene and object queries within a range can be realized efficiently; these two functions play a key role in a system that automatically renders environment sound effects based on game scene data.
Hereinafter, another scene data processing method provided in the embodiment of the present application will be described with reference to fig. 19, and the scene data processing method may be performed by a decoding end, which may be a device with a calculation processing function, or may be provided in a device with a calculation processing function, which may be a terminal or a server, for example. It should be understood that the behaviors of the decoding end and the encoding end correspond to each other, and similar descriptions can refer to related descriptions of the encoding end, and for brevity, description is omitted here.
Referring to fig. 19, the scene data processing method 300 may include at least part of the following:
s310, acquiring an object data structure, wherein the object data structure is used for indicating at least one object type and object data respectively corresponding to at least one object, the object type comprises at least one of a point object, a region object and a voxel object, and the object data is used for indicating the distribution range of the object;
s320, acquiring a scene data structure corresponding to a target scene, wherein the target scene is divided into N first-level blocks, each first-level block is divided into M second-level blocks, and the scene data structure is used for indicating at least one of object instance data in the target scene, object instance data in a target first-level block in the target scene and object instance data in a second-level block in the target first-level block, and the target first-level block comprises part or all of the N first-level blocks;
and S330, managing the target scene according to the scene data structure and the object data structure.
In some embodiments, if the object type is a point object, the object data of the point object is null;
If the object type is a region object, in the case that the region object is a three-dimensional region object, the object data of the three-dimensional region object includes coordinates of one point on each of a plurality of surfaces constituting the three-dimensional region object, a normal vector of each surface, and a direction of each corresponding region, and in the case that the region object is a two-dimensional region object, the object data of the two-dimensional region object includes coordinates of two end points on each of a plurality of line segments constituting the two-dimensional region object, and a direction of each line segment corresponding region;
if the object type is a voxel object, the object data of the voxel object comprises at least one of a size of the voxel object, a number X of voxel blocks included in the voxel object and X voxel block data. Wherein the number X of voxel blocks comprised by the voxel object refers to: the number of voxel blocks included in the bounding box of the voxel object.
In some embodiments, the scene data structure includes at least one of:
the object instance table corresponding to the target scene is used for indicating information of object instances included in the target scene;
The object instance array corresponding to each first-level block in the target first-level block is used for indicating information of object instances included in the first-level block;
and the sequence number array corresponding to the second-level partition in each first-level partition is used for indicating the sequence number of the object instance included in the second-level partition in the object instance array corresponding to the first-level partition.
In some embodiments, the object instance table includes: the object identification of the object to which each object instance belongs and the transformation data of each object instance relative to the object to which each object instance belongs are included in the target scene, wherein the transformation data of each object instance relative to the object to which each object instance belongs comprises at least one of a starting position coordinate of each object instance, a scaling factor of each object instance relative to the object to which each object instance belongs, and a rotation vector of each object instance relative to the object to which each object instance belongs.
In some embodiments, the object instance array corresponding to the first level partition includes: and the object instance identifier in the object instance array points to one object instance in the object instance table.
In some embodiments, if the first level partition includes a region object instance, the object instance data is used to indicate a set of faces or line segments located in a second level partition of the first level partition among a plurality of faces or line segments included in the region object instance;
if a primary segment includes a voxel object instance, the object instance data is used for indicating a start index of a voxel block located in the primary segment in a set of voxel blocks included in the voxel object instance, or the object instance data is used for indicating voxel block data located in a secondary segment of the primary segment in the set of voxel blocks included in the voxel object instance.
In some embodiments, the acquiring the object data structure includes:
acquiring an object file, wherein the object file is used for storing description information of at least one object and object data of each object in the at least one object, the description information of the object comprises an object type, and the object data is used for describing a distribution range of the object;
and analyzing the object file to obtain the object data structure.
In some embodiments, the object file is obtained from the encoding end.
In some embodiments, the description information of the at least one object includes an object identifier corresponding to each object, an object type, and a start offset of object data of each object in the object file.
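Such description information can be illustrated with the following hypothetical records (the field names and types are assumptions; the actual on-file encoding is not specified here).

// Illustrative only: hypothetical records for the object file's description information.
#include <cstdint>
#include <vector>

struct ObjectDescription {
    uint32_t objectId;     // object identifier corresponding to the object
    uint8_t  objectType;   // point, region or voxel
    uint64_t dataOffset;   // start offset of this object's data within the object file
};

struct ObjectFileView {
    std::vector<ObjectDescription> descriptions;  // one description per object
    std::vector<uint8_t> objectData;              // concatenated object data, addressed by dataOffset
};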
In some embodiments, the acquiring the scene data structure corresponding to the target scene includes:
acquiring a scene file, wherein the scene file is used for storing at least one of object instance data in a target scene, primary block data in primary blocks in the target scene and secondary block data of secondary blocks in the primary blocks;
and analyzing the scene file to obtain a scene data structure corresponding to the target scene.
In some embodiments, the scene file is obtained from the encoding end.
In some embodiments, the scene file includes at least one of:
a file header for storing at least one of description information of the target scene and an offset of other parts in the scene file;
an object instance table corresponding to the target scene;
the block description information of the target scene is used for indicating index data of a first-level block in the scene file, wherein the index data of the first-level block comprises at least one of a starting offset of the block data of the first-level block and a length of the block data of the first-level block;
and the block data information of the target scene, used for indicating block data of first-level blocks in the target scene, wherein the block data of a first-level block includes at least one of first-level block data and second-level block data of the first-level block, the first-level block data being used for indicating object instance data included in the first-level block, and the second-level block data of the first-level block being used for indicating object instance data included in the second-level blocks of the first-level block.
In some embodiments, the description information of the target scene includes at least one of:
the method comprises the steps of starting a position of the target scene, the size of a first-level block in the target scene, the number of first-level blocks in the target scene, and the number of segments in the target scene, wherein one segment comprises at least one first-level block.
In some embodiments, the tile description information of the target scene includes at least one of the following information:
hierarchical data corresponding to each of the L hierarchies in the target scene, segment data corresponding to each of the segments in each hierarchy, and index data corresponding to each of the first-level blocks in each segment;
The target scene is divided into L layers, each layer comprises at least one segment, each segment comprises at least one primary segment, and L is a positive integer.
In some embodiments, the hierarchically corresponding hierarchical data includes at least one of:
the method comprises the steps of layering an index, a minimum first-level block index in layering, a maximum first-level block index in layering, the number of first-level blocks in layering, the number S of segments in layering, a first index data offset corresponding to layering and a first block data offset corresponding to layering, wherein the first index data offset is used for indicating the initial offset of segment data corresponding to the first segment in layering in the block description information, and the first block data offset is used for indicating the initial offset of the block data included in layering in the block data information.
In some embodiments, the segment data corresponding to the segment includes at least one of:
the first-level block index corresponding to the segment, the second index data offset corresponding to the segment, and the second block data offset corresponding to the segment are included in the segment, wherein the second index data offset corresponding to the segment is used for indicating the initial offset of the index data corresponding to the first-level block in the segment relative to the segment data corresponding to the segment, and the second block data offset corresponding to the segment is used for indicating the initial offset of the block data corresponding to the segment relative to the block data included in the segment belonging to the layer.
In some embodiments, the index data corresponding to each first-level block in a segment includes at least one of:
the in-segment first-level block index corresponding to each first-level block included in the segment;
a third block data offset corresponding to each in-segment first-level block index, used for indicating the initial offset of the block data included in the corresponding first-level block relative to the block data included in the segment;
and the size of the block data included in the first-level block corresponding to each in-segment first-level block index.
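By way of illustration, the layer data, segment data and per-block index data can be sketched with the following hypothetical C++ records, using the same symbol names (Layer, Seg, BlockIndex) that appear in the worked example later in this description; the field types are assumptions.

// Illustrative only: hypothetical records for the block description information.
#include <cstdint>

struct LayerData {            // Layer[n], one entry per layer
    uint32_t minIndex;        // minimum first-level block index in the layer (MinIndex)
    uint32_t maxIndex;        // maximum first-level block index in the layer (MaxIndex)
    uint32_t blockCount;      // number of first-level blocks in the layer (Count)
    uint32_t segmentCount;    // number of segments in the layer (SegCount)
    uint64_t indexOffsetL;    // first index data offset into the block description information
    uint64_t dataOffset;      // first block data offset into the block data information
};

struct SegmentData {          // Seg[n][m], one entry per segment of a layer
    uint32_t startIndex;      // starting first-level block index of the segment (StartIndex)
    uint64_t indexOffsetS;    // second index data offset, relative to the segment data
    uint64_t baseOffset;      // second block data offset, relative to the layer's block data
};

struct BlockIndexData {       // BlockIndex[n][m][k], one entry per first-level block of a segment
    uint32_t indexInSegment;  // in-segment first-level block index (IndexXY')
    uint64_t offset;          // third block data offset, relative to the segment's block data
    uint32_t bytes;           // size of the block data of the first-level block
};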
In some embodiments, the first-level block data corresponding to a first-level block includes at least one of the following information:
the number P of object instances included in the primary partition;
the data length in the first-level block data;
description information of each object instance in the P object instances, wherein the description information comprises an object instance identification and an offset of object instance data in the primary partition data;
and the object instance data corresponding to each object instance.
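A hypothetical in-memory view of this first-level block data is sketched below; the field names are assumptions.

// Illustrative only: a hypothetical view of the first-level block data.
#include <cstdint>
#include <vector>

struct InstanceDescription {
    uint32_t objectInstanceId;    // object instance identifier
    uint64_t instanceDataOffset;  // offset of the instance data within the first-level block data
};

struct FirstLevelBlockData {
    uint32_t instanceCount;                         // P
    uint64_t dataLength;                            // data length of the first-level block data
    std::vector<InstanceDescription> descriptions;  // P description records
    std::vector<uint8_t> instanceData;              // concatenated object instance data
};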
In some embodiments, if the first level partition includes a region object instance, the object instance data includes a set of faces or line segments located in a second level partition of the first level partition from a plurality of faces or line segments included in the region object instance;
If the first-level partition includes a voxel object instance, the object instance data is of a first type or a second type, where the object instance data of the first type includes: the start index, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the first-level partition; and the object instance data of the second type includes: the voxel block data, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the second-level partitions of the first-level partition.
In some embodiments, the object instance data further includes type indication information for indicating that the type of the object instance data is the first type or the second type.
In some embodiments, the second level data corresponding to the second level of the first level of the blocks includes at least one of the following information:
the primary block comprises a secondary block quantity Q;
the type indication information of the second-level block data corresponding to each second-level block in the Q second-level blocks is used for indicating that the type of the second-level block data is a third type or a fourth type, wherein the second-level block data of the third type comprises a second-level block index corresponding to the second-level block and an instance serial number corresponding to the second-level block, the fourth type comprises a serial number array corresponding to the second-level block, and the serial number array corresponding to the second-level block is used for indicating the serial number of an object instance included in the second-level block in an object instance array included in the first-level block;
The offset of the secondary partition data corresponding to each of the Q secondary partitions;
and each secondary partition of the Q secondary partitions corresponds to secondary partition data.
In some embodiments, the Q secondary partitions include all secondary partitions in the primary partition, or only secondary partitions that include object instance data.
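The two encodings of second-level block data described above can be sketched as follows; the enum labels and fields are illustrative assumptions, and the on-file values of the third and fourth types are not specified here.

// Illustrative only: a hypothetical view of the second-level block data.
#include <cstdint>
#include <vector>

enum class SecondLevelDataType : uint8_t { ThirdType, FourthType };

struct SecondLevelBlockData {
    SecondLevelDataType type;
    // Third type: the second-level block index and a single instance sequence number.
    uint32_t blockIndex = 0;
    uint32_t instanceSequence = 0;
    // Fourth type: sequence numbers of the instances included in the second-level block,
    // indexing into the object instance array of the first-level block.
    std::vector<uint32_t> sequenceArray;
};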
In some embodiments, the parsing the scene file to obtain a data structure corresponding to the target scene includes:
determining a target layering of the target first-level partitioning according to a first-level partitioning index corresponding to the target first-level partitioning and layering data corresponding to L layering in the scene file;
acquiring the number S of segments in the target hierarchy, a first index data offset corresponding to the target hierarchy and a first block data offset corresponding to the target hierarchy according to the segment data corresponding to the segments in the target hierarchy;
reading segment data corresponding to S segments at the position of the first index data offset from the segment description information, and obtaining segment data corresponding to each segment in the S segments, wherein the segment data comprises a first-level segment index, a second index data offset corresponding to the segment and a second segment data offset corresponding to the segment;
Determining a target segment where the target first-level block is located according to a starting first-level block index included in each segment and an intra-layer index corresponding to the target first-level block in the target hierarchy;
reading index data corresponding to K first-level blocks of a second index data offset position corresponding to the target segment from the first index data offset position, wherein K is the number of first-level blocks included in the target segment;
determining a first-level block index of the target first-level block in a segment in the target segment according to a first-level block index included in the target segment and an intra-layer index corresponding to the target first-level block in the target layer;
determining a target first-level block index corresponding to the target first-level block in the target segment according to the first-level block index in the segment corresponding to the target first-level block;
acquiring index data corresponding to the target first-level block index, wherein the index data comprises a third block data offset and length information of block data;
taking the sum of the first block data offset, the second block data offset and the third block data offset as a corresponding target offset of the block data of the target first-level block in the block data information;
Acquiring the block data included in the target first-level block from the block data information according to the target offset and the length information of the block data;
and determining the first-level block data corresponding to the target first-level block and the second-level block data corresponding to the second-level block in the target first-level block according to the block data included in the target first-level block.
In some embodiments, the managing the target scene according to the scene data structure and the object data structure includes:
acquiring the starting position and the distribution range of an object instance to be added;
determining a first secondary partition block for adding the object instance to be added into a first primary partition block in the target scene according to the starting position and the distribution range of the object instance to be added, wherein the object instance to be added at least partially overlaps with the first secondary partition block;
acquiring object data of an object to which the object to be added belongs from the object data structure;
determining transformation data of the object to be added relative to the object to be added according to the starting position and the distribution range of the object to be added and the object data of the object to be added;
And adding the object to be added to the object instance table based on the transformation data of the object to be added relative to the object to be added, and adding the object to be added to an object instance array corresponding to the first level-one partition and a sequence number array corresponding to the first level-two partition to obtain an updated scene data structure.
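Under the hypothetical SceneDataStructure sketch given earlier, the add operation could be carried out as below; the target first-level and second-level block indices are assumed to have already been determined from the starting position and distribution range, and the function name is an assumption of this sketch.

// Illustrative only: adding an object instance, reusing the hypothetical types sketched earlier.
uint32_t AddObjectInstance(SceneDataStructure& scene,
                           uint32_t firstLevelBlockIdx,
                           uint32_t secondLevelBlockIdx,
                           uint32_t objectId,
                           const InstanceTransform& transform) {
    // 1. Append the instance to the scene-wide object instance table.
    scene.objectInstanceTable.push_back({objectId, transform});
    const uint32_t instanceId =
        static_cast<uint32_t>(scene.objectInstanceTable.size() - 1);

    // 2. Append its identifier to the first-level block's object instance array.
    FirstLevelBlock& block = scene.firstLevelBlocks[firstLevelBlockIdx];
    block.objectInstanceIds.push_back(instanceId);
    const uint32_t sequence =
        static_cast<uint32_t>(block.objectInstanceIds.size() - 1);

    // 3. Record the sequence number in the overlapped second-level block.
    block.secondLevelBlocks[secondLevelBlockIdx]
        .instanceSequenceNumbers.push_back(sequence);
    return instanceId;
}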
In some embodiments, the managing the target scene according to the scene data structure and the object data structure includes:
acquiring a range to be queried;
determining a second level partition in the target scene according to the range to be queried;
acquiring a target object instance and object instance data corresponding to the target object instance, which are included in the second-level partition, according to a scene data structure corresponding to the target scene;
acquiring object data of an object to which the target object instance belongs according to the object data structure;
determining distribution data of the target object instance according to object instance data corresponding to the target object instance and object data of an object to which the target object instance belongs, wherein the distribution data is used for describing position information and a distribution range of the target object instance;
And determining whether the range to be queried comprises the target object instance according to the distribution data of the target object instance.
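The final inclusion test of this query flow can be illustrated with a simplified, self-contained sketch that models the distribution range as an axis-aligned box and ignores rotation; real object data may be faces, line segments or voxel blocks, and all names below are assumptions.

// Illustrative only: a simplified inclusion test for the range query.
#include <array>

struct Aabb {
    std::array<float, 3> min;
    std::array<float, 3> max;
};

struct InstancePlacement {
    std::array<float, 3> startPosition;  // from the object instance table
    std::array<float, 3> scale;
};

// Distribution range of the instance: the object's local extent scaled and translated.
Aabb InstanceDistribution(const Aabb& objectExtent, const InstancePlacement& p) {
    Aabb out{};
    for (int i = 0; i < 3; ++i) {
        out.min[i] = p.startPosition[i] + objectExtent.min[i] * p.scale[i];
        out.max[i] = p.startPosition[i] + objectExtent.max[i] * p.scale[i];
    }
    return out;
}

// Whether the range to be queried includes (overlaps) the instance's distribution range.
bool QueryRangeIncludesInstance(const Aabb& queryRange, const Aabb& distribution) {
    for (int i = 0; i < 3; ++i) {
        if (distribution.max[i] < queryRange.min[i] ||
            distribution.min[i] > queryRange.max[i]) {
            return false;
        }
    }
    return true;
}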
In some embodiments, the managing the target scene according to the scene data structure and the object data structure includes:
obtaining an object instance identifier of an object instance to be deleted;
and deleting the information of the object instance to be deleted from the object instance table.
Next, a method of acquiring the block data of a first-level block whose index is Index = (IndexX, IndexY, IndexZ) will be described with reference to fig. 20; to read this block data, the offset of the block data of the first-level block in the block data information first needs to be acquired. Fig. 20 shows the calculation process of this offset. As shown in fig. 20, the method may include the following steps:
step 401: the following information is obtained from the file header and file description information of the scene file:
1. Description information of the scene: the first-level block size and the number of first-level blocks BlockCount = (BlockCountX, BlockCountY, BlockCountZ).
2. Block description information:
layer data Layer[0, …, BlockCountZ], where
Layer[n] = {MinIndex, MaxIndex, Count, SegCount, IndexOffsetL, DataOffset};
data of the m-th segment of the n-th layer:
Seg[n][m] = {StartIndex, IndexOffsetS, BaseOffset};
the k-th index data of the m-th segment of the n-th layer:
BlockIndex[n][m][k] = {IndexXY', Offset, Bytes}.
The meaning of each symbol refers to the related description of the foregoing embodiment, and for brevity, the description is omitted here.
Step 402: a first level blocking Index index= (IndexX, indexY, indexZ) is obtained.
Step 403: and calculating the layering of the first-level block index, and acquiring layering data of the layering.
The layer where the first-level block index is located is n = IndexZ, and the layer data of Layer[IndexZ] is obtained: MinIndex and MaxIndex.
Step 404: the in-layer index IndexXY corresponding to the first-level block index is calculated from (IndexX, IndexY), and whether IndexXY lies within [MinIndex, MaxIndex] is queried.
If yes, step 405 is executed, otherwise, the flow is ended, that is, the blocking data of the first-level blocking does not exist.
Step 405: obtaining layered data of Lay [ IndexZ ]: segment number S, layer index offset index l, layer data offset DataOffset.
Step 406: reading segment data Seg [ n ] [0, …, S-1] of the segment description information start offset IndexOffsetL position: startIndex, indexOffsetS, baseOffset.
And determining a segment m where the first-level block is positioned according to the in-layer index IndexXY of the first-level block by combining the start index StartIndex of each segment in the Lay [ IndexZ ], and acquiring a segment index offset IndexOffsetS and a segment data offset BaseOffset corresponding to the segment m.
Step 407: reading the index data of the segment data start offset IndexOffsetS position
BlockIndex[n][m][…]。
And calculating an in-segment index IndexXY 'of the first-level block in the segment m according to the in-segment index IndexXY corresponding to the first-level block, wherein IndexXY' =IndexXY-Seg [ n ] [ m ] { StartIndex }.
Then, according to the index' in the segment, matching search is performed in the index value of the BlockIndex [ n ] [ m ] [ … ], for example, matching search is performed by adopting a binary search mode, and the target index k is determined.
Step 408: it is determined whether an index k exists.
If yes, go to step 409, otherwise, end the flow.
Step 409: index data of Block index [ n ] [ m ] [ k ] is obtained: indexXY', offset, bytes.
And determining a target Offset of the block data of the first-level block in the block data information according to DataOffset, baseOffset and the Offset obtained in the previous step, wherein the target Offset of the block data of the first-level block in the block data information is the sum of DataOffset, baseOffset and the Offset. That is, the target offset of the block data of the first level block is: dataOffset+BaseOffset+BlockIndex [ n ] [ m ] [ k ] { Offset }.
Further, the block data information may be read to start the block data of the target offset position, to obtain the block data of the first level block.
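Steps 401 to 409 can be condensed into the following hedged C++ sketch, reusing the hypothetical LayerData, SegmentData and BlockIndexData records sketched earlier; file reads are replaced by in-memory vectors, and the in-layer index mapping is an assumption of this sketch, since the exact formula is not spelled out above.

// Illustrative only: a condensed form of steps 401-409.
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

struct BlockLocation {
    uint64_t targetOffset;  // DataOffset + BaseOffset + Offset
    uint32_t bytes;         // length of the block data
};

std::optional<BlockLocation> LocateBlockData(
        const std::vector<LayerData>& layers,
        const std::vector<std::vector<SegmentData>>& segments,                    // [layer][segment]
        const std::vector<std::vector<std::vector<BlockIndexData>>>& blockIndex,  // [layer][segment][k]
        uint32_t indexX, uint32_t indexY, uint32_t indexZ, uint32_t blockCountX) {
    // Step 403: the layer is given by IndexZ.
    if (indexZ >= layers.size()) return std::nullopt;
    const LayerData& layer = layers[indexZ];

    // Step 404: in-layer index; the row-major mapping below is an assumption.
    const uint32_t indexXY = indexY * blockCountX + indexX;
    if (indexXY < layer.minIndex || indexXY > layer.maxIndex) return std::nullopt;

    // Steps 405-406: find the segment m whose StartIndex covers IndexXY
    // (segments are assumed to be ordered by StartIndex).
    const std::vector<SegmentData>& segs = segments[indexZ];
    if (segs.empty()) return std::nullopt;
    uint32_t m = 0;
    for (uint32_t i = 0; i < segs.size() && segs[i].startIndex <= indexXY; ++i) m = i;
    const SegmentData& seg = segs[m];

    // Step 407: in-segment index, then binary search of the block index table.
    const uint32_t indexInSegment = indexXY - seg.startIndex;
    const std::vector<BlockIndexData>& table = blockIndex[indexZ][m];
    auto it = std::lower_bound(table.begin(), table.end(), indexInSegment,
        [](const BlockIndexData& b, uint32_t v) { return b.indexInSegment < v; });

    // Step 408: no block data exists if the index is absent.
    if (it == table.end() || it->indexInSegment != indexInSegment) return std::nullopt;

    // Step 409: target offset = DataOffset + BaseOffset + Offset.
    return BlockLocation{layer.dataOffset + seg.baseOffset + it->offset, it->bytes};
}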
The following describes apparatus embodiments of the present application that may be used to perform the methods described in the above embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments described above in the present application.
Fig. 21 shows a schematic block diagram of a scene data processing apparatus 500 according to an embodiment of the present application, which may be provided in a device with a calculation processing function, such as a terminal device or a server. The apparatus 500 may correspond to the scene data processing apparatus in the method 200.
Referring to fig. 21, a scene data processing apparatus 500 according to an embodiment of the present application includes:
an obtaining unit 510, configured to obtain an object instance data structure corresponding to a physical scene, where the object instance data structure includes an object instance identifier, an object identifier of an object to which the object belongs, and object instance data corresponding to at least one object instance in the physical scene, where the object instance data is used to describe location information and a distribution range of the object instance;
a data processing unit 520, configured to process the object instance data structure to obtain an object data structure, where the object data structure includes at least one object type and object data corresponding to at least one object, the object type includes at least one of a point object, a region object, and a voxel object, and the object data is used to indicate a distribution range of the object;
An adding unit 530, configured to add the at least one object instance to a target scene according to the object instance data structure;
a scene management unit 540, configured to divide the target scene, and determine a scene data structure corresponding to the target scene according to a distribution of object instances in the divided target scene, where the scene data structure is configured to indicate at least one of object instance data in the target scene, primary partition data in a target primary partition in the target scene, and secondary partition data in a secondary partition of the target primary partition;
and the encoding unit 550 is configured to perform encoding processing on the scene data structure to obtain a scene file, and perform encoding processing on the object data structure to obtain an object file.
In some embodiments, in the object instance data structure, if the object instance is a point object instance, the object instance data of the point object instance is coordinate information of the point object instance; if the object instance is a region object instance, the object instance data of the region object instance is a set of a plurality of faces or a plurality of line segments constituting the region object instance; if the object instance is a voxel object instance, the object instance data of the voxel object instance is a set of a plurality of voxel blocks constituting the voxel object instance.
In some embodiments, the data processing unit 520 is further configured to:
determine, according to object instance data of the at least one object instance and object data of the at least one object, transformation data of each object instance of the at least one object instance relative to the object to which the object instance belongs;
and adding the object identification of the object to which each object instance belongs and the transformation data of each object instance relative to the object into the object instance table.
In some embodiments, if the object type is a point object, the object data of the point object is null;
if the object type is a region object: in the case that the region object is a three-dimensional region object, the object data of the three-dimensional region object includes, for each of the plurality of surfaces constituting the three-dimensional region object, the coordinates of one point on the surface, the normal vector of the surface, and the direction of the region corresponding to the surface; in the case that the region object is a two-dimensional region object, the object data of the two-dimensional region object includes, for each of the plurality of line segments constituting the two-dimensional region object, the coordinates of the two end points of the line segment and the direction of the region corresponding to the line segment;
If the object type is a voxel object, the object data of the voxel object comprises at least one of a size of the voxel object, a number X of voxel blocks included in the voxel object and X voxel block data. Wherein the number X of voxel blocks comprised by the voxel object refers to: the number of voxel blocks included in the bounding box of the voxel object.
In some embodiments, the scene data structure includes at least one of:
the object instance table corresponding to the target scene is used for indicating information of object instances included in the target scene;
the object instance array corresponding to each first-level block in the target first-level block is used for indicating information of object instances included in the first-level block;
and the sequence number array corresponding to the second-level partition in each first-level partition is used for indicating the sequence number of the object instance included in the second-level partition in the object instance array corresponding to the first-level partition.
In some embodiments, the object instance table includes: the object identification of the object to which each object instance belongs and the transformation data of each object instance relative to the object to which each object instance belongs are included in the target scene, wherein the transformation data of each object instance relative to the object to which each object instance belongs comprises at least one of a starting position coordinate of each object instance, a scaling factor of each object instance relative to the object to which each object instance belongs, and a rotation vector of each object instance relative to the object to which each object instance belongs.
In some embodiments, the object instance array corresponding to the first level partition includes: and the object instance identifier in the object instance array points to one object instance in the object instance table.
In some embodiments, if the first level partition includes a region object instance, the object instance data is used to indicate a set of faces or line segments located in a second level partition of the first level partition among a plurality of faces or line segments included in the region object instance;
if a primary segment includes a voxel object instance, the object instance data is used for indicating a start index of a voxel block located in the primary segment in a set of voxel blocks included in the voxel object instance, or the object instance data is used for indicating voxel block data located in a secondary segment of the primary segment in the set of voxel blocks included in the voxel object instance.
In some embodiments, the description information of the at least one object includes an object identifier corresponding to each object, an object type, and a start offset of object data of each object in the object file.
In some embodiments, the scene file includes at least one of:
a file header for storing at least one of description information of the target scene and an offset of other parts in the scene file;
an object instance table corresponding to the target scene;
the block description information of the target scene is used for indicating index data of a first-level block in the scene file, wherein the index data of the first-level block comprises at least one of a starting offset of the block data of the first-level block and a length of the block data of the first-level block;
and the block data information of the target scene, used for indicating block data of first-level blocks in the target scene, wherein the block data of a first-level block includes at least one of first-level block data and second-level block data of the first-level block, the first-level block data being used for indicating object instance data included in the first-level block, and the second-level block data of the first-level block being used for indicating object instance data included in the second-level blocks of the first-level block.
In some embodiments, the description information of the target scene includes at least one of:
The method comprises the steps of starting a position of the target scene, the size of a first-level block in the target scene, the number of first-level blocks in the target scene, and the number of segments in the target scene, wherein one segment comprises at least one first-level block.
In some embodiments, the tile description information of the target scene includes at least one of the following information:
hierarchical data corresponding to each of the L hierarchies in the target scene, segment data corresponding to each of the segments in each hierarchy, and index data corresponding to each of the first-level blocks in each segment;
the target scene is divided into L layers, each layer comprises at least one segment, each segment comprises at least one primary segment, and L is a positive integer.
In some embodiments, the hierarchically corresponding hierarchical data includes at least one of:
the method comprises the steps of layering an index, a minimum first-level block index in layering, a maximum first-level block index in layering, the number of first-level blocks in layering, the number S of segments in layering, a first index data offset corresponding to layering and a first block data offset corresponding to layering, wherein the first index data offset is used for indicating the initial offset of segment data corresponding to the first segment in layering in the block description information, and the first block data offset is used for indicating the initial offset of the block data included in layering in the block data information.
In some embodiments, the segment data corresponding to the segment includes at least one of:
the first-level block index corresponding to the segment, the second index data offset corresponding to the segment, and the second block data offset corresponding to the segment are included in the segment, wherein the second index data offset corresponding to the segment is used for indicating the initial offset of the index data corresponding to the first-level block in the segment relative to the segment data corresponding to the segment, and the second block data offset corresponding to the segment is used for indicating the initial offset of the block data corresponding to the segment relative to the block data included in the segment belonging to the layer.
In some embodiments, the index data corresponding to each first-level block in a segment includes at least one of:
the in-segment first-level block index corresponding to each first-level block included in the segment;
a third block data offset corresponding to each in-segment first-level block index, used for indicating the initial offset of the block data included in the corresponding first-level block relative to the block data included in the segment;
and the size of the block data included in the first-level block corresponding to each in-segment first-level block index.
In some embodiments, the first-level block data corresponding to a first-level block includes at least one of the following information:
the number P of object instances included in the primary partition;
the data length in the first-level block data;
description information of each object instance in the P object instances, wherein the description information comprises an object instance identification and an offset of object instance data in the primary partition data;
and the object instance data corresponding to each object instance.
In some embodiments, if the first level partition includes a region object instance, the object instance data includes a set of faces or line segments located in a second level partition of the first level partition from a plurality of faces or line segments included in the region object instance;
if the first-level partition includes a voxel object instance, the object instance data is of a first type or a second type, where the object instance data of the first type includes: the start index, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the first-level partition; and the object instance data of the second type includes: the voxel block data, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the second-level partitions of the first-level partition.
In some embodiments, the object instance data further includes type indication information for indicating that the type of the object instance data is the first type or the second type.
In some embodiments, the second level data corresponding to the second level of the first level of the blocks includes at least one of the following information:
the primary block comprises a secondary block quantity Q;
the type indication information of the second-level block data corresponding to each second-level block in the Q second-level blocks is used for indicating that the type of the second-level block data is a third type or a fourth type, wherein the second-level block data of the third type comprises a second-level block index corresponding to the second-level block and an instance serial number corresponding to the second-level block, the fourth type comprises a serial number array corresponding to the second-level block, and the serial number array corresponding to the second-level block is used for indicating the serial number of an object instance included in the second-level block in an object instance array included in the first-level block;
the offset of the secondary partition data corresponding to each of the Q secondary partitions;
and each secondary partition of the Q secondary partitions corresponds to secondary partition data.
In some embodiments, the Q secondary partitions include all secondary partitions in the primary partition, or only secondary partitions that include object instance data.
In some embodiments, the scene management unit 540 is further configured to:
dividing the target scene according to a preset first-level block size to obtain N first-level blocks;
and performing secondary partition on the N primary partitions according to the partition level of the secondary partitions and the number of object instances included in each primary partition, wherein the partition level of the secondary partitions is used for indicating a threshold value of the number of object instances included in one secondary partition.
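One way to realize this rule is sketched below; the 2x2x2 doubling subdivision and the even-spread assumption are illustrative choices of this sketch, not requirements of the embodiment.

// Illustrative only: second-level partitioning of one first-level block.
#include <cstdint>

// Returns the number of second-level blocks per axis for a first-level block.
uint32_t SecondLevelDivisionsPerAxis(uint32_t instancesInBlock,
                                     uint32_t instancesPerSecondLevelBlockThreshold) {
    uint32_t divisionsPerAxis = 1;
    uint64_t perBlock = instancesInBlock;
    while (perBlock > instancesPerSecondLevelBlockThreshold) {
        divisionsPerAxis *= 2;  // subdivide each axis once more
        perBlock /= 8;          // 2 x 2 x 2 children per block, assuming even spread
    }
    return divisionsPerAxis;
}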
In some embodiments, the obtaining unit 510 is further configured to:
acquiring object instance data corresponding to point object instances or object instance data corresponding to regional object instances in the physical scene;
and carrying out gridding treatment on the object instance data corresponding to the point object instance or the object instance data corresponding to the area object instance to obtain the object instance data of the voxel object.
Fig. 22 shows a schematic block diagram of a scene data processing apparatus 600 according to another embodiment of the present application, which may be provided in a device with a calculation processing function, such as a terminal device or a server. The apparatus 600 may correspond to the scene data processing apparatus in the method 300.
Referring to fig. 22, a scene data processing apparatus 600 according to an embodiment of the present application includes:
a first obtaining unit 610, configured to obtain an object data structure, where the object data structure is used to indicate an object type and object data corresponding to at least one object, where the object type includes at least one of a point object, a region object, and a voxel object, and the object data is used to indicate a distribution range of the object;
a second obtaining unit 620, configured to obtain a scene data structure corresponding to a target scene, where the target scene is divided into N primary partitions, each primary partition is divided into M secondary partitions, and the scene data structure is configured to indicate at least one of object instance data in the target scene, object instance data in a target primary partition in the target scene, and object instance data in a secondary partition in the target primary partition, where the target primary partition includes some or all of the N primary partitions;
and a scene management unit 630, configured to manage the target scene according to the scene data structure and the object data structure.
In some embodiments, if the object type is a point object, the object data of the point object is null;
if the object type is a region object: in the case that the region object is a three-dimensional region object, the object data of the three-dimensional region object includes, for each of the plurality of surfaces constituting the three-dimensional region object, the coordinates of one point on the surface, the normal vector of the surface, and the direction of the region corresponding to the surface; in the case that the region object is a two-dimensional region object, the object data of the two-dimensional region object includes, for each of the plurality of line segments constituting the two-dimensional region object, the coordinates of the two end points of the line segment and the direction of the region corresponding to the line segment;
if the object type is a voxel object, the object data of the voxel object comprises at least one of a size of the voxel object, a number X of voxel blocks included in the voxel object and X voxel block data. Wherein the number X of voxel blocks comprised by the voxel object refers to: the number of voxel blocks included in the bounding box of the voxel object.
In some embodiments, the scene data structure includes at least one of:
the object instance table corresponding to the target scene is used for indicating information of object instances included in the target scene;
The object instance array corresponding to each first-level block in the target first-level block is used for indicating information of object instances included in the first-level block;
and the sequence number array corresponding to the second-level partition in each first-level partition is used for indicating the sequence number of the object instance included in the second-level partition in the object instance array corresponding to the first-level partition.
In some embodiments, the object instance table includes: the object identification of the object to which each object instance belongs and the transformation data of each object instance relative to the object to which each object instance belongs are included in the target scene, wherein the transformation data of each object instance relative to the object to which each object instance belongs comprises at least one of a starting position coordinate of each object instance, a scaling factor of each object instance relative to the object to which each object instance belongs, and a rotation vector of each object instance relative to the object to which each object instance belongs.
In some embodiments, the object instance array corresponding to the first level partition includes: and the object instance identifier in the object instance array points to one object instance in the object instance table.
In some embodiments, if the first level partition includes a region object instance, the object instance data is used to indicate a set of faces or line segments located in a second level partition of the first level partition among a plurality of faces or line segments included in the region object instance;
if a primary segment includes a voxel object instance, the object instance data is used for indicating a start index of a voxel block located in the primary segment in a set of voxel blocks included in the voxel object instance, or the object instance data is used for indicating voxel block data located in a secondary segment of the primary segment in the set of voxel blocks included in the voxel object instance.
In some embodiments, the first obtaining unit 610 is further configured to:
acquiring an object file, wherein the object file is used for storing description information of at least one object and object data of each object in the at least one object, the description information of the object comprises an object type, and the object data is used for describing a distribution range of the object;
and analyzing the object file to obtain the object data structure.
In some embodiments, the description information of the at least one object includes an object identifier corresponding to each object, an object type, and a start offset of object data of each object in the object file.
In some embodiments, the second obtaining unit 620 is further configured to:
acquiring a scene file, wherein the scene file is used for storing at least one of object instance data in a target scene, primary block data in primary blocks in the target scene and secondary block data of secondary blocks in the primary blocks;
and analyzing the scene file to obtain a scene data structure corresponding to the target scene.
In some embodiments, the scene file includes at least one of:
a file header for storing at least one of description information of the target scene and an offset of other parts in the scene file;
an object instance table corresponding to the target scene;
the block description information of the target scene is used for indicating index data of a first-level block in the scene file, wherein the index data of the first-level block comprises at least one of a starting offset of the block data of the first-level block and a length of the block data of the first-level block;
and the block data information of the target scene, used for indicating block data of first-level blocks in the target scene, wherein the block data of a first-level block includes at least one of first-level block data and second-level block data of the first-level block, the first-level block data being used for indicating object instance data included in the first-level block, and the second-level block data of the first-level block being used for indicating object instance data included in the second-level blocks of the first-level block.
In some embodiments, the description information of the target scene includes at least one of:
the method comprises the steps of starting a position of the target scene, the size of a first-level block in the target scene, the number of first-level blocks in the target scene, and the number of segments in the target scene, wherein one segment comprises at least one first-level block.
In some embodiments, the tile description information of the target scene includes at least one of the following information:
hierarchical data corresponding to each of the L hierarchies in the target scene, segment data corresponding to each of the segments in each hierarchy, and index data corresponding to each of the first-level blocks in each segment;
The target scene is divided into L layers, each layer comprises at least one segment, each segment comprises at least one primary segment, and L is a positive integer.
In some embodiments, the hierarchically corresponding hierarchical data includes at least one of:
the method comprises the steps of layering an index, a minimum first-level block index in layering, a maximum first-level block index in layering, the number of first-level blocks in layering, the number S of segments in layering, a first index data offset corresponding to layering and a first block data offset corresponding to layering, wherein the first index data offset is used for indicating the initial offset of segment data corresponding to the first segment in layering in the block description information, and the first block data offset is used for indicating the initial offset of the block data included in layering in the block data information.
In some embodiments, the segment data corresponding to the segment includes at least one of:
the first-level block index corresponding to the segment, the second index data offset corresponding to the segment, and the second block data offset corresponding to the segment are included in the segment, wherein the second index data offset corresponding to the segment is used for indicating the initial offset of the index data corresponding to the first-level block in the segment relative to the segment data corresponding to the segment, and the second block data offset corresponding to the segment is used for indicating the initial offset of the block data corresponding to the segment relative to the block data included in the segment belonging to the layer.
In some embodiments, the index data corresponding to each first-level block in a segment includes at least one of:
the in-segment first-level block index corresponding to each first-level block included in the segment;
a third block data offset corresponding to each in-segment first-level block index, used for indicating the initial offset of the block data included in the corresponding first-level block relative to the block data included in the segment;
and the size of the block data included in the first-level block corresponding to each in-segment first-level block index.
In some embodiments, the first-level block data corresponding to a first-level block includes at least one of the following information:
the number P of object instances included in the primary partition;
the data length in the first-level block data;
description information of each object instance in the P object instances, wherein the description information comprises an object instance identification and an offset of object instance data in the primary partition data;
and the object instance data corresponding to each object instance.
In some embodiments, if the first level partition includes a region object instance, the object instance data includes a set of faces or line segments located in a second level partition of the first level partition from a plurality of faces or line segments included in the region object instance;
If the first-level partition includes a voxel object instance, the object instance data is of a first type or a second type, where the object instance data of the first type includes: the start index, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the first-level partition; and the object instance data of the second type includes: the voxel block data, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the second-level partitions of the first-level partition.
In some embodiments, the object instance data further includes type indication information for indicating that the type of the object instance data is the first type or the second type.
In some embodiments, the second level data corresponding to the second level of the first level of the blocks includes at least one of the following information:
the primary block comprises a secondary block quantity Q;
the type indication information of the second-level block data corresponding to each second-level block in the Q second-level blocks is used for indicating that the type of the second-level block data is a third type or a fourth type, wherein the second-level block data of the third type comprises a second-level block index corresponding to the second-level block and an instance serial number corresponding to the second-level block, the fourth type comprises a serial number array corresponding to the second-level block, and the serial number array corresponding to the second-level block is used for indicating the serial number of an object instance included in the second-level block in an object instance array included in the first-level block;
The offset of the secondary partition data corresponding to each of the Q secondary partitions;
and each secondary partition of the Q secondary partitions corresponds to secondary partition data.
In some embodiments, the Q secondary partitions include all secondary partitions in the primary partition, or only secondary partitions that include object instance data.
In some embodiments, the second obtaining unit 620 is further configured to:
determining a target layering of the target first-level partitioning according to a first-level partitioning index corresponding to the target first-level partitioning and layering data corresponding to L layering in the scene file;
acquiring the number S of segments in the target hierarchy, a first index data offset corresponding to the target hierarchy and a first block data offset corresponding to the target hierarchy according to the segment data corresponding to the segments in the target hierarchy;
reading segment data corresponding to S segments at the position of the first index data offset from the segment description information, and obtaining segment data corresponding to each segment in the S segments, wherein the segment data comprises a first-level segment index, a second index data offset corresponding to the segment and a second segment data offset corresponding to the segment;
Determining a target segment where the target first-level block is located according to a starting first-level block index included in each segment and an intra-layer index corresponding to the target first-level block in the target hierarchy;
reading index data corresponding to K first-level blocks of a second index data offset position corresponding to the target segment from the first index data offset position, wherein K is the number of first-level blocks included in the target segment;
determining a first-level block index of the target first-level block in a segment in the target segment according to a first-level block index included in the target segment and an intra-layer index corresponding to the target first-level block in the target layer;
determining a target first-level block index corresponding to the target first-level block in the target segment according to the first-level block index in the segment corresponding to the target first-level block;
acquiring index data corresponding to the target first-level block index, wherein the index data comprises a third block data offset and length information of block data;
taking the sum of the first block data offset, the second block data offset and the third block data offset as a corresponding target offset of the block data of the target first-level block in the block data information;
Acquiring the block data included in the target first-level block from the block data information according to the target offset and the length information of the block data;
and determining the first-level block data corresponding to the target first-level block and the second-level block data corresponding to the second-level block in the target first-level block according to the block data included in the target first-level block.
In some embodiments, the scene management unit 630 is further configured to:
acquiring the starting position and the distribution range of an object instance to be added;
determining a first secondary partition block for adding the object instance to be added into a first primary partition block in the target scene according to the starting position and the distribution range of the object instance to be added, wherein the object instance to be added at least partially overlaps with the first secondary partition block;
acquiring object data of an object to which the object to be added belongs from the object data structure;
determining transformation data of the object to be added relative to the object to be added according to the starting position and the distribution range of the object to be added and the object data of the object to be added;
and adding the object to be added to the object instance table based on the transformation data of the object to be added relative to the object to be added, and adding the object to be added to an object instance array corresponding to the first level-one partition and a sequence number array corresponding to the first level-two partition to obtain an updated scene data structure.
In some embodiments, the scene management unit 630 is further configured to:
acquiring a range to be queried;
determining a second level partition in the target scene according to the range to be queried;
acquiring a target object instance and object instance data corresponding to the target object instance, which are included in the second-level partition, according to a scene data structure corresponding to the target scene;
acquiring object data of an object to which the target object instance belongs according to the object data structure;
determining distribution data of the target object instance according to object instance data corresponding to the target object instance and object data of an object to which the target object instance belongs, wherein the distribution data is used for describing position information and a distribution range of the target object instance;
and determining whether the range to be queried comprises the target object instance according to the distribution data of the target object instance.
In some embodiments, the scene management unit 630 is further configured to:
obtaining an object instance identifier of an object instance to be deleted;
and deleting the information of the object instance to be deleted from the object instance table.
In some embodiments, as shown in fig. 23, the embodiment of the present application further provides an electronic device 700, including a processor 701, a memory 702, and a computer program stored in the memory 702 and capable of running on the processor 701, where the computer program, when executed by the processor 701, implements the respective processes of the above-described scene data processing method embodiments and achieves the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application may be, for example, a mobile electronic device, or may also be a non-mobile electronic device.
In some embodiments of the present application, the processor may include, but is not limited to:
a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the present application, the memory includes, but is not limited to:
volatile memory and/or nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be partitioned into one or more modules that are stored in the memory and executed by the processor to perform the methods provided herein. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, which are used to describe the execution of the computer program in the electronic device.
In some embodiments, the electronic device may further include:
a transceiver connectable to the processor or the memory.
The processor may control the transceiver to communicate with other devices, and in particular, may send information or data to other devices, or receive information or data sent by other devices. The transceiver may include a transmitter and a receiver. The transceiver may further include antennas, the number of which may be one or more.
It will be appreciated that the various components in the electronic device are connected by a bus system that includes, in addition to a data bus, a power bus, a control bus, and a status signal bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiments described above.
When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces, in whole or in part, a flow or function consistent with embodiments of the present application. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (digital video disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
It will be appreciated that in the specific implementation of the present application, when the above embodiments of the present application are applied to specific products or technologies and relate to data related to user information and the like, user permission or consent needs to be obtained, and the collection, use and processing of the related data needs to comply with the relevant laws and regulations and standards.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely a specific implementation of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and all changes and substitutions are included in the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (32)

1. A scene data processing method for processing data in a virtual scene, the method comprising:
obtaining an object data structure, wherein the object data structure is used for indicating at least one object type and object data respectively corresponding to at least one object, the object type comprises at least one of a point object, a region object and a voxel object, and the object data is used for indicating the distribution range of the object;
acquiring a scene data structure corresponding to a target scene, wherein the target scene is divided into N primary blocks, each primary block is divided into M secondary blocks, and the scene data structure is used for indicating at least one of object instance data in the target scene, primary block data in the target primary block in the target scene and secondary block data in the secondary block of the target primary block;
and managing the target scene according to the scene data structure and the object data structure.
2. The method of claim 1, wherein if the object type is a point object, object data of the point object is null;
if the object type is a region object, in the case that the region object is a three-dimensional region object, the object data of the three-dimensional region object includes coordinates of one point on each of a plurality of surfaces constituting the three-dimensional region object, a normal vector of each surface, and a direction of the region corresponding to each surface; and in the case that the region object is a two-dimensional region object, the object data of the two-dimensional region object includes coordinates of the two end points of each of a plurality of line segments constituting the two-dimensional region object, and a direction of the region corresponding to each line segment;
If the object type is a voxel object, the object data of the voxel object comprises at least one of the size of the voxel object, the number X of voxel blocks included by the voxel object and X voxel block data, and X is a positive integer.
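Illustrative note (not part of the claims): the object data structure of claims 1-2 can be pictured as a set of records keyed by object identifier, where the payload depends on the object type. The following Python sketch uses hypothetical class and field names that the claims do not prescribe; it is one plausible in-memory shape, not the claimed encoding.

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional, Tuple

class ObjectType(Enum):
    POINT = 0    # point object: object data is null
    REGION = 1   # region object: faces (3D) or line segments (2D)
    VOXEL = 2    # voxel object: a set of voxel blocks

@dataclass
class Face3D:                               # one face of a three-dimensional region object
    point: Tuple[float, float, float]       # coordinates of one point on the face
    normal: Tuple[float, float, float]      # normal vector of the face
    region_direction: int                   # direction of the region corresponding to the face

@dataclass
class Segment2D:                            # one line segment of a two-dimensional region object
    endpoint_a: Tuple[float, float]
    endpoint_b: Tuple[float, float]
    region_direction: int                   # direction of the region corresponding to the segment

@dataclass
class VoxelData:                            # voxel object data
    voxel_size: float
    block_count: int                        # X, the number of voxel blocks
    blocks: List[bytes] = field(default_factory=list)   # X voxel block data

@dataclass
class ObjectRecord:
    object_id: int
    object_type: ObjectType
    faces: Optional[List[Face3D]] = None        # populated for 3D region objects
    segments: Optional[List[Segment2D]] = None  # populated for 2D region objects
    voxels: Optional[VoxelData] = None          # populated for voxel objects
    # for point objects all three payload fields stay None (object data is null)

ObjectDataStructure = Dict[int, ObjectRecord]   # keyed by object identifier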
3. The method of claim 1, wherein the scene data structure comprises at least one of:
the object instance table corresponding to the target scene, used for indicating information of the object instances included in the target scene;
the object instance array corresponding to each first-level block in the target scene, used for indicating information of the object instances included in the first-level block;
and the sequence number array corresponding to each second-level block in each first-level block, used for indicating the sequence numbers, in the object instance array corresponding to the first-level block, of the object instance data included in the second-level block.
4. The method of claim 3, wherein the object instance table comprises: an object identification of an object to which each object instance included in the target scene belongs, and transformation data of each object instance relative to the object to which it belongs, wherein the transformation data of an object instance relative to the object to which it belongs comprises at least one of a starting position coordinate of the object instance, a scaling factor of the object instance relative to the object to which it belongs, and a rotation vector of the object instance relative to the object to which it belongs.
5. The method of claim 3, wherein the object instance array corresponding to the primary partition comprises: an object instance identifier and object instance data of each object instance included in the primary partition, and the object instance identifier in the object instance array points to one object instance in the object instance table.
6. The method of claim 5, wherein if the primary partition includes a region object instance, the object instance data is used for indicating, from the plurality of faces or line segments included in the region object instance, the set of faces or line segments located in a secondary partition of the primary partition;
if the primary partition includes a voxel object instance, the object instance data is used for indicating a start index, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the primary partition, or the object instance data is used for indicating the voxel block data, in the set of voxel blocks included in the voxel object instance, located in a secondary partition of the primary partition.
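Illustrative note (not part of the claims): claims 3-6 describe three cooperating containers — an object instance table for the scene, an object instance array per primary partition whose entries pair an instance identifier with per-partition instance data, and a sequence number array per secondary partition that indexes into the primary partition's array. A minimal Python sketch, with assumed names and types:

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ObjectInstance:                       # one row of the object instance table
    object_id: int                                          # object the instance belongs to
    start_position: Tuple[float, float, float]              # starting position coordinate
    scale: float = 1.0                                      # scaling factor relative to the object
    rotation: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # rotation vector relative to the object

@dataclass
class InstanceEntry:                        # one entry of a primary partition's object instance array
    instance_id: int                        # points to one object instance in the instance table
    instance_data: bytes = b""              # e.g. the faces / voxel blocks that fall inside this partition

@dataclass
class SceneDataStructure:
    instance_table: Dict[int, ObjectInstance] = field(default_factory=dict)
    # object instance array per primary partition, keyed by primary partition index
    primary_block_instances: Dict[int, List[InstanceEntry]] = field(default_factory=dict)
    # sequence number array per (primary partition, secondary partition) pair;
    # each number is an index into that primary partition's object instance array
    secondary_block_sequences: Dict[Tuple[int, int], List[int]] = field(default_factory=dict)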
7. The method of claim 1, wherein the acquiring the object data structure comprises:
Acquiring an object file, wherein the object file is used for storing description information of at least one object and object data of each object in the at least one object, the description information of the object comprises an object type, and the object data is used for describing a distribution range of the object;
and analyzing the object file to obtain the object data structure.
8. The method of claim 7, wherein the description information of the at least one object includes an object identifier corresponding to each object, an object type, and a starting offset of the object data of each object in the object file.
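Illustrative note (not part of the claims): because each description record of claims 7-8 carries the starting offset of its object data inside the object file, the descriptions can be parsed first and each data section then sliced out by offset. A minimal sketch assuming a hypothetical fixed-size, little-endian record of (object identifier, object type, starting offset); the real file layout is defined by the implementation, not by this sketch.

import struct
from typing import List, Tuple

RECORD_FMT = "<IIQ"                       # assumed: uint32 id, uint32 type, uint64 data offset
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def parse_object_descriptions(file_bytes: bytes, object_count: int) -> List[Tuple[int, int, int]]:
    # Return (object_id, object_type, data_offset) for every description record.
    records = []
    for i in range(object_count):
        records.append(struct.unpack_from(RECORD_FMT, file_bytes, i * RECORD_SIZE))
    return records

def read_object_data(file_bytes: bytes, data_offset: int, data_length: int) -> bytes:
    # Slice one object's data section out of the object file by its starting offset.
    return file_bytes[data_offset:data_offset + data_length]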
9. The method according to claim 1, wherein the obtaining a scene data structure corresponding to the target scene includes:
acquiring a scene file, wherein the scene file is used for storing at least one of object instance data in a target scene, primary block data in primary blocks in the target scene and secondary block data of secondary blocks in the primary blocks;
and analyzing the scene file to obtain a scene data structure corresponding to the target scene.
10. The method of claim 9, wherein the scene file comprises at least one of:
a file header for storing at least one of description information of the target scene and an offset of other parts in the scene file;
an object instance table corresponding to the target scene;
block description information of the target scene, used for indicating index data of the first-level blocks in the scene file, wherein the index data of a first-level block comprises at least one of a starting offset of the block data of the first-level block and a length of the block data of the first-level block;
and block data information of the target scene, used for indicating the block data of the first-level blocks in the target scene, wherein the block data of a first-level block includes at least one of first-level block data and second-level block data of the first-level block, the first-level block data being used for indicating object instance data included in the first-level block, and the second-level block data of the first-level block being used for indicating object instance data included in the second-level blocks of the first-level block.
11. The method of claim 10, wherein the description information of the target scene includes at least one of:
a starting position of the target scene, the size of a first-level block in the target scene, the number of first-level blocks in the target scene, and the number of segments in the target scene, wherein one segment comprises at least one first-level block.
12. The method of claim 10, wherein the block description information of the target scene includes at least one of:
hierarchical data corresponding to each of the L hierarchies in the target scene, segment data corresponding to each segment in each hierarchy, and index data corresponding to each first-level block in each segment;
wherein the target scene is divided into L hierarchies, each hierarchy comprises at least one segment, each segment comprises at least one first-level block, and L is a positive integer.
13. The method of claim 12, wherein the hierarchical data corresponding to a hierarchy includes at least one of:
a hierarchy index, a minimum first-level block index within the hierarchy, a maximum first-level block index within the hierarchy, the number of first-level blocks within the hierarchy, the number S of segments within the hierarchy, a first index data offset corresponding to the hierarchy, and a first block data offset corresponding to the hierarchy, wherein the first index data offset is used for indicating the starting offset, in the block description information, of the segment data corresponding to the first segment in the hierarchy, and the first block data offset is used for indicating the starting offset, in the block data information, of the block data included in the hierarchy.
14. The method of claim 12, wherein the segment data corresponding to the segment comprises at least one of:
a starting first-level block index corresponding to the segment, a second index data offset corresponding to the segment, and a second block data offset corresponding to the segment, wherein the second index data offset corresponding to the segment is used for indicating the starting offset of the index data corresponding to the first-level blocks in the segment relative to the segment data corresponding to the segment, and the second block data offset corresponding to the segment is used for indicating the starting offset of the block data included in the segment relative to the block data included in the hierarchy to which the segment belongs.
15. The method of claim 12, wherein the index data corresponding to each first-level block in a segment comprises at least one of:
an intra-segment first-level block index corresponding to the first-level block;
a third block data offset corresponding to the intra-segment first-level block index, used for indicating the starting offset of the block data included in the first-level block corresponding to that index relative to the block data included in the segment;
and the size of the block data included in the first-level block corresponding to the intra-segment first-level block index.
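Illustrative note (not part of the claims): claims 12-15 build a three-level index (hierarchy, segment, first-level block), and each level stores both an index-data offset and a block-data offset, so a block's data can be located without scanning the whole scene file. A Python sketch of the three record types, with assumed field names:

from dataclasses import dataclass

@dataclass
class HierarchyRecord:              # hierarchical data, claim 13
    hierarchy_index: int
    min_block_index: int            # minimum first-level block index within the hierarchy
    max_block_index: int            # maximum first-level block index within the hierarchy
    block_count: int                # number of first-level blocks within the hierarchy
    segment_count: int              # S, number of segments within the hierarchy
    first_index_offset: int         # start of this hierarchy's segment data in the block description information
    first_block_offset: int         # start of this hierarchy's block data in the block data information

@dataclass
class SegmentRecord:                # segment data, claim 14
    start_block_index: int          # starting first-level block index of the segment
    second_index_offset: int        # start of the segment's per-block index data, relative to the segment data
    second_block_offset: int        # start of the segment's block data, relative to the hierarchy's block data

@dataclass
class BlockIndexRecord:             # index data per first-level block, claim 15
    intra_segment_index: int        # first-level block index within the segment
    third_block_offset: int         # start of the block's data, relative to the segment's block data
    block_data_size: int            # size of the block data of the first-level block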
16. The method of claim 10, wherein the primary partition data corresponding to a primary partition includes at least one of the following information:
the number P of object instances included in the primary partition;
the data length of the primary partition data;
description information of each object instance in the P object instances, wherein the description information comprises an object instance identification and an offset of object instance data in the primary partition data;
and the object instance data corresponding to each object instance.
17. The method of claim 16, wherein if the primary partition includes a region object instance, the object instance data includes the set of faces or line segments, from the plurality of faces or line segments included in the region object instance, that are located in a secondary partition of the primary partition;
if the primary partition includes a voxel object instance, the object instance data is of a first type or a second type, wherein the object instance data of the first type includes: a start index, in the set of voxel blocks included in the voxel object instance, of the voxel blocks located in the primary partition; and the object instance data of the second type includes: the voxel block data, in the set of voxel blocks included in the voxel object instance, located in a secondary partition of the primary partition.
18. The method of claim 17, wherein the object instance data further includes type indication information for indicating whether the type of the object instance data is the first type or the second type.
19. The method of claim 10, wherein the secondary partition data corresponding to the secondary partitions in the primary partition includes at least one of the following information:
a number Q of secondary partitions included in the primary partition;
type indication information of the secondary partition data corresponding to each of the Q secondary partitions, used for indicating whether the type of the secondary partition data is a third type or a fourth type, wherein secondary partition data of the third type comprises a secondary partition index corresponding to the secondary partition and an instance sequence number corresponding to the secondary partition, secondary partition data of the fourth type comprises a sequence number array corresponding to the secondary partition, and the sequence number array corresponding to the secondary partition is used for indicating the sequence numbers, in the object instance array included in the primary partition, of the object instance data included in the secondary partition;
the offset of the secondary partition data corresponding to each of the Q secondary partitions;
and the secondary partition data corresponding to each of the Q secondary partitions.
20. The method of claim 19, wherein the Q secondary partitions include all secondary partitions in the primary partition or only secondary partitions with object instance data.
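Illustrative note (not part of the claims): within the block data information, claims 16-20 describe the data of one primary partition as a small header (instance count P and data length), a list of per-instance descriptors (instance identifier plus the offset of its instance data), the instance data itself, and optionally the data of its Q secondary partitions. A sketch with assumed names:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InstanceDescriptor:                  # description information of one object instance, claim 16
    instance_id: int
    data_offset: int                       # offset of the instance data within the primary partition data

@dataclass
class SecondaryPartitionEntry:             # secondary partition data, claim 19
    type_flag: int                         # third type (index + single sequence number) or fourth type (array)
    partition_index: Optional[int] = None  # third type: secondary partition index
    sequence_number: Optional[int] = None  # third type: single instance sequence number
    sequence_numbers: List[int] = field(default_factory=list)  # fourth type: sequence number array

@dataclass
class PrimaryPartitionData:
    instance_count: int                    # P
    data_length: int
    descriptors: List[InstanceDescriptor] = field(default_factory=list)
    instance_data: bytes = b""
    # Q entries; per claim 20 this may cover all secondary partitions or only those holding instance data
    secondary_partitions: List[SecondaryPartitionEntry] = field(default_factory=list)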
21. The method according to any one of claims 10-20, wherein the parsing the scene file to obtain the scene data structure corresponding to the target scene includes:
determining a target hierarchy of the target first-level block according to a first-level block index corresponding to the target first-level block and the hierarchical data corresponding to the L hierarchies in the scene file;
acquiring the number S of segments in the target hierarchy, a first index data offset corresponding to the target hierarchy, and a first block data offset corresponding to the target hierarchy according to the hierarchical data corresponding to the target hierarchy;
reading, from the block description information, the segment data corresponding to the S segments starting at the position of the first index data offset, to obtain segment data corresponding to each of the S segments, wherein the segment data comprises a starting first-level block index, a second index data offset corresponding to the segment, and a second block data offset corresponding to the segment;
determining a target segment where the target first-level block is located according to the starting first-level block index included in each segment and an intra-layer index corresponding to the target first-level block in the target hierarchy;
reading index data corresponding to K first-level blocks at a position determined by the first index data offset and the second index data offset corresponding to the target segment, wherein K is the number of first-level blocks included in the target segment;
determining an intra-segment first-level block index of the target first-level block in the target segment according to the starting first-level block index included in the target segment and the intra-layer index corresponding to the target first-level block in the target hierarchy;
determining a target first-level block index corresponding to the target first-level block in the target segment according to the intra-segment first-level block index corresponding to the target first-level block;
acquiring index data corresponding to the target first-level block index, wherein the index data comprises a third block data offset and length information of block data;
taking the sum of the first block data offset, the second block data offset and the third block data offset as a corresponding target offset of the block data of the target first-level block in the block data information;
Acquiring the block data included in the target first-level block from the block data information according to the target offset and the length information of the block data;
and determining the first-level block data corresponding to the target first-level block and the second-level block data corresponding to the second-level block in the target first-level block according to the block data included in the target first-level block.
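Illustrative note (not part of the claims): the arithmetic at the heart of claim 21 is that the block data of the target first-level block starts at the sum of the hierarchy's first block data offset, the segment's second block data offset, and the block's third block data offset, and spans the recorded length. A minimal sketch reusing the record types assumed in the earlier notes:

def locate_block_data(hierarchy, segment, block_index_record):
    # Returns (target_offset, length) of a first-level block's data inside the block data information.
    # The arguments are assumed to be the HierarchyRecord, SegmentRecord and
    # BlockIndexRecord sketched above; none of these names come from the claims.
    target_offset = (hierarchy.first_block_offset
                     + segment.second_block_offset
                     + block_index_record.third_block_offset)
    return target_offset, block_index_record.block_data_size

# Usage: slice the block data out of the block data information section.
# offset, length = locate_block_data(h, s, b)
# block_bytes = block_data_info[offset:offset + length]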
22. The method according to any one of claims 1-20, wherein said managing the target scene according to the scene data structure and the object data structure comprises:
acquiring the starting position and the distribution range of an object instance to be added;
determining, according to the starting position and the distribution range of the object instance to be added, a first primary partition in the target scene and a first secondary partition in the first primary partition to which the object instance to be added is to be added, wherein the object instance to be added at least partially overlaps the first secondary partition;
acquiring, from the object data structure, object data of the object to which the object instance to be added belongs;
determining transformation data of the object instance to be added relative to the object to which it belongs, according to the starting position and the distribution range of the object instance to be added and the object data of the object to which it belongs;
and adding the object instance to be added to the object instance table based on the transformation data of the object instance to be added relative to the object to which it belongs, and adding the object instance to be added to the object instance array corresponding to the first primary partition and the sequence number array corresponding to the first secondary partition, to obtain an updated scene data structure.
23. The method according to any one of claims 1-20, wherein said managing the target scene according to the scene data structure and the object data structure comprises:
acquiring a range to be queried;
determining a second-level partition in the target scene according to the range to be queried;
acquiring, according to the scene data structure corresponding to the target scene, a target object instance included in the second-level partition and object instance data corresponding to the target object instance;
acquiring object data of an object to which the target object instance belongs according to the object data structure;
determining distribution data of the target object instance according to object instance data corresponding to the target object instance and object data of an object to which the target object instance belongs, wherein the distribution data is used for describing position information and a distribution range of the target object instance;
And determining whether the range to be queried comprises the target object instance according to the distribution data of the target object instance.
24. The method according to any one of claims 1-20, wherein said managing the target scene according to the scene data structure and the object data structure comprises:
obtaining an object instance identifier of an object instance to be deleted;
and deleting the information of the object instance to be deleted from the object instance table.
25. A scene data processing method for processing data in a virtual scene, the method comprising:
acquiring an object instance data structure corresponding to a physical scene, wherein the object instance data structure comprises an object instance identifier, an object identifier of an object to which the object belongs and object instance data, which are respectively corresponding to at least one object instance in the physical scene, and the object instance data are used for describing position information and distribution range of the object instance;
processing the object instance data structure to obtain an object data structure, wherein the object data structure comprises at least one object type and object data corresponding to at least one object respectively, the object type comprises at least one of a point object, a region object and a voxel object, and the object data is used for indicating the distribution range of the object;
Adding the at least one object instance to a target scene according to the object instance data structure;
dividing the target scene, and determining a scene data structure corresponding to the target scene according to the distribution of object instances in the divided target scene, wherein the scene data structure is used for indicating at least one of object instance data in the target scene, primary partition data in a target primary partition in the target scene and secondary partition data in a secondary partition which further divides the target primary partition; and
and encoding the scene data structure and the object data structure.
26. The method of claim 25, wherein in the object instance data structure, if the object instance is a point object instance, the object instance data of the point object instance is coordinate information of the point object instance;
if the object instance is a region object instance, the object instance data of the region object instance is a set of the plurality of faces or the plurality of line segments constituting the region object instance;
and if the object instance is a voxel object instance, the object instance data of the voxel object instance is a set of the plurality of voxel blocks constituting the voxel object instance.
27. The method of claim 25, wherein the partitioning the target scene comprises:
dividing the target scene according to a preset first-level block size to obtain N first-level blocks, wherein N is a positive integer;
and performing secondary partitioning on the N primary partitions according to a partition level of the secondary partitions and the number of object instances included in each primary partition, wherein the partition level of the secondary partitions is used for indicating a threshold on the number of object instances included in one secondary partition.
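Illustrative note (not part of the claims): in claim 27 the primary split is uniform (a preset primary partition size over the scene extent) while the secondary split is adaptive (a primary partition is subdivided until each secondary partition is expected to hold at most a threshold number of instances). A simplified sketch, with assumed parameter names and an assumed octree-style split of eight children per level:

import math

def primary_partition_count(scene_extent: float, primary_size: float) -> int:
    # Number of primary partitions along one axis for a given scene extent.
    return math.ceil(scene_extent / primary_size)

def secondary_partition_level(instance_count: int, threshold: int, max_level: int = 4) -> int:
    # Pick a subdivision level so that, assuming a roughly even spread of instances,
    # each secondary partition holds at most `threshold` object instances.
    level, cells = 0, 1
    while instance_count > threshold * cells and level < max_level:
        level += 1
        cells = 8 ** level        # assumed: each level splits a cell into 8 children
    return level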
28. The method according to any one of claims 25-27, wherein the obtaining an object instance data structure corresponding to a physical scene comprises:
acquiring object instance data corresponding to point object instances or object instance data corresponding to region object instances in the physical scene;
and performing gridding processing on the object instance data corresponding to the point object instances or the object instance data corresponding to the region object instances to obtain the object instance data of the voxel object.
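Illustrative note (not part of the claims): the gridding processing of claim 28 maps point or region instance data onto a regular voxel grid so it can be stored as voxel data. A toy sketch for point instances, with an assumed voxel size; real region gridding would rasterize faces or line segments instead:

def grid_points(points, voxel_size):
    # `points` is an iterable of (x, y, z) tuples; the returned set of integer voxel
    # indices is one simple stand-in for the resulting voxel object instance data.
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size), int(y // voxel_size), int(z // voxel_size)))
    return occupied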
29. A scene data processing apparatus for processing data in a virtual scene, the apparatus comprising:
A first obtaining unit, configured to obtain an object data structure, where the object data structure is configured to indicate an object type and object data corresponding to at least one object, where the object type includes at least one of a point object, a region object, and a voxel object, and the object data is configured to indicate a distribution range of the object;
a second obtaining unit, configured to obtain a scene data structure corresponding to a target scene, where the target scene is divided into N primary partitions, each primary partition is divided into M secondary partitions, and the scene data structure is configured to indicate at least one of object instance data in the target scene, primary partition data in a target primary partition in the target scene, and secondary partition data in a secondary partition in the target primary partition;
and the scene management unit is used for managing the target scene according to the scene data structure and the object data structure.
30. A scene data processing apparatus for processing data in a virtual scene, the apparatus comprising:
an acquisition unit, configured to acquire an object instance data structure corresponding to a physical scene, wherein the object instance data structure comprises an object instance identifier, an object identifier of an object to which the object instance belongs, and object instance data, respectively corresponding to at least one object instance in the physical scene, and the object instance data is used for describing the position information and the distribution range of the object instance;
The data processing unit is used for processing the object instance data structure to obtain an object data structure, wherein the object data structure comprises at least one object type and object data corresponding to at least one object respectively, the object type comprises at least one of a point object, a region object and a voxel object, and the object data is used for indicating the distribution range of the object;
an adding unit, configured to add the at least one object instance to a target scene according to the object instance data structure;
a scene management unit, configured to divide the target scene, and determine a scene data structure corresponding to the target scene according to a distribution of object instances in the divided target scene, where the scene data structure is configured to indicate at least one of object instance data in the target scene, primary partition data in a target primary partition in the target scene, and secondary partition data in a secondary partition of the target primary partition;
and the encoding unit is used for encoding the scene data structure and the object data structure.
31. An electronic device, comprising:
One or more processors;
a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the scene data processing method of any of claims 1 to 24, or to implement the scene data processing method of any of claims 25 to 28.
32. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program is executed by a processor to implement the scene data processing method of any of claims 1 to 24, or to implement the scene data processing method of any of claims 25 to 28.
CN202311248750.6A 2023-09-25 2023-09-25 Scene data processing method, device, electronic equipment and storage medium Pending CN117315114A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311248750.6A CN117315114A (en) 2023-09-25 2023-09-25 Scene data processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311248750.6A CN117315114A (en) 2023-09-25 2023-09-25 Scene data processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117315114A true CN117315114A (en) 2023-12-29

Family

ID=89236588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311248750.6A Pending CN117315114A (en) 2023-09-25 2023-09-25 Scene data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117315114A (en)

Similar Documents

Publication Publication Date Title
Cignoni et al. BDAM—Batched Dynamic Adaptive Meshes for high performance terrain visualization
Zhu et al. Lossless point cloud geometry compression via binary tree partition and intra prediction
WO2023124842A1 (en) Lod-based bim model lightweight construction and display method
KR100233972B1 (en) Compression of simple geotric models using spanning trees
Schilling et al. Using glTF for streaming CityGML 3D city models
Gurung et al. SQuad: Compact representation for triangle meshes
US20070182734A1 (en) Adaptive Quadtree-based Scalable Surface Rendering
DE112018004584T5 (en) DENSITY COORDINATE HASHING FOR VOLUMETRIC DATA
CN101976468B (en) Method and system for visualizing multiresolution dynamic landform
WO2012096790A2 (en) Planetary scale object rendering
US20200118301A1 (en) Conversion of infrastructure model geometry to a tile format
US11238641B2 (en) Architecture for contextual memories in map representation for 3D reconstruction and navigation
CN116860905B (en) Space unit coding generation method of city information model
CN114820975B (en) Three-dimensional scene simulation reconstruction system and method based on all-element parameter symbolization
Rodriguez et al. Compression-domain seamless multiresolution visualization of gigantic triangle meshes on mobile devices
KR102592986B1 (en) Context modeling of occupancy coding for point cloud coding
Danovaro et al. Level-of-detail for data analysis and exploration: A historical overview and some new perspectives
Kim et al. Utilizing extended geocodes for handling massive three-dimensional point cloud data
CN117315114A (en) Scene data processing method, device, electronic equipment and storage medium
US11893691B2 (en) Point cloud geometry upsampling
Platings et al. Compression of Large‐Scale Terrain Data for Real‐Time Visualization Using a Tiled Quad Tree
Koca et al. A hybrid representation for modeling, interactive editing, and real-time visualization of terrains with volumetric features
CN113849495A (en) Point cloud dynamic hash partitioning method and device
KR20210042569A (en) Apparatus and method for constructing space information
Sarton et al. State‐of‐the‐art in Large‐Scale Volume Visualization Beyond Structured Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination