CN116935056A - Data processing method and device, storage medium and computer equipment - Google Patents

Data processing method and device, storage medium and computer equipment

Info

Publication number
CN116935056A
Authority
CN
China
Prior art keywords
point cloud
data
point
annotation
labeling
Prior art date
Legal status
Pending
Application number
CN202210341798.0A
Other languages
Chinese (zh)
Inventor
陈利虎
Current Assignee
Beijing Tusimple Technology Co Ltd
Original Assignee
Beijing Tusimple Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Tusimple Technology Co Ltd filed Critical Beijing Tusimple Technology Co Ltd
Priority to CN202210341798.0A
Publication of CN116935056A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)

Abstract

The application provides a data processing method, a data processing apparatus, a storage medium and computer equipment, and relates to the technical field of data processing. The method comprises the following steps: performing sparse processing on a first point cloud to obtain a second point cloud; obtaining second annotation data corresponding to the second point cloud; and obtaining first annotation data corresponding to the first point cloud according to the second annotation data. In this scheme, the high-density point cloud is thinned, the thinned point cloud is annotated, and the annotation data of the high-density point cloud is then derived from the correspondence between the two point clouds. This avoids the slow rendering or failed loading caused by the excessive data volume of a high-density point cloud during annotation, and improves the annotation efficiency for high-density point clouds.

Description

Data processing method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and apparatus for processing point cloud data, a storage medium, and a computer device.
Background
In general, when a point cloud is annotated, it is rendered and displayed in a browser through WebGL so that a user can perform the corresponding annotation operations. Because point cloud data consists only of individual points, a small-scale point cloud can simply be rendered point by point or surface by surface. However, for point cloud data with a large data volume, the performance limitations of the browser mean that loading and rendering the point cloud often takes a long time, and stalling or even failure to load may occur, which greatly reduces annotation efficiency.
Disclosure of Invention
Embodiments of the present application provide a point cloud data processing scheme to address the low efficiency of high-density point cloud annotation.
In order to achieve the above purpose, the application adopts the following technical scheme:
in a first aspect of an embodiment of the present application, there is provided a data processing method, including:
performing sparse processing on a first point cloud to obtain a second point cloud;
obtaining second annotation data corresponding to the second point cloud; and
obtaining first annotation data corresponding to the first point cloud according to the second annotation data.
In a second aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a data processing method as described above.
In a third aspect of embodiments of the present application, there is provided a computer device comprising a memory having stored therein at least one machine executable instruction and a processor executing the at least one machine executable instruction to implement the data processing method described above.
According to the data processing scheme provided by the embodiments of the present application, the high-density point cloud is thinned, the thinned point cloud is annotated, and the annotation data of the high-density point cloud is then obtained according to the correspondence between the two point clouds. This avoids the slow rendering or failed loading caused by the excessive data volume of a high-density point cloud during annotation, and improves the annotation efficiency for high-density point clouds.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a block diagram illustrating a data processing apparatus according to an exemplary embodiment;
FIG. 2 is a schematic diagram showing the architecture of a data processing apparatus according to an exemplary embodiment;
FIG. 3 is a first flowchart illustrating a data processing method according to an exemplary embodiment;
FIG. 4 is a second flowchart illustrating a data processing method according to an exemplary embodiment;
FIGS. 5a to 5f are schematic diagrams showing an actual application scenario according to an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
In the present disclosure, the term "plurality" means two or more, unless otherwise indicated. In this disclosure, the term "and/or" describes an association between associated objects and covers any and all possible combinations of the listed objects. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
In the present disclosure, unless otherwise indicated, the terms "first," "second," and the like are used to distinguish similar objects and are not intended to limit their positional, temporal, or importance relationship. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein can be implemented in ways other than those illustrated or described herein.
Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
A point cloud (Point Cloud) is the set of sample points of an object surface obtained by a measuring instrument. Specifically, a point cloud obtained according to the laser measurement principle includes three-dimensional coordinates (XYZ) and laser reflection intensity (Intensity); a point cloud obtained according to the photogrammetry principle includes three-dimensional coordinates (XYZ) and color information (RGB); and a point cloud obtained by combining laser measurement and photogrammetry includes three-dimensional coordinates (XYZ), laser reflection intensity (Intensity) and color information (RGB). A point cloud also generally has attributes such as point cloud density, spatial resolution and point position accuracy, where the point cloud density refers to the number of points within a given space.
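For ease of understanding, the sketches given in this description model a point cloud as a simple array of points. The TypeScript definitions below are purely illustrative assumptions (the field names intensity and rgb are not part of this disclosure) and do not limit the scheme:

```typescript
// Minimal point cloud model used by the illustrative sketches below.
// Field names (intensity, rgb) are assumptions, not part of the disclosure.
interface Point {
  x: number;
  y: number;
  z: number;
  intensity?: number;               // laser reflection intensity, if available
  rgb?: [number, number, number];   // color information, if available
}

type PointCloud = Point[];
```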
Point cloud sparse processing refers to processing a point cloud according to a certain rule so as to reduce the number of points and thereby lower the point cloud density. In general, the thinned point cloud has a definite correspondence with the original point cloud.
In the related art, an autonomous vehicle inevitably encounters a variety of environments during actual driving, and accurate perception of the objects in those environments is critical to driving safely. At the current stage of autonomous driving development, training the prediction algorithms relies on large data sets. As the algorithms demand ever more training data, the accuracy requirements on point cloud acquisition also rise; to express more accurate information, the original point cloud is often of high density. If such an original point cloud is provided directly to annotators, the performance limitations of the annotation client will reduce annotation efficiency.
Some embodiments of the application provide a data processing scheme. Fig. 1 shows a structure of a data processing apparatus according to an embodiment of the present application, the apparatus 1 comprising a processor 11 and a memory 12.
In some embodiments, the memory 12 may be a storage device of various forms, such as a transitory or non-transitory storage medium. At least one machine executable instruction may be stored in memory 12, which upon execution by processor 11 implements the data processing methods provided by embodiments of the present application.
In some embodiments, the data processing apparatus 1 may be located at the server side. In other embodiments, the data processing apparatus 1 may also be located in a cloud server. In other embodiments, the data processing apparatus 1 may also be located in a client.
As shown in fig. 2, the data processing provided by the embodiment of the present application may include front-end processing 13 and back-end processing 14. The front-end processing 13 displays the relevant three-dimensional point cloud frames and/or images and receives the data or information input by annotators; for example, the front-end processing 13 may be implemented as a web page or as the interface of a standalone application. The back-end processing 14 performs the corresponding data processing according to the data and information received by the front-end processing 13. After the data processing is completed, the data processing apparatus 1 may further provide the annotation result to other processes or applications on the client, the server, or the cloud server.
When the three-dimensional point cloud is displayed, the three-dimensional point cloud can be displayed according to the designated display direction. The designated display direction may be a preset display direction or an input display direction. For example, in some embodiments, after the data processing apparatus reads a frame of three-dimensional point cloud, the frame of three-dimensional point cloud may be displayed according to a preset display direction. For another example, in some embodiments, when the annotator needs to carefully observe the scene or object expressed by the three-dimensional point cloud, a desired display direction may be selected and input, and the data processing device displays the three-dimensional point cloud according to the received display direction, so as to facilitate the annotator to observe and identify. A point cloud frame is understood to be a three-dimensional point cloud displayed from a particular direction.
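As a purely illustrative front-end sketch of displaying a point cloud from a specified display direction, the snippet below assumes a three.js scene on top of WebGL; three.js and the function names used here are assumptions, not requirements of the scheme:

```typescript
import * as THREE from "three";

// Illustrative sketch: render a point cloud and view it from a given direction.
// Uses the Point/PointCloud types defined earlier; three.js is only one
// possible WebGL wrapper and is an assumption here.
function showPointCloud(
  scene: THREE.Scene,
  camera: THREE.PerspectiveCamera,
  cloud: PointCloud,
  viewDirection: THREE.Vector3    // requested display direction
): void {
  const positions = new Float32Array(cloud.length * 3);
  cloud.forEach((p, i) => {
    positions[i * 3] = p.x;
    positions[i * 3 + 1] = p.y;
    positions[i * 3 + 2] = p.z;
  });

  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
  scene.add(new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.05 })));

  // Place the camera along the requested direction, looking at the cloud center.
  geometry.computeBoundingSphere();
  const center = geometry.boundingSphere!.center;
  const distance = geometry.boundingSphere!.radius * 3;
  camera.position
    .copy(viewDirection)
    .normalize()
    .multiplyScalar(distance)
    .add(center);
  camera.lookAt(center);
}
```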
A data processing method implemented by the data processing apparatus 1 executing at least one machine executable instruction is described below.
Fig. 3 shows a flow of data processing performed by a data processing apparatus according to an embodiment of the present application, where the flow includes:
s301, performing sparse processing on the first point cloud to obtain a second point cloud.
Specifically, the first point cloud may be thinned in a variety of ways to reduce its density, for example with a Potree-based sparse algorithm or a custom sparse rule, which is not limited herein. The second point cloud obtained by the sparse processing has a correspondence with the first point cloud, and the specific correspondence depends on the sparse algorithm actually adopted.
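As one deliberately simple example of a custom sparse rule (an illustrative sketch only; Potree-style octree subsampling is equally possible), stride-based decimation keeps every Nth point and records, for each kept point, its index in the first point cloud, which makes the correspondence between the two clouds explicit:

```typescript
// Stride-based decimation: keep every `stride`-th point of the first cloud.
// `sourceIndex[j]` is the index in the first cloud of the j-th point of the
// second cloud; this correspondence is what step S305 relies on later.
function decimate(
  first: PointCloud,
  stride: number
): { second: PointCloud; sourceIndex: number[] } {
  const second: PointCloud = [];
  const sourceIndex: number[] = [];
  for (let i = 0; i < first.length; i += stride) {
    second.push(first[i]);
    sourceIndex.push(i);
  }
  return { second, sourceIndex };
}
```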
S303, second annotation data corresponding to the second point cloud is obtained.
Specifically, the annotation data may be input by an annotator through a human-computer interaction interface provided by the data processing apparatus. For example, the annotator may enter specific parameter values directly into a data input box of the interface, click a preset button or key that carries a corresponding preset instruction or data, or select the corresponding option from a pull-down menu provided by the interface; the pull-down menu may contain one or more sub-menus, and each sub-menu may contain one or more options. The data processing apparatus receives the annotation data input by the annotator through the human-computer interaction interface and uses it to annotate the second point cloud.
S305, obtaining first annotation data corresponding to the first point cloud according to the second annotation data.
Specifically, in the process of thinning the first point cloud to obtain the second point cloud, the points in the first point cloud and the points in the second point cloud acquire a correspondence. After the annotation data of the second point cloud is obtained, it can therefore be converted into the annotation data of the first point cloud by using this correspondence between the points of the two clouds.
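Under the assumption used in the embodiment below (and reflected in claim 7) that the target points keep their original coordinates, an annotation frame drawn on the second point cloud is directly valid for the first point cloud, and its object attributes only need to be associated with the first-cloud points that fall inside it. The axis-aligned AnnotationBox shape below is an illustrative assumption; real annotation frames are usually oriented, but the mapping idea is the same:

```typescript
// Illustrative annotation frame; axis-aligned for brevity.
interface AnnotationBox {
  min: { x: number; y: number; z: number };
  max: { x: number; y: number; z: number };
  attributes: Record<string, string>;   // e.g. object class, occlusion level
}

// Transfer second-cloud annotations to the first cloud: since coordinates are
// unchanged, reuse each frame and attach its attributes to the enclosed points.
function mapAnnotations(
  first: PointCloud,
  secondFrames: AnnotationBox[]
): Map<number, Record<string, string>> {
  const firstLabels = new Map<number, Record<string, string>>();
  first.forEach((p, i) => {
    for (const frame of secondFrames) {
      const inside =
        p.x >= frame.min.x && p.x <= frame.max.x &&
        p.y >= frame.min.y && p.y <= frame.max.y &&
        p.z >= frame.min.z && p.z <= frame.max.z;
      if (inside) firstLabels.set(i, frame.attributes);
    }
  });
  return firstLabels;
}
```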
According to the method shown in fig. 3, the data processing apparatus thins the first point cloud, which has a large data volume, to obtain a second point cloud, obtains the second annotation data corresponding to the second point cloud, and then obtains the first annotation data corresponding to the first point cloud according to the second annotation data. Thinning the original point cloud reduces the amount of point cloud data that must be processed during annotation and thus improves rendering efficiency, while deriving the annotation data of the original point cloud from the annotation data of the thinned point cloud allows the annotation efficiency to be improved even when the performance of the annotation equipment is limited.
As shown in fig. 4, in one embodiment, the above step S301 may be implemented through the following steps:
s3011, dividing the data of the first point cloud to obtain at least one point cloud block; wherein each point cloud block comprises at least one point.
Specifically, a bounding box corresponding to the first point cloud may first be determined, where the bounding box contains the point cloud of the entity to be annotated. In general, a point cloud can be regarded as a set of points in a certain space, each with three-dimensional coordinates, so the bounding box of the first point cloud is a three-dimensional box that contains all the points of the first point cloud. The division and bounding-box procedures described here only illustrate possible implementations of the sparse processing and should not be taken as limiting the scheme; in practice, any sparse processing method that reduces the density of the original point cloud enough to meet the annotation requirements may be chosen according to the actual situation.
Then the first space inside the bounding box is divided into a plurality of second spaces according to a preset precision, and for each second space, in response to there being at least one point in that second space, the points in it form a point cloud block. Because the points of the point cloud have three-dimensional coordinates, dividing the first point cloud actually divides the space the points occupy. When the first space inside the bounding box is divided into a plurality of second spaces, the bounding box may be split along its length, width and/or height, and the plurality of second spaces is obtained after this split. Given the characteristics of the point cloud distribution and the chosen division precision, some of the resulting second spaces may contain no points; during the sparse processing these empty spaces need not be handled, and only the second spaces that contain points are processed. In other words, whether each second space contains a point can be checked, and whenever a second space contains at least one point, the points in that space are treated as a point cloud block.
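One way to realize this division (a sketch under the assumption of a uniform grid with a single cell edge length `precision`; splitting the length, width and height with different precisions is equally possible) is to hash each point into the grid cell that contains it, so that only non-empty cells ever become point cloud blocks:

```typescript
// Divide the first cloud's bounding box into cubic cells of edge `precision`
// and group the points by cell; empty cells are simply never created.
function partition(first: PointCloud, precision: number): Map<string, PointCloud> {
  // Lower corner of the bounding box of the first point cloud.
  let minX = Infinity, minY = Infinity, minZ = Infinity;
  for (const p of first) {
    minX = Math.min(minX, p.x);
    minY = Math.min(minY, p.y);
    minZ = Math.min(minZ, p.z);
  }

  const blocks = new Map<string, PointCloud>();
  for (const p of first) {
    const key = [
      Math.floor((p.x - minX) / precision),
      Math.floor((p.y - minY) / precision),
      Math.floor((p.z - minZ) / precision),
    ].join(",");
    if (!blocks.has(key)) blocks.set(key, []);
    blocks.get(key)!.push(p);
  }
  return blocks;
}
```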
S3013, determining at least one target point for each of the at least one point cloud block.
Specifically, at least one target point may be selected for each point cloud block according to a preset rule. The division described above produces a plurality of point cloud blocks, and a target point is determined for each of them, so that the number of points in each block is reduced, which lowers the density of the point cloud and reduces the data volume.
When determining the target points of each point cloud block, different numbers of target points may be chosen according to constraints such as the accuracy requirement and the processing capacity of the system. Generally, if the division above is performed with a higher precision, the number of second spaces is larger, and fewer target points can then be selected for each point cloud block. When selecting a target point, a point that best represents the characteristics of its block is usually preferred. For example, if one point is to be selected per block, the point that minimizes the sum of the distances from the other points in the block can be chosen; alternatively, the target point may be determined by computing the center of gravity of the points within the block. Further, since different blocks may contain different numbers of points, each block may be given a different number of target points so as to better characterize it. The scheme is not limited in this respect.
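The sketch below picks, for each block, the original point closest to the block's center of gravity (a medoid-like choice that keeps original coordinates) and collects the chosen points into the second point cloud; choosing several points per block, or the centroid itself, are equally valid variants, and this rule is an illustrative assumption only:

```typescript
// Select one representative (target) point per block and collect them into
// the second point cloud. The representative is the original point nearest
// to the block's center of gravity, so its coordinates stay unchanged.
function selectTargets(blocks: Map<string, PointCloud>): PointCloud {
  const second: PointCloud = [];
  for (const block of blocks.values()) {
    const cx = block.reduce((s, p) => s + p.x, 0) / block.length;
    const cy = block.reduce((s, p) => s + p.y, 0) / block.length;
    const cz = block.reduce((s, p) => s + p.z, 0) / block.length;
    let best = block[0];
    let bestDist = Infinity;
    for (const p of block) {
      const d = (p.x - cx) ** 2 + (p.y - cy) ** 2 + (p.z - cz) ** 2;
      if (d < bestDist) { bestDist = d; best = p; }
    }
    second.push(best);
  }
  return second;
}
```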
S3015, constructing a second point cloud according to the target point.
Specifically, after the target points have been selected through the above steps, their coordinates may be kept unchanged and the selected target points directly form the second point cloud; alternatively, the target points may be coordinate-transformed according to a certain rule, and the second point cloud is then constructed from the transformed target points.
According to the above method, the data of the first point cloud is first divided, target points are selected from the resulting point cloud blocks, and the second point cloud is then constructed from these target points. This reduces the density of the original point cloud, and thus its data volume, while preserving its characteristics, which facilitates the subsequent annotation processing.
To better explain how the scheme is applied in practice, figs. 5a to 5f show a concrete application scenario in which the object to be annotated in the point cloud is the tail of a vehicle. For convenience of description, the figures mainly show the points related to the entity to be annotated.
Fig. 5e shows the effect of labeling for a second point cloud. Fig. 5f illustrates a process of obtaining annotation data for a first point cloud from annotation data for a second point cloud.
In this embodiment, fig. 5a shows the first point cloud and its bounding box from a certain viewing angle; the first point cloud contains the point cloud data of the vehicle tail to be annotated, and it can be seen that the point cloud density at the vehicle tail is high. Because the density of the first point cloud is high, it is thinned before being annotated by the annotators. Figs. 5b to 5d show one possible sparse processing: fig. 5b shows the bounding box divided equally along its length and height, so that the space inside the bounding box is split into a number of subspaces and the points in each subspace form a point cloud block; fig. 5c shows the determination of a target point for each point cloud block, where one or more points are kept per block as target points, thereby reducing the density of the first point cloud; and fig. 5d shows the resulting second point cloud. It can be seen that the second point cloud obtained by the sparse processing in fig. 5d has a lower density than the first point cloud.
It should be noted, however, that in this embodiment no coordinate transformation is applied to the target points determined from the first point cloud when the second point cloud is obtained. Moreover, figs. 5a to 5d are only intended to explain the overall sparse processing procedure; in actual operation the sparse processing need not go through a visual interface, but can be carried out by the back-end processing 14, after which the second point cloud is rendered and displayed by the front-end processing 13 so that the annotators can perform the annotation operations.
Further, as shown in fig. 5e, after the first point cloud has been thinned into the second point cloud, the second point cloud can be rendered at the front end through WebGL and annotated by the annotators to obtain the annotation data of the second point cloud, which may specifically include annotation frames, object attributes and the like. Then, as shown in fig. 5f, the annotation data of the first point cloud can be obtained from the annotation data of the second point cloud. In this embodiment the second point cloud is formed directly from the target points after they are determined, so the annotation data of the second point cloud is also the annotation data of the first point cloud. It should be noted, however, that if a uniform coordinate transformation was applied to the target points when the second point cloud was constructed, the coordinates of the annotation frames of the second point cloud must undergo the corresponding inverse transformation in order to obtain the coordinates of the annotation frames of the first point cloud.
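As an illustration of this inverse transformation, the sketch below reuses the illustrative AnnotationBox type above and assumes the second point cloud was built with a uniform scale-and-translate transform; the ScaleTranslate shape and the transform itself are assumptions, and any other invertible rule would be handled analogously:

```typescript
// Illustrative uniform transform applied when constructing the second cloud:
// p' = p * scale + offset. Annotation frames drawn on the second cloud must
// be pushed through the inverse before they are valid for the first cloud.
interface ScaleTranslate {
  scale: number;                              // assumed positive
  offset: { x: number; y: number; z: number };
}

function inverseTransformFrame(frame: AnnotationBox, t: ScaleTranslate): AnnotationBox {
  const inv = (v: { x: number; y: number; z: number }) => ({
    x: (v.x - t.offset.x) / t.scale,
    y: (v.y - t.offset.y) / t.scale,
    z: (v.z - t.offset.z) / t.scale,
  });
  return { min: inv(frame.min), max: inv(frame.max), attributes: frame.attributes };
}
```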
In this embodiment, the original point cloud is thinned, the thinned point cloud is rendered at the front end so that annotators can annotate it, and the annotation data of the original point cloud is then obtained from the correspondence between the original point cloud and the thinned point cloud. The sparse processing reduces the amount of point cloud data that the front-end rendering must handle and improves rendering efficiency, so annotation efficiency is improved and the influence of client performance on annotation efficiency is reduced. At the same time, because the thinned point cloud corresponds to the original point cloud, the annotation data of the original point cloud can be obtained from the annotation data of the thinned point cloud, and annotation accuracy is preserved while annotation efficiency is improved.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and embodiments of the present application have been described above with reference to specific examples, which are provided only to help understand the method and core ideas of the present application. At the same time, a person skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present application, and therefore the content of this description should not be construed as limiting the present application.

Claims (9)

1. A data processing method, comprising:
performing sparse processing on a first point cloud to obtain a second point cloud;
obtaining second annotation data corresponding to the second point cloud; and
obtaining first annotation data corresponding to the first point cloud according to the second annotation data.
2. The method of claim 1, wherein the performing sparse processing on the first point cloud to obtain the second point cloud comprises:
dividing the data of the first point cloud to obtain at least one point cloud block, wherein each point cloud block comprises at least one point;
determining a target point for each of the at least one point cloud block, respectively; and
constructing the second point cloud according to the target point.
3. The method of claim 2, wherein the dividing the data of the first point cloud to obtain at least one point cloud block comprises:
determining a bounding box corresponding to the first point cloud, wherein the bounding box contains the entity to be annotated;
dividing a first space in the bounding box into a plurality of second spaces according to a preset precision; and
for each second space, in response to there being at least one point in the second space, forming the points in the second space into a point cloud block.
4. The method of claim 2, wherein the determining a target point for each of the at least one point cloud block respectively comprises:
determining, for each point cloud block, at least one target point according to a preset rule.
5. The method of claim 2, wherein the constructing the second point cloud according to the target point comprises: keeping the coordinates of the target points unchanged, the target points forming the second point cloud.
6. The method of claim 1, wherein the obtaining the second annotation data corresponding to the second point cloud includes:
displaying the second point cloud; and
obtaining the second annotation data according to an annotation operation on the second point cloud.
7. The method of claim 1, wherein coordinates of corresponding points in the first point cloud and the second point cloud remain unchanged, and the obtaining, according to the second annotation data, first annotation data corresponding to the first point cloud includes:
obtaining a second annotation frame of the second point cloud and corresponding object attribute information according to the second annotation data;
determining a first annotation frame of the first point cloud according to the coordinate information of the second annotation frame, wherein the coordinate information of the first annotation frame is consistent with that of the second annotation frame; and
associating the corresponding object attribute information with the points in the first annotation frame to obtain the first annotation data.
8. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the data processing method of any one of claims 1 to 7.
9. A computer device comprising a memory having at least one machine executable instruction stored therein and a processor executing the at least one machine executable instruction to implement the data processing method of any of claims 1 to 7.
CN202210341798.0A 2022-03-29 2022-03-29 Data processing method and device, storage medium and computer equipment Pending CN116935056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210341798.0A CN116935056A (en) 2022-03-29 2022-03-29 Data processing method and device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210341798.0A CN116935056A (en) 2022-03-29 2022-03-29 Data processing method and device, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN116935056A true CN116935056A (en) 2023-10-24

Family

ID=88381345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210341798.0A Pending CN116935056A (en) 2022-03-29 2022-03-29 Data processing method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN116935056A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination