CN112580729A - Deep learning simulation data set manufacturing method based on CAD model - Google Patents

Deep learning simulation data set manufacturing method based on CAD model

Info

Publication number
CN112580729A
CN112580729A (application CN202011544842.5A)
Authority
CN
China
Prior art keywords
simulation
data set
cad model
deep learning
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011544842.5A
Other languages
Chinese (zh)
Inventor
李福东 (Li Fudong)
陶显 (Tao Xian)
赵家军 (Zhao Jiajun)
徐德 (Xu De)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou Blue State Digital Control Brush Equipment Co ltd
Yangzhou University
Original Assignee
Yangzhou Blue State Digital Control Brush Equipment Co ltd
Yangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou Blue State Digital Control Brush Equipment Co ltd, Yangzhou University filed Critical Yangzhou Blue State Digital Control Brush Equipment Co ltd
Priority to CN202011544842.5A priority Critical patent/CN112580729A/en
Publication of CN112580729A publication Critical patent/CN112580729A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/40Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/12Shadow map, environment map

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a deep learning simulation data set manufacturing method based on a CAD model. A deep learning data set is generated from the CAD model of the target and fed into a neural network for training, yielding fast and stable localization on pictures of actual workpieces. Compared with time-consuming and labor-intensive manual on-site picture collection, the method is more flexible and efficient, and provides a more real-time and stable way to detect and locate target workpieces. The method comprises two processes, generating simulation pictures and producing labels. Compared with the prior art, its advantages are: it addresses the difficulty of obtaining data sets when deep learning is applied to industrial target localization; a simulated workpiece data set for deep learning is produced by combining an OpenGL-based three-dimensional visual simulation technique with the CAD model; compared with collecting pictures in the field, this saves manpower and material resources, can generate a large number of training pictures in a short time, and achieves better localization real-time performance and stronger stability.

Description

Deep learning simulation data set manufacturing method based on CAD model
Technical Field
The invention relates to a deep learning data set manufacturing method, in particular to a deep learning simulation data set manufacturing method based on a CAD model.
Background
OpenGL is a graphics-hardware interface designed by SGI: a graphics software library for programmers that, like other application programming interfaces, is simple and easy to use. Other graphics APIs exist, but OpenGL has evolved into a de facto industry standard adopted by virtually all computer hardware platform manufacturers and is widely used by software developers [1]. With the continuous improvement of computer hardware performance, classification and detection algorithms based on deep learning have shown great advantages in key technical fields such as image processing, target detection, tracking and positioning, and real-time capture [2]. However, data sets are still lacking for most application scenes, and data collection is very difficult in some settings, which makes it hard to apply deep learning in industrial scenes. Three-dimensional image generation based on simulation technology can achieve a very realistic effect; a data set generated through simulation is more flexible and efficient and saves a great deal of manpower and material resources. At the same time, when pictures would otherwise have to be collected in toxic or corrosive operating environments, simulation better ensures safety and stability.
[1] Houwinman, Chenwuhe, Majiahong. OpenGL-based virtual simulation experiment design [J]. Research and Exploration in Laboratory, 2019, 38(06): 89-92.
[2] A review of object detection research based on deep learning [J]. Acta Electronica Sinica, 2020, 48(06): 1230-1239.
Disclosure of Invention
The invention aims to solve the above technical problems and provides a deep learning simulation data set manufacturing method based on a CAD model. A deep learning data set is generated from the CAD model of the target and fed into a neural network for training, achieving fast and stable localization on pictures of the actual workpiece. Compared with time-consuming and labor-intensive manual on-site picture collection, the method is more flexible and efficient, and provides a more real-time and stable way to detect and locate the target workpiece.
In order to solve the technical problems, the technical scheme provided by the invention is as follows: a deep learning simulation data set manufacturing method based on a CAD model comprises two processes, generating simulation pictures and producing labels:
The process of generating the simulation pictures comprises the following steps:
Step S1: using Blender, set texture and material characteristics matching the actual workpiece material on the target CAD model, including texture coordinate mapping, transparency, highlight, reflectivity, etc.;
Step S2: import the produced workpiece simulation model into an OpenGL simulation environment, and enable settings such as the diffuse map and the specular map;
Step S3: collect a certain number of pictures of the target workpiece's actual industrial application scene as required, and set them as simulation backgrounds for the simulated workpiece models;
Step S4: add to the simulation environment suitable ambient light, a directional light source at a certain angle, and a dynamic point light source whose position changes continuously in three-dimensional space, all using white light;
Step S5: vary the number and pose of the generated simulated workpieces, combine the dynamic light sources from step S4 with the simulated industrial backgrounds from step S3, vary the position and viewing angle of the virtual camera, and capture and save a large number of simulation pictures;
The process of producing the labels comprises the following steps:
Step S6: label the generated simulation pictures with LabelMe, including drawing 2D bounding boxes and annotating categories, and save the labels in a specific label format;
Step S7: organize each simulation picture with its corresponding label file, divide them into a training set, a validation set and a test set in a certain proportion, and complete the data set.
Further, step S1 specifically comprises: in Blender, adding a texture picture to the CAD model of the target workpiece under the "Texture" tab and setting the texture coordinate mapping parameters in the pull-down menu; under the "Material" tab, setting the diffuse reflection parameters, highlight mode, shading mode, specular reflection parameters, etc. for the CAD model;
and exporting the configured CAD model in obj format, generating at the same time an mtl file that stores the material information; the mtl file and the texture picture together serve as the input to the OpenGL graphics library.
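A minimal sketch of what OpenGL-side code consumes from the exported mtl file is shown below. Real loaders (Assimp, tinyobjloader) handle the full format; this only reads the keywords that appear in the embodiment's material listing, and the struct and function names are illustrative, not from the source.

```cpp
#include <sstream>
#include <string>
#include <array>

// Subset of a Wavefront .mtl material relevant to the OpenGL lighting setup.
struct Material {
    std::array<float,3> Ka{}, Kd{}, Ks{};  // ambient / diffuse / specular color
    float Ns = 0.f;                        // specular exponent
    std::string diffuseMap, specularMap;   // map_Kd / map_Ks texture file names
};

Material parseMtl(const std::string& text) {
    Material m;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string key;
        ls >> key;
        if      (key == "Ka") ls >> m.Ka[0] >> m.Ka[1] >> m.Ka[2];
        else if (key == "Kd") ls >> m.Kd[0] >> m.Kd[1] >> m.Kd[2];
        else if (key == "Ks") ls >> m.Ks[0] >> m.Ks[1] >> m.Ks[2];
        else if (key == "Ns") ls >> m.Ns;
        else if (key == "map_Kd") ls >> m.diffuseMap;
        else if (key == "map_Ks") ls >> m.specularMap;
        // other keywords (Ke, Ni, d, illum, ...) ignored in this sketch
    }
    return m;
}
```

The parsed Kd/Ks colors and the map_Kd/map_Ks texture names are exactly what steps S2 and S4 feed into the shader uniforms.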
Further, step S4 specifically comprises:
For ambient light, an ambient light constant is used that permanently gives the object some color: the light color is multiplied by a small constant ambient factor and by the object's color, and the result is taken as the fragment's color.
For the directional (parallel) light, the light direction vector is first negated, because the calculation requires a ray direction from the fragment towards the light source, and the vector is normalized. lightingShader.setVec3 is used to define the direction of the light source.
For the dynamic point light source, the following formula computes the attenuation value as a function of the distance between the fragment and the light source:
F_att = 1.0 / (Kc + Kl·d + Kq·d²)
where d is the distance between the fragment and the light source, and the three configurable terms, the constant term Kc, the linear term Kl and the quadratic term Kq, are stored in a defined Light structure. The attenuation value is computed by the formula and then multiplied with the ambient, diffuse and specular light components respectively; the dynamic position of the point light source over time is set using a sin(glfwGetTime()) function.
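The attenuation computation described above, F_att = 1.0 / (Kc + Kl·d + Kq·d²), can be written out directly; the coefficient values below are taken from the embodiment's shader setup (light.constant = 1.0, light.linear = 0.045, light.quadratic = 0.0075), while the struct and function names are illustrative.

```cpp
#include <cmath>

// Constant, linear and quadratic attenuation terms of the point light,
// kept in a Light structure as the description states.
struct Light {
    float Kc = 1.0f;     // constant term
    float Kl = 0.045f;   // linear term
    float Kq = 0.0075f;  // quadratic term
};

// Attenuation as a function of fragment-to-light distance d; the result is
// multiplied with the ambient, diffuse and specular components.
float attenuation(const Light& light, float d) {
    return 1.0f / (light.Kc + light.Kl * d + light.Kq * d * d);
}
```

At d = 0 the factor is exactly 1 (no attenuation), and it falls off smoothly with distance, which is what makes the dynamic point light produce varied illumination across the generated pictures.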
Further, step S5 is specifically:
single and multiple simulation model models were created and drawn by ourShader.setMat4 and ourModel.Draw, using glm:: translate, glm:: scale, glm:: rotate to change the position, size and rotation of the models, respectively. Using glm radians to convert angles to radians, including rotational weight settings for the x, y, z components of the target object coordinate system. The world coordinate system is transformed to the viewing space coordinate system by the following lokat matrix:
Figure BDA0002855634110000031
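The LookAt construction can be sketched without the GLM library using a small vector type; this mirrors what glm::lookAt computes (right-handed convention, rotation rows combined with the negated-position translation) but is an illustrative reimplementation, not the patent's code.

```cpp
#include <cmath>
#include <array>

using Vec3 = std::array<float,3>;
using Mat4 = std::array<std::array<float,4>,4>;  // row-major for readability

static Vec3 sub(Vec3 a, Vec3 b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static float dot(Vec3 a, Vec3 b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static Vec3 normalize(Vec3 v) {
    float n = std::sqrt(dot(v, v));
    return {v[0]/n, v[1]/n, v[2]/n};
}

// Build the view matrix from camera position, look-at target and world up.
Mat4 lookAt(Vec3 position, Vec3 target, Vec3 worldUp) {
    Vec3 D = normalize(sub(position, target));  // camera "backward" direction
    Vec3 R = normalize(cross(worldUp, D));      // right vector
    Vec3 U = cross(D, R);                       // camera up vector
    // Rotation rows combined with translation by -position (the product of
    // the two matrices in the text).
    return {{
        { R[0], R[1], R[2], -dot(R, position) },
        { U[0], U[1], U[2], -dot(U, position) },
        { D[0], D[1], D[2], -dot(D, position) },
        { 0.f,  0.f,  0.f,  1.f }
    }};
}
```

Applying this matrix to the camera's own position yields the view-space origin, which is a quick sanity check that the transform is correct.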
Further, step S6 specifically comprises:
In LabelMe, drawing a 2D rectangular bounding box for each model in the simulation picture, from the target's upper-left corner (Xmin, Ymin) as the starting point to the target's lower-right corner (Xmax, Ymax) as the end point, and adding the name of the category it belongs to; this is saved in a specific format such as an xml file, whose contents include the file name, picture name, resolution, number of channels, category, and bounding-box coordinate information.
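An xml annotation of the kind described can be emitted as below. The exact schema is not given in the source; this hypothetical helper follows the common Pascal-VOC layout (which matches the listed fields: file name, resolution, channels, category, bounding-box corners), and the function name is an assumption.

```cpp
#include <sstream>
#include <string>

// Write one VOC-style XML annotation string for a simulation picture with a
// single labeled object. Multi-object pictures would repeat the <object> block.
std::string makeVocAnnotation(const std::string& filename,
                              int width, int height, int depth,
                              const std::string& className,
                              int xmin, int ymin, int xmax, int ymax) {
    std::ostringstream xml;
    xml << "<annotation>\n"
        << "  <filename>" << filename << "</filename>\n"
        << "  <size><width>" << width << "</width><height>" << height
        << "</height><depth>" << depth << "</depth></size>\n"
        << "  <object>\n"
        << "    <name>" << className << "</name>\n"
        << "    <bndbox><xmin>" << xmin << "</xmin><ymin>" << ymin
        << "</ymin><xmax>" << xmax << "</xmax><ymax>" << ymax
        << "</ymax></bndbox>\n"
        << "  </object>\n"
        << "</annotation>\n";
    return xml.str();
}
```

Because the workpieces are rendered at known poses, such annotations could in principle also be generated programmatically from the render parameters rather than drawn by hand in LabelMe.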
Compared with the prior art, the invention has the following advantages: the method addresses the difficulty of obtaining a data set when deep learning is applied to industrial target localization, and produces a deep learning simulated workpiece data set by combining an OpenGL-based three-dimensional visual simulation technique with the CAD model.
Drawings
FIG. 1 is a flow chart of simulation data set generation in the present invention.
FIG. 2 is a CAD model and a pictorial representation of a workpiece according to the present invention.
FIG. 3 is a sample diagram of a simulation workpiece according to the present invention.
FIG. 4 is a schematic diagram of the industrial background of the present invention.
FIG. 5 is a schematic view of a variation of the light source of the present invention.
FIG. 6 is a sample diagram of a simulation data set according to the present invention.
FIG. 7 is a schematic diagram of the label of the present invention.
FIG. 8 is a diagram illustrating the positioning effect of the actual workpiece in the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it is to be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly, e.g. as a fixed connection, a detachable connection, or an integral connection; the specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
As shown in fig. 1, a deep learning simulation data set manufacturing method based on a CAD model comprises two processes, generating simulation pictures and producing labels:
Generating the simulation pictures:
Step S1: using Blender, set texture and material characteristics matching the actual workpiece material on the target CAD model, including texture coordinate mapping, transparency, highlight, reflectivity, etc. (see fig. 2);
Step S2: import the produced workpiece simulation model into an OpenGL simulation environment, and enable settings such as the diffuse map and the specular map (see fig. 3);
Step S3: collect a certain number of pictures of the target workpiece's actual industrial application scene as required, and set them as simulation backgrounds for the simulated workpiece models (see fig. 4);
Step S4: add to the simulation environment suitable ambient light, a directional light source at a certain angle, and a dynamic point light source whose position changes continuously in three-dimensional space, all using white light (see fig. 5);
Step S5: vary the number and pose of the generated simulated workpieces, combine the dynamic light sources from step S4 with the simulated industrial backgrounds from step S3, vary the position and viewing angle of the virtual camera, and capture and save a large number of simulation pictures (see fig. 6);
Producing the labels:
Step S6: label the generated simulation pictures with LabelMe, including drawing 2D bounding boxes and annotating categories, and save the labels in a specific label format (see fig. 7).
Step S7: organize each simulation picture with its corresponding label file, divide them into a training set, a validation set and a test set in a certain proportion, and complete the data set.
Taking the aluminum alloy cylinder body model of an industrial brush manufacturing machine as an example:
Step S1 specifically comprises:
the material parameters and texture maps of the simulation model for the aluminum alloy cylinder block are, for example, as follows:
newmtl aluminum
Ns 96.078431                    # specular exponent (highlight sharpness)
Ka 0.200000 0.200000 0.200000   # ambient reflectivity
Kd 0.500000 0.500000 0.500000   # diffuse reflectivity
Ks 0.800000 0.800000 0.800000   # specular reflectivity
Ke 0.000000 0.000000 0.000000   # emissive color
Ni 1.000000                     # optical density (index of refraction)
d 1.000000                      # dissolve (fully opaque)
illum 2                         # illumination model with specular highlights
map_Kd cylinder.png             # diffuse texture map
map_Ks cylinder.png             # specular texture map
After import, the texture picture is converted into a diffuse map and a specular map (see fig. 3).
Step S4 specifically includes:
the directional light source light parameters include direction and intensity components:
ourShader.setVec3("dirLight.direction",-0.2f,-1.0f,-0.3f);
ourShader.setVec3("dirLight.ambient",0.2f,0.2f,0.2f);
ourShader.setVec3("dirLight.diffuse",0.5f,0.5f,0.5f);
ourShader.setVec3("dirLight.specular",1.0f,1.0f,1.0f);
The dynamic point light source settings include its motion trajectory, intensity components and attenuation coefficients:
lightPos.x=1.0f+sin(glfwGetTime())*3.0f;
lightPos.y=sin(glfwGetTime()/2.0f)*2.0f;
lightPos.z=1.0f+sin(glfwGetTime())*3.0f;
ourShader.setVec3("light.ambient",0.2f,0.2f,0.2f);
ourShader.setVec3("light.diffuse",0.5f,0.5f,0.5f);
ourShader.setVec3("light.specular",1.0f,1.0f,1.0f);
ourShader.setFloat("light.constant",1.0f);
ourShader.setFloat("light.linear",0.045f);
ourShader.setFloat("light.quadratic",0.0075f);
step S5 specifically includes:
In glm::rotate(model, glm::radians(alpha), glm::vec3(k, m, n)), alpha can be adjusted to change the model pose, and k, m, n correspond to the rotation proportion weights about the x, y and z axes of the model's own coordinate system; the camera position is initialized with camera(glm::vec3(0.0f, 0.0f, 0.0f)) and can be changed in the window through keyboard input, so the camera viewing angle can be changed freely in the three-dimensional simulation environment (see fig. 6).
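The pose change that glm::rotate encodes can be checked with plain math: rotating a point about the normalized axis (k, m, n) by alpha degrees is Rodrigues' rotation formula. The sketch below is an illustrative reimplementation for a single point, not the patent's GLM code.

```cpp
#include <cmath>
#include <array>

using V3 = std::array<float,3>;

// Rotate point p by alphaDeg degrees about the axis (k, m, n), using
// Rodrigues' formula: p' = p cos a + (k x p) sin a + k (k . p)(1 - cos a).
V3 rotateAxisAngle(V3 p, float alphaDeg, V3 axis) {
    float a = alphaDeg * 3.14159265358979f / 180.f;   // degrees -> radians (glm::radians)
    float n = std::sqrt(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
    V3 k = {axis[0]/n, axis[1]/n, axis[2]/n};         // normalize the axis
    float c = std::cos(a), s = std::sin(a);
    float kd = k[0]*p[0] + k[1]*p[1] + k[2]*p[2];     // k . p
    V3 kx = {k[1]*p[2]-k[2]*p[1],                     // k x p
             k[2]*p[0]-k[0]*p[2],
             k[0]*p[1]-k[1]*p[0]};
    return { p[0]*c + kx[0]*s + k[0]*kd*(1-c),
             p[1]*c + kx[1]*s + k[1]*kd*(1-c),
             p[2]*c + kx[2]*s + k[2]*kd*(1-c) };
}
```

Sampling alpha and the axis weights k, m, n at random per rendered frame is what produces the varied workpiece poses across the generated simulation pictures.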
Finally, the generated simulation data set is used for training the neural network, and the trained model is used for positioning and testing the actual workpiece, so that a real-time and stable positioning effect is achieved (see an attached figure 8).
The present invention and its embodiments have been described above, and the description is not intended to be limiting, and the drawings are only one embodiment of the present invention, and the actual structure is not limited thereto. In summary, those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. A deep learning simulation data set manufacturing method based on a CAD model, characterized in that the method comprises two processes, generating simulation pictures and producing labels:
the process of generating the simulation pictures comprises the following steps:
step S1: using Blender, setting texture and material characteristics matching the actual workpiece material on the target CAD model, including texture coordinate mapping, transparency, highlight, reflectivity, etc.;
step S2: importing the produced workpiece simulation model into an OpenGL simulation environment, and enabling settings such as the diffuse map and the specular map;
step S3: collecting a certain number of pictures of the target workpiece's actual industrial application scene as required, and setting them as simulation backgrounds for the simulated workpiece models;
step S4: adding to the simulation environment suitable ambient light, a directional light source at a certain angle, and a dynamic point light source whose position changes continuously in three-dimensional space, all using white light;
step S5: varying the number and pose of the generated simulated workpieces, combining the dynamic light sources from step S4 with the simulated industrial backgrounds from step S3, varying the position and viewing angle of the virtual camera, and capturing and saving a large number of simulation pictures;
the process of producing the labels comprises the following steps:
step S6: labeling the generated simulation pictures with LabelMe, including drawing 2D bounding boxes and annotating categories, and saving the labels in a specific label format;
step S7: organizing each simulation picture with its corresponding label file, dividing them into a training set, a validation set and a test set in a certain proportion, and completing the data set.
2. The method for producing a deep learning simulation data set based on a CAD model as recited in claim 1, characterized in that step S1 specifically comprises: in Blender, adding a texture picture to the CAD model of the target workpiece under the "Texture" tab and setting the texture coordinate mapping parameters in the pull-down menu; under the "Material" tab, setting the diffuse reflection parameters, highlight mode, shading mode, specular reflection parameters, etc. for the CAD model;
and exporting the configured CAD model in obj format while generating an mtl file that stores the material information, the mtl file and the texture picture serving as the input to the OpenGL graphics library.
3. The method for producing a deep learning simulation data set based on a CAD model as recited in claim 1, characterized in that step S4 specifically comprises:
for ambient light, using an ambient light constant that permanently gives the object some color: the light color is multiplied by a small constant ambient factor and by the object's color, and the result is taken as the fragment's color;
for the directional (parallel) light, negating the light direction vector first, because the calculation requires a ray direction from the fragment towards the light source, and normalizing the vector; lightingShader.setVec3 is used to define the direction of the light source;
for the dynamic point light source, computing the attenuation value as a function of the distance between the fragment and the light source by the following formula:
F_att = 1.0 / (Kc + Kl·d + Kq·d²)
where d is the distance between the fragment and the light source, and the three configurable terms, the constant term Kc, the linear term Kl and the quadratic term Kq, are stored in a defined Light structure; the attenuation value is computed by the formula and then multiplied with the ambient, diffuse and specular light components respectively; the dynamic position of the point light source over time is set using a sin(glfwGetTime()) function.
4. The method for producing a deep learning simulation data set based on a CAD model as recited in claim 1, characterized in that step S5 specifically comprises:
creating and drawing single and multiple simulation models through ourShader.setMat4 and ourModel.Draw, using glm::translate, glm::scale and glm::rotate to change the position, size and rotation of the models respectively, and glm::radians to convert angles to radians, including the rotation weight settings for the x, y, z components of the target object's coordinate system; the world coordinate system is transformed to the view space coordinate system by the following LookAt matrix:

LookAt = | Rx Ry Rz 0 |   | 1 0 0 -Px |
         | Ux Uy Uz 0 | × | 0 1 0 -Py |
         | Dx Dy Dz 0 |   | 0 0 1 -Pz |
         | 0  0  0  1 |   | 0 0 0  1  |

where R is the camera's right vector, U its up vector, D its direction vector and P its position.
5. The method for producing a deep learning simulation data set based on a CAD model as recited in claim 1, characterized in that step S6 specifically comprises:
drawing in LabelMe a 2D rectangular bounding box for each model in the simulation picture, from the target's upper-left corner (Xmin, Ymin) as the starting point to the target's lower-right corner (Xmax, Ymax) as the end point, adding the name of the category it belongs to, and saving this in a specific format such as an xml file whose contents include the file name, picture name, resolution, number of channels, category, and bounding-box coordinate information.
CN202011544842.5A 2020-12-23 2020-12-23 Deep learning simulation data set manufacturing method based on CAD model Withdrawn CN112580729A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011544842.5A CN112580729A (en) 2020-12-23 2020-12-23 Deep learning simulation data set manufacturing method based on CAD model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011544842.5A CN112580729A (en) 2020-12-23 2020-12-23 Deep learning simulation data set manufacturing method based on CAD model

Publications (1)

Publication Number Publication Date
CN112580729A true CN112580729A (en) 2021-03-30

Family

ID=75139283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011544842.5A Withdrawn CN112580729A (en) 2020-12-23 2020-12-23 Deep learning simulation data set manufacturing method based on CAD model

Country Status (1)

Country Link
CN (1) CN112580729A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963127A (en) * 2021-12-22 2022-01-21 深圳爱莫科技有限公司 Simulation engine-based model automatic generation method and processing equipment


Similar Documents

Publication Publication Date Title
CN111783525B (en) Aerial photographic image target sample generation method based on style migration
Luna Introduction to 3D game programming with DirectX 10
DeFanti et al. Visualization: Expanding scientific and engineering research opportunities
US5644689A (en) Arbitrary viewpoint three-dimensional imaging method using compressed voxel data constructed by a directed search of voxel data representing an image of an object and an arbitrary viewpoint
CN112509151A (en) Method for generating sense of reality of virtual object in teaching scene
EP0526881A2 (en) Three-dimensional model processing method, and apparatus therefor
JPH06231275A (en) Picture simulation method
CN104537705A (en) Augmented reality based mobile platform three-dimensional biomolecule display system and method
Manovich The automation of sight: from photography to computer vision
CN112580729A (en) Deep learning simulation data set manufacturing method based on CAD model
CN111999030A (en) Three-dimensional oil flow VR (virtual reality) online measurement and display system and working method thereof
Rohe An Optical Test Simulator Based on the Open-Source Blender Software.
Akman et al. Sweeping with all graphical ingredients in a topological picturebook
CN102724535A (en) Displaying method of stereo-scanning 3D (three-dimensional) display
CN116958332B (en) Method and system for mapping 3D model in real time of paper drawing based on image recognition
Muhammadiyev et al. Types of computer graphics and their practical importance in human life
Stacy Computer-aided light sheet flow visualization using photogrammetry
Fumin Design of an Electric Vehicle Modeling and Visualization System Based on Industrial CT and Mixed Reality Technology
Burris TRANSFORMATIONS IN COMPUTER GRAPHICS
Zhang et al. Storage optimization method based on region of interest in part pose recognition
Tor SARM: a computer graphics simulator for generic robot manipulators
Babii Use of augmented reality to build an interactive interior on the ios mobile platform
CN116778051A (en) Laser interference image simulation method, device and storage medium
Papadopoulos Algebraic transformations in computer graphics
Limaye et al. Personal computer-based visualization of three-dimensional scalar and vector fields: An application to molecular graphics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210330