CN110689606B - Method and terminal for calculating raindrop falling position in virtual scene - Google Patents


Info

Publication number
CN110689606B
CN110689606B
Authority
CN
China
Prior art keywords
coordinates
camera
calculating
depth information
transformation matrix
Prior art date
Legal status
Active
Application number
CN201910788134.7A
Other languages
Chinese (zh)
Other versions
CN110689606A (en)
Inventor
林进浔
黄明炜
Current Assignee
Fujian Shuboxun Information Technology Co ltd
Original Assignee
Fujian Shuboxun Information Technology Co ltd
Application filed by Fujian Shuboxun Information Technology Co ltd
Priority to CN201910788134.7A
Publication of CN110689606A
Application granted
Publication of CN110689606B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

According to the method and terminal for calculating the raindrop falling position in a virtual scene, the depth information of the raindrop occluders in the virtual scene is rendered onto a preset render target to obtain a depth information map; the coordinates of preset points in the currently rendered frame are determined, and a first transformation matrix of the current scene camera from screen space to world space is acquired; the falling position of the raindrops in world space is then calculated from the depth information map, the preset-point coordinates and the first transformation matrix. This eliminates the large number of collision-detection computations originally required in every frame, reducing both the amount of computation and the performance cost. Furthermore, conventional collision detection must take the positions in the previous and next frames into account and is therefore prone to error, whereas calculating the falling position from depth information is unaffected by the raindrop speed, which improves accuracy.

Description

Method and terminal for calculating raindrop falling position in virtual scene
Technical Field
The invention relates to the field of computer graphics, in particular to a method and a terminal for calculating the raindrop falling position in a virtual scene.
Background
Computer graphics (Computer Graphics, CG for short) is the science of using mathematical algorithms to convert two- and three-dimensional graphics into a raster form suitable for computer display. Briefly, computer graphics mainly studies how graphics are represented in a computer, together with the principles and algorithms for computing, processing and displaying graphics with a computer.
When simulating rain in a virtual scene, the rainfall position is usually calculated by detecting collision points through collision detection, which requires a large number of particles; the amount of computation is large and the precision is low. Under the conventional collision-detection scheme, each particle must be fitted with a collision box, and in every rendered frame thousands of particles must be intersected against occluders. The computation is complex, the performance cost is high, and the scheme cannot run on mobile platforms.
Disclosure of Invention
First, the technical problem to be solved
In order to solve the problems in the prior art, the invention provides a method and a terminal for calculating the falling position of raindrops in a virtual scene, which reduce the amount of computation and the performance cost and improve accuracy.
(II) technical scheme
In order to achieve the above purpose, one technical solution adopted by the invention comprises:
A method for calculating the raindrop falling position in a virtual scene, comprising the steps of:
S1, rendering the depth information of the raindrop occluders in the virtual scene onto a preset render target to obtain a depth information map;
S2, determining the coordinates of preset points in the currently rendered frame, and acquiring a first transformation matrix of the current scene camera from screen space to world space;
S3, calculating the falling position of the raindrops in world space from the depth information map, the preset-point coordinates and the first transformation matrix.
In order to achieve the above object, another technical solution adopted by the present invention comprises:
A computing terminal for the raindrop falling position in a virtual scene, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
S1, rendering the depth information of the raindrop occluders in the virtual scene onto a preset render target to obtain a depth information map;
S2, determining the coordinates of preset points in the currently rendered frame, and acquiring a first transformation matrix of the current scene camera from screen space to world space;
S3, calculating the falling position of the raindrops in world space from the depth information map, the preset-point coordinates and the first transformation matrix.
(III) beneficial effects
The invention has the beneficial effects that: the depth information of the raindrop occluders in the virtual scene is rendered onto a preset render target to obtain a depth information map; the coordinates of preset points in the currently rendered frame are determined, and a first transformation matrix of the current scene camera from screen space to world space is acquired; the falling position of the raindrops in world space is then calculated from the depth information map, the preset-point coordinates and the first transformation matrix. This eliminates the large number of collision-detection computations originally required in every frame, reducing both the amount of computation and the performance cost. Furthermore, conventional collision detection must take the positions in the previous and next frames into account and is therefore prone to error, whereas calculating the falling position from depth information is unaffected by the raindrop speed, which improves accuracy.
Drawings
FIG. 1 is a flowchart of a method for calculating a raindrop falling position in a virtual scene according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a computing terminal for the raindrop falling position in a virtual scene according to an embodiment of the present invention.
[ reference numerals description ]
1: a computing terminal for the falling position of the raindrops in the virtual scene;
2: a memory;
3: a processor.
Detailed Description
The invention will be better explained by the following detailed description of the embodiments with reference to the drawings.
Example 1
Referring to fig. 1, a method for calculating the raindrop falling position in a virtual scene includes the steps of:
S1, rendering the depth information of the raindrop occluders in the virtual scene onto a preset render target to obtain a depth information map;
the step S1 specifically includes:
s11, adding a first camera into a virtual scene, and rendering a preset point shielding piece in the virtual scene to a preset target through the first camera to obtain a depth information map.
The step S11 specifically includes:
s111, adding a first camera into a virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to parameters of the first camera and an orthogonal projection formula;
s112, starting a depth test, calculating the coordinates of the vertexes of the raindrop shields in the projection space of the first camera through the coordinates of the vertexes of the second transformation matrix and the preset point shields in the world space, and transmitting the coordinates to a coloring device;
and S113, the color device stores coordinates of the vertexes of the raindrop shields in the projection space of the first camera as depth information through colors, and a depth information map is obtained.
S2, determining the coordinates of preset points in the currently rendered frame, and acquiring a first transformation matrix of the current scene camera from screen space to world space;
the step S2 specifically comprises the following steps:
s21, determining the size of a current rendering picture, and establishing a coordinate system to obtain coordinates of preset points;
s22, acquiring a first transformation matrix of the current scene camera from the screen space to the world space.
S3, calculating the falling position of the raindrops in world space from the depth information map, the preset-point coordinates and the first transformation matrix.
The step S3 specifically comprises the following steps:
s31, calculating to obtain the coordinates of the preset points in a screen space under a first camera according to the coordinates of the preset points and the first transformation matrix;
s32, randomly selecting a plurality of coordinate values from the coordinates of the preset points in the screen space under the first camera to serve as screen coordinates of the raindrops, and calculating the screen coordinates corresponding to the raindrops with the minimum depth values from the screen coordinates of all the raindrops according to the depth information map;
s33, calculating to obtain an inverse matrix of the second transformation matrix according to the second transformation matrix;
s34, obtaining world coordinates according to screen coordinates corresponding to the raindrops with the minimum depth value and the inverse matrix.
Example two
This embodiment differs from embodiment one in that it further explains, with reference to a specific application scenario, how the above method for calculating the raindrop falling position in a virtual scene is implemented:
S1, rendering the depth information of the raindrop occluders in the virtual scene onto a preset render target to obtain a depth information map;
Specifically, the preset target is a separate render target;
the step S1 specifically includes:
s11, adding a first camera into a virtual scene, and rendering a preset point shielding piece in the virtual scene to a preset target through the first camera to obtain a depth information map.
The step S11 specifically includes:
s111, adding a first camera into a virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to parameters of the first camera and an orthogonal projection formula;
specifically, the parameters of the first camera include the position, the direction, the near-platform distance, the far-plane distance and the view port size of the first camera in world space, the position of the first camera is set as the position of a raining starting point, the rotation is set as the raining direction, the near-plane is set to 0.1, the far-plane is set as the maximum height of raining, and the view port size is set as the size of a rectangular area requiring raining in the virtual scene;
s112, starting a depth test, calculating the coordinates of the vertexes of the raindrop shields in the projection space of the first camera through the coordinates of the vertexes of the second transformation matrix and the preset point shields in the world space, and transmitting the coordinates to a coloring device;
specifically, a depth test is started in a rendering state to ensure that an object with the smallest depth value can be drawn to the forefront, and the depth values of other objects are replaced in a preset target;
and S113, the color device stores coordinates of the vertexes of the raindrop shields in the projection space of the first camera as depth information through colors, and a depth information map is obtained.
Specifically, in the vertex shader, the position of every vertex of the raindrop-occluder models in the virtual scene must be calculated in the first camera's projection space; this position is denoted POS0.
POS0 is a four-component parameter whose components are defined as (x, y, z, w). A depth value is obtained by dividing the z component of POS0 by the w component and is denoted DEPTH. During rendering, DEPTH lies in the range -1 to 1; since the depth information must be stored as a color, this range has to be converted to 0 to 1, which is done by DEPTH = DEPTH * 0.5 + 0.5. The result is stored in the depth information map as the final depth value, recorded as DEPTH0;
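Continuing the same assumed names, the depth packing just described could be sketched as follows (in the patent this happens per vertex inside the shader; here a single assumed vertex is transformed on the CPU):

```python
# Assumed occluder vertex in world space (homogeneous coordinates).
vertex_world = np.array([10.0, 0.0, 5.0, 1.0])
pos0 = second_matrix @ vertex_world  # POS0: the vertex in the first camera's projection space
depth = pos0[2] / pos0[3]            # DEPTH = z / w, lies in the range -1 to 1
depth0 = depth * 0.5 + 0.5           # DEPTH0: remapped to 0..1 so it can be stored as a color
```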
s2, determining coordinates of preset points in a current rendering picture, and acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
the step S2 specifically comprises the following steps:
s21, determining the size of a current rendering picture, and establishing a coordinate system to obtain coordinates of preset points;
s22, acquiring a first transformation matrix of the current scene camera from the screen space to the world space.
Specifically, the size of the currently rendered frame must first be determined and recorded as SCREEN_SIZE, so that the upper-left corner of the screen has coordinates (0, 0) and the upper-right corner has coordinates (SCREEN_SIZE.x, 0); the first transformation matrix of the current scene camera from screen space to world space is then obtained;
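The bookkeeping of this step might look as follows; a SCREEN_SIZE of 1920 x 1080 is an assumption, and first_matrix is a placeholder for the engine-provided screen-to-world matrix of the scene camera, whose actual value depends on the engine and the current view:

```python
SCREEN_SIZE = np.array([1920.0, 1080.0])                 # size of the currently rendered frame (assumed)
upper_left = np.array([0.0, 0.0, 0.0, 1.0])              # screen corner (0, 0)
upper_right = np.array([SCREEN_SIZE[0], 0.0, 0.0, 1.0])  # screen corner (SCREEN_SIZE.x, 0)
first_matrix = np.eye(4)  # placeholder for the scene camera's screen-to-world matrix
```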
and S3, calculating according to the depth information map, coordinates of preset points and the first transformation matrix to obtain the falling position of the raindrops in the world space.
The step S3 specifically comprises the following steps:
s31, calculating to obtain the coordinates of the preset points in a screen space under a first camera according to the coordinates of the preset points and the first transformation matrix;
specifically, multiplying the upper left corner coordinate and the upper right corner coordinate by a first transformation matrix to obtain the position of the upper left corner coordinate in world space as LPOS and the position of the upper right corner coordinate in world space as RPOS, and then multiplying the LPOS and the RPOS by the first transformation matrix to obtain the coordinates of LSCREEN_POS and RSCREEN_POS in screen space under a first camera;
s32, randomly selecting a plurality of coordinate values from the coordinates of the preset points in the screen space under the first camera to serve as screen coordinates of the raindrops, and calculating the screen coordinates corresponding to the raindrops with the minimum depth values from the screen coordinates of all the raindrops according to the depth information map;
specifically, the values of the x components are randomly sampled in the LSCREEN_POS and the RSCREEN_POS, the number of sampling points is the number of raindrop places, the number of the raindrop places can be determined according to the size of the raindrops, the position of each sampling can be randomly carried out to achieve the effect of RANDOM drop point positions, the RANDOM points are recorded as RANDOM_POS, a DEPTH information graph is read from a GPU (graphics processing unit) to a CPU (Central processing unit), the x components of the RANDOM points are taken as indexes, and the minimum values in different ordinate coordinates are obtained from data in the DEPTH information graph and are taken as DEPTH_MIN;
s33, calculating to obtain an inverse matrix of the second transformation matrix according to the second transformation matrix;
s34, obtaining world coordinates according to screen coordinates corresponding to the raindrops with the minimum depth value and the inverse matrix.
Specifically, the world-space position of the minimum depth value DEPTH_MIN is back-calculated from the screen coordinates corresponding to DEPTH_MIN and the inverse of the second transformation matrix, denoted MAT1. The screen coordinate corresponding to the minimum depth value is a four-component coordinate whose x component is the x component of DEPTH1, whose y component is MIN_V, whose z component is DEPTH_MIN and whose w component is 1; it is converted into the interval from 0 to 1 so that the back-calculation from projection space to world space can be performed. Multiplying this screen coordinate by MAT1 gives the world-space position of the minimum depth, POS1, and POS1 is the position of the raindrop landing point.
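Finally, the back-calculation of S33 and S34 could be sketched as below, under the same assumed names. Since DEPTH0 was stored in the 0-to-1 range, this sketch maps the components back to -1..1 before applying the inverse matrix MAT1, which is one common convention for inverting the projection:

```python
mat1 = np.linalg.inv(second_matrix)  # MAT1: the inverse of the second transformation matrix

i = 0  # back-project the first sampled drop point
screen_coord = np.array([xs[i] / 512.0,     # x component, normalized to 0..1
                         min_v[i] / 512.0,  # y component: MIN_V, normalized to 0..1
                         depth_min[i],      # z component: DEPTH_MIN (already in 0..1)
                         1.0])              # w component
ndc = screen_coord.copy()
ndc[:3] = ndc[:3] * 2.0 - 1.0               # 0..1 -> -1..1
pos1 = mat1 @ ndc
pos1 = pos1 / pos1[3]                       # POS1: the raindrop landing position in world space
```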
In this way, the method accurately calculates the position of the raindrop landing point by taking the scene depth map rendered by the first camera as a reference and performing matrix calculations between the camera that renders the game picture and the first camera, based on the principle that world-space positions are invariant; compared with a collision-detection scheme, a large amount of per-frame collision-detection computation is eliminated. Moreover, in collision detection the positions in each frame and the next are affected by the rain speed, so errors may arise, and the collision-point position cannot reach the accuracy achieved by the present invention.
Example III
Referring to fig. 2, a computing terminal 1 for the raindrop falling position in a virtual scene includes a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3; the processor 3 implements the steps of embodiment one when executing the program.
The foregoing description presents only embodiments of the present invention and is not intended to limit the scope of the invention; all equivalent changes made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in related technical fields, are likewise included within the scope of the present invention.

Claims (4)

1. A method for calculating the raindrop falling position in a virtual scene, characterized by comprising the steps of:
S1, rendering the depth information of the raindrop occluders in the virtual scene onto a preset render target to obtain a depth information map;
S2, determining the coordinates of preset points in the currently rendered frame, and acquiring a first transformation matrix of the current scene camera from screen space to world space;
S3, calculating the falling position of the raindrops in world space from the depth information map, the preset-point coordinates and the first transformation matrix;
the step S1 specifically includes:
S11, adding a first camera to the virtual scene, and rendering the raindrop occluders in the virtual scene onto the preset render target through the first camera to obtain the depth information map;
the step S11 specifically includes:
S111, adding a first camera to the virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to the parameters of the first camera and the orthographic projection formula;
S112, enabling the depth test, calculating the coordinates of the occluder vertices in the first camera's projection space from the second transformation matrix and the occluder vertices' coordinates in world space, and passing them to the shader;
S113, the shader storing the coordinates of the occluder vertices in the first camera's projection space as depth information encoded in color, obtaining the depth information map;
the step S3 specifically includes:
S31, calculating the preset points' coordinates in the first camera's screen space from the preset points' coordinates and the first transformation matrix;
S32, randomly selecting a plurality of coordinate values between the preset points' coordinates in the first camera's screen space to serve as the raindrops' screen coordinates, and determining, from the screen coordinates of all the raindrops and according to the depth information map, the screen coordinates corresponding to the minimum depth value;
S33, calculating the inverse matrix of the second transformation matrix;
S34, obtaining the world coordinates from the screen coordinates corresponding to the minimum depth value and the inverse matrix.
2. The method for calculating the raindrop falling position in a virtual scene according to claim 1, wherein step S2 specifically comprises:
S21, determining the size of the currently rendered frame and establishing a coordinate system to obtain the coordinates of the preset points;
S22, acquiring the first transformation matrix of the current scene camera from screen space to world space.
3. A computing terminal for the raindrop falling position in a virtual scene, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the program:
S1, rendering the depth information of the raindrop occluders in the virtual scene onto a preset render target to obtain a depth information map;
S2, determining the coordinates of preset points in the currently rendered frame, and acquiring a first transformation matrix of the current scene camera from screen space to world space;
S3, calculating the falling position of the raindrops in world space from the depth information map, the preset-point coordinates and the first transformation matrix;
the step S1 specifically includes:
S11, adding a first camera to the virtual scene, and rendering the raindrop occluders in the virtual scene onto the preset render target through the first camera to obtain the depth information map;
the step S11 specifically includes:
S111, adding a first camera to the virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to the parameters of the first camera and the orthographic projection formula;
S112, enabling the depth test, calculating the coordinates of the occluder vertices in the first camera's projection space from the second transformation matrix and the occluder vertices' coordinates in world space, and passing them to the shader;
S113, the shader storing the coordinates of the occluder vertices in the first camera's projection space as depth information encoded in color, obtaining the depth information map;
the step S3 specifically includes:
S31, calculating the preset points' coordinates in the first camera's screen space from the preset points' coordinates and the first transformation matrix;
S32, randomly selecting a plurality of coordinate values between the preset points' coordinates in the first camera's screen space to serve as the raindrops' screen coordinates, and determining, from the screen coordinates of all the raindrops and according to the depth information map, the screen coordinates corresponding to the minimum depth value;
S33, calculating the inverse matrix of the second transformation matrix;
S34, obtaining the world coordinates from the screen coordinates corresponding to the minimum depth value and the inverse matrix.
4. The computing terminal for the raindrop falling position in a virtual scene according to claim 3, wherein step S2 specifically comprises:
S21, determining the size of the currently rendered frame and establishing a coordinate system to obtain the coordinates of the preset points;
S22, acquiring the first transformation matrix of the current scene camera from screen space to world space.
CN201910788134.7A 2019-08-26 2019-08-26 Method and terminal for calculating raindrop falling position in virtual scene Active CN110689606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910788134.7A CN110689606B (en) 2019-08-26 2019-08-26 Method and terminal for calculating raindrop falling position in virtual scene


Publications (2)

Publication Number Publication Date
CN110689606A CN110689606A (en) 2020-01-14
CN110689606B (en) 2023-06-30



Country Status (1)

Country Link
CN (1) CN110689606B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657074A (en) * 2021-08-13 2021-11-16 杭州安恒信息技术股份有限公司 Linear text layout method in three-dimensional space, electronic device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6014144A (en) * 1998-02-03 2000-01-11 Sun Microsystems, Inc. Rapid computation of local eye vectors in a fixed point lighting unit
US7692647B2 (en) * 2006-09-14 2010-04-06 Microsoft Corporation Real-time rendering of realistic rain
US11127194B2 (en) * 2016-10-25 2021-09-21 Sony Corporation Image processing apparatus and image processing method
US20180211434A1 (en) * 2017-01-25 2018-07-26 Advanced Micro Devices, Inc. Stereo rendering
CN108022283A (en) * 2017-12-06 2018-05-11 北京像素软件科技股份有限公司 Rainwater analogy method, device and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN110689606A (en) 2020-01-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant