CN112750188B - Method and terminal for automatically rendering object - Google Patents

Method and terminal for automatically rendering object

Info

Publication number
CN112750188B
CN112750188B (application CN201911034055.3A)
Authority
CN
China
Prior art keywords
rendering
camera
world coordinate
rendered
model
Prior art date
Legal status
Active
Application number
CN201911034055.3A
Other languages
Chinese (zh)
Other versions
CN112750188A (en)
Inventor
刘德建
谢曦
林琛
Current Assignee
Fujian TQ Digital Co Ltd
Original Assignee
Fujian TQ Digital Co Ltd
Priority date
Filing date
Publication date
Application filed by Fujian TQ Digital Co Ltd
Priority to CN201911034055.3A
Publication of CN112750188A
Application granted
Publication of CN112750188B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a terminal for automatically rendering objects. The method acquires the three-dimensional world coordinates of all vertex positions in a model to be rendered and records them in a world coordinate vertex array; automatically determines the placement position and view angle of a rendering camera according to the world coordinate vertex array; calls a rendering command to render the model according to that placement position and view angle; and repeats these steps for each object until all objects are processed. By obtaining the three-dimensional world coordinates of all vertex positions in each model to be rendered and then determining the placement position and view angle of the rendering camera, the position and angle of the camera can be set automatically, realizing quick batch rendering and saving a great deal of manpower and time.

Description

Method and terminal for automatically rendering object
Technical Field
The invention relates to the technical field of three-dimensional modeling, in particular to a method and a terminal for automatically rendering objects.
Background
Today, 3D images are used ever more widely, and 3D software is often used to create virtual object models for display, such as props in electronic games, 3D artwork showcases, and product model displays. In practice it is sometimes necessary to render a large batch of models at a time, each from several different angles. Because the models differ in size, shape, and position, the position and angle of the camera must be set manually for each model during rendering, which consumes a great deal of manpower and time.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method and a terminal for automatically rendering objects that can automatically set the position and angle of the camera so as to realize quick rendering.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method of automatically rendering an object, comprising the steps of:
s1, acquiring three-dimensional world coordinates of all vertex positions in a model to be rendered, and recording the three-dimensional world coordinates in a world coordinate vertex array;
s2, automatically determining the placement position and the view angle of the rendering camera according to the world coordinate vertex array;
s3, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
and S4, rendering all the objects one by one according to the steps S1 to S3 until all the objects are processed.
In order to solve the technical problems, the invention adopts another technical scheme that:
a terminal for automatically rendering an object, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
s1, acquiring three-dimensional world coordinates of all vertex positions in a model to be rendered, and recording the three-dimensional world coordinates in a world coordinate vertex array;
s2, automatically determining the placement position and the view angle of the rendering camera according to the world coordinate vertex array;
s3, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
and S4, rendering all the objects one by one according to the steps S1 to S3 until all the objects are processed.
The invention has the beneficial effects that: a method and a terminal for automatically rendering objects are provided, wherein three-dimensional world coordinates of all vertex positions in each model to be rendered are obtained, and then the placement position and the view angle of a rendering camera are determined so as to complete the rendering of the model to be rendered, namely, the position and the angle of the camera can be automatically set so as to realize quick rendering, thereby saving a great deal of manpower and time.
Drawings
FIG. 1 is a flow chart of a method for automatically rendering objects according to an embodiment of the invention;
FIG. 2 is a schematic diagram illustrating a method for automatically rendering objects according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a terminal for automatically rendering an object according to an embodiment of the present invention.
Description of the reference numerals:
1. a terminal for automatically rendering an object; 2. a processor; 3. a memory.
Detailed Description
In order to describe the technical contents, the achieved objects and effects of the present invention in detail, the following description will be made with reference to the embodiments in conjunction with the accompanying drawings.
Referring to fig. 1 to 2, a method for automatically rendering an object includes the steps of:
s1, acquiring three-dimensional world coordinates of all vertex positions in a model to be rendered, and recording the three-dimensional world coordinates in a world coordinate vertex array;
s2, automatically determining the placement position and the view angle of the rendering camera according to the world coordinate vertex array;
s3, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
and S4, rendering all the objects one by one according to the steps S1 to S3 until all the objects are processed.
From the above description, the beneficial effects of the invention are as follows: the three-dimensional world coordinates of all vertex positions in each model to be rendered are obtained, and then the placement positions and the view angles of the rendering cameras are determined so as to finish the rendering of the model to be rendered, namely, the positions and the angles of the cameras can be automatically set so as to realize quick rendering, thereby saving a great deal of manpower and time.
Further, the step S2 specifically includes:
s21, generating a world coordinate bounding box of the model to be rendered according to the world coordinate vertex array, and acquiring the diagonal length of the world coordinate bounding box, wherein the world coordinate bounding box comprises three-dimensional world coordinates of all vertex positions of the model to be rendered;
s22, traversing the world coordinate vertex array, and converting the world coordinate vertex array into a screen space coordinate system corresponding to the rendering camera to obtain a screen coordinate vertex array;
s23, generating a screen coordinate bounding box of the model to be rendered according to the screen coordinate vertex array, acquiring a three-dimensional screen coordinate of a central position of the screen coordinate bounding box, and converting the three-dimensional screen coordinate of the central position into a three-dimensional world coordinate, wherein the screen coordinate bounding box comprises three-dimensional screen coordinates of all vertex positions of the model to be rendered;
s24, calculating and obtaining a placement position of the rendering camera, wherein the placement position of the rendering camera is equal to a three-dimensional world coordinate + object distance of the central position, the object distance = standard object distance is the diagonal length of the world coordinate bounding box, and the camera position vector points to the direction of the rendering camera position from the central position;
and S25, traversing the world coordinate vertex array, and calculating the included angle between the camera position vector and each vector from a vertex position of the world coordinate vertex array to the placement position of the rendering camera, so as to obtain the view angle of the rendering camera.
From the above description, the placement position and the view angle of the rendering camera can be obtained quickly and accurately.
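The geometric core of steps S21 to S24 can be sketched in plain Python. This is an illustrative sketch only, not the patent's engine code; `standard_distance` and `cam_direction` are assumed parameters (the patent calls them "standard object distance" and "camera position vector" without fixing values):

```python
import math

def bounding_box(vertices):
    """Axis-aligned bounding box of a list of (x, y, z) world coordinates (S21)."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def diagonal_length(box):
    """Diagonal length of the bounding box, used as a scalar for model size."""
    (x0, y0, z0), (x1, y1, z1) = box
    return math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2)

def camera_position(center, cam_direction, diag, standard_distance=2.0):
    """S24: place the camera at center + (standard object distance x diagonal)
    along cam_direction, a unit vector pointing from the model center toward
    the camera. standard_distance is an assumed tuning constant."""
    object_distance = standard_distance * diag
    return tuple(c + object_distance * u for c, u in zip(center, cam_direction))
```

For a unit cube the diagonal is √3, so with `standard_distance=2.0` and `cam_direction=(0, 0, 1)` the camera lands 2√3 above the box center along z; a larger model automatically pushes the camera proportionally farther away.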
Further, the step S25 specifically includes:
directing the z-axis of the rendering camera toward the center position;
traversing the world coordinate vertex array, and calculating the included angle between the camera position vector and each vector from a vertex position of the world coordinate vertex array to the placement position of the rendering camera, so as to obtain the maximum included angle;
and calculating and obtaining the view angle of the rendering camera, wherein the view angle of the rendering camera = (maximum included angle + margin angle) × 2, and the margin angle is used for controlling the margin from the object to the edge of the picture.
From the above description, the z-axis direction of the rendering camera is oriented to the center position, so that the whole model to be rendered is exactly displayed in the center of the screen range of the rendering camera, and the view angle of the rendering camera is obtained more quickly and conveniently.
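The angle computation of step S25 can likewise be sketched. This is an illustration under the assumption that `cam_direction` is a unit vector pointing from the model center toward the camera and that `margin_deg` is an assumed default for the margin angle:

```python
import math

def view_angle(vertices, cam_pos, cam_direction, margin_deg=5.0):
    """S25: field of view that just encloses every vertex.

    For each vertex, measure the angle (degrees) between the vector from the
    vertex to the camera and the camera position vector cam_direction; the
    view angle is twice the maximum such angle plus the margin angle that
    controls the blank border around the object. margin_deg is an assumed
    default, not a value fixed by the patent.
    """
    max_angle = 0.0
    for v in vertices:
        to_cam = tuple(c - p for c, p in zip(cam_pos, v))  # vertex -> camera
        dot = sum(a * b for a, b in zip(to_cam, cam_direction))
        norm = math.sqrt(sum(a * a for a in to_cam))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        max_angle = max(max_angle, angle)
    return (max_angle + margin_deg) * 2
```

Doubling the maximum angle works because the camera's z-axis points at the model center, so the frustum is symmetric about the viewing direction; the margin angle then widens it slightly so no vertex touches the picture edge.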
Further, the step S3 specifically includes:
s31, setting a background color of the rendering camera;
s32, creating a render texture object of the rendering camera;
s33, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
s34, reading picture pixel information from the render Texture object of the rendering camera to the Texture2D object, and inputting the picture pixel information into the PNG file through the PNG encoder to complete the rendering process.
From the above description, it can be seen that a practical rendering scheme is provided: the background color serves as the background of the rendered image and can be adjusted according to actual requirements, and the RenderTexture object saves the rendered screen image for subsequent use.
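The "PNG encoder" of step S34 is engine-provided in the patent. As a hedged stand-in showing what that final step amounts to, a minimal stdlib-only PNG writer for 8-bit RGB pixel rows can be sketched as follows (illustrative only, not the patent's implementation):

```python
import struct
import zlib

def write_png(path, width, height, rgb_rows):
    """Minimal PNG encoder: write 8-bit RGB pixel rows (each row a bytes
    object of length 3 * width) to a PNG file, standing in for the PNG
    encoder mentioned in step S34."""
    def chunk(tag, data):
        # Each PNG chunk: 4-byte length, tag, payload, CRC32 over tag+payload.
        return (struct.pack(">I", len(data)) + tag + data
                + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)  # 8-bit RGB
    raw = b"".join(b"\x00" + row for row in rgb_rows)  # filter type 0 per scanline
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
                + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))
```

In a real engine the pixel rows would come from reading the RenderTexture back into a Texture2D; here they are just raw bytes, which keeps the sketch self-contained.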
Further, the step S4 specifically includes: and rendering each model to be rendered of the model list to be rendered one by one according to the steps S1 to S3 until all objects are processed.
As can be seen from the above description, all the models to be rendered are placed in one list, and then the models to be rendered in the list are sequentially obtained to complete rendering of all the models to be rendered.
Referring to fig. 3, a terminal for automatically rendering an object includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
s1, acquiring three-dimensional world coordinates of all vertex positions in a model to be rendered, and recording the three-dimensional world coordinates in a world coordinate vertex array;
s2, automatically determining the placement position and the view angle of the rendering camera according to the world coordinate vertex array;
s3, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
and S4, rendering all the objects one by one according to the steps S1 to S3 until all the objects are processed.
From the above description, the beneficial effects of the invention are as follows: the three-dimensional world coordinates of all vertex positions in each model to be rendered are obtained, and then the placement positions and the view angles of the rendering cameras are determined so as to finish the rendering of the model to be rendered, namely, the positions and the angles of the cameras can be automatically set so as to realize quick rendering, thereby saving a great deal of manpower and time.
Further, the step S2 specifically includes:
s21, generating a world coordinate bounding box of the model to be rendered according to the world coordinate vertex array, and acquiring the diagonal length of the world coordinate bounding box, wherein the world coordinate bounding box comprises three-dimensional world coordinates of all vertex positions of the model to be rendered;
s22, traversing the world coordinate vertex array, and converting the world coordinate vertex array into a screen space coordinate system corresponding to the rendering camera to obtain a screen coordinate vertex array;
s23, generating a screen coordinate bounding box of the model to be rendered according to the screen coordinate vertex array, acquiring a three-dimensional screen coordinate of a central position of the screen coordinate bounding box, and converting the three-dimensional screen coordinate of the central position into a three-dimensional world coordinate, wherein the screen coordinate bounding box comprises three-dimensional screen coordinates of all vertex positions of the model to be rendered;
s24, calculating and obtaining a placement position of the rendering camera, wherein the placement position of the rendering camera is equal to a three-dimensional world coordinate + object distance of the central position, the object distance = standard object distance is the diagonal length of the world coordinate bounding box, and the camera position vector points to the direction of the rendering camera position from the central position;
and S25, traversing the world coordinate vertex array, and calculating the included angle between the camera position vector and each vector from a vertex position of the world coordinate vertex array to the placement position of the rendering camera, so as to obtain the view angle of the rendering camera.
From the above description, the placement position and the view angle of the rendering camera can be obtained quickly and accurately.
Further, the step S25 specifically includes:
directing the z-axis of the rendering camera toward the center position;
traversing the world coordinate vertex array, and calculating the included angle between the camera position vector and each vector from a vertex position of the world coordinate vertex array to the placement position of the rendering camera, so as to obtain the maximum included angle;
and calculating and obtaining the view angle of the rendering camera, wherein the view angle of the rendering camera = (maximum included angle + margin angle) × 2, and the margin angle is used for controlling the margin from the object to the edge of the picture.
From the above description, the z-axis direction of the rendering camera is oriented to the center position, so that the whole model to be rendered is exactly displayed in the center of the screen range of the rendering camera, and the view angle of the rendering camera is obtained more quickly and conveniently.
Further, the step S3 specifically includes:
s31, setting a background color of the rendering camera;
s32, creating a render texture object of the rendering camera;
s33, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
s34, reading picture pixel information from the render Texture object of the rendering camera to the Texture2D object, and inputting the picture pixel information into the PNG file through the PNG encoder to complete the rendering process.
From the above description, it can be seen that a practical rendering scheme is provided: the background color serves as the background of the rendered image and can be adjusted according to actual requirements, and the RenderTexture object saves the rendered screen image for subsequent use.
Further, the step S4 specifically includes: and rendering each model to be rendered of the model list to be rendered one by one according to the steps S1 to S3 until all objects are processed.
As can be seen from the above description, all the models to be rendered are placed in one list, and then the models to be rendered in the list are sequentially obtained to complete rendering of all the models to be rendered.
Referring to fig. 1 to 2, a first embodiment of the present invention is as follows:
a method of automatically rendering an object, comprising the steps of:
s1, acquiring three-dimensional world coordinates of all vertex positions in a model to be rendered, and recording the three-dimensional world coordinates in a world coordinate vertex array wVertexList;
s2, automatically determining a placement position wCamposition and a view angle fov of a rendering camera cam according to a world coordinate vertex array wVertexList;
in this embodiment, as can be seen from fig. 2, the step S2 specifically includes the following steps:
s21, generating a world coordinate bounding box wBoundingBox of the model to be rendered according to a world coordinate vertex array wVertexList, acquiring the diagonal length of the world coordinate bounding box wBoundingBox, wherein the world coordinate bounding box wBoundingBox comprises three-dimensional world coordinates of all vertex positions of the model to be rendered, and the diagonal length is recorded in wbSize and is used as a scalar of the model size;
s22, traversing the world coordinate vertex array wVertexList, and converting the world coordinate vertex array wVertexList into a screen space coordinate system corresponding to the rendering camera cam to obtain a screen coordinate vertex array sVertexList;
s23, generating a screen coordinate bounding box sBoundingBox of the model to be rendered according to a screen coordinate vertex array sVertexList, acquiring a three-dimensional screen coordinate of a central position of the screen coordinate bounding box sBoundingBox, converting the three-dimensional screen coordinate of the central position into a three-dimensional world coordinate, wherein the screen coordinate bounding box sBoundingBox comprises three-dimensional screen coordinates of all vertex positions of the model to be rendered, and the three-dimensional world coordinate of the central position is recorded in a wCenter;
s24, calculating and obtaining a placement position wCamposition of the rendering camera, wherein the placement position wCamposition of the rendering camera is equal to a three-dimensional world coordinate wCenter+object distance of a central position, a camera position vector camDirect, object distance = standard object distance, a diagonal length wbSize of a world coordinate bounding box, and the camera position vector camDirect points to a direction of the rendering camera cam position from the central position;
s25, traversing the world coordinate vertex array wVertexList from the z-axis direction of the rendering camera cam towards the center position wCenter, calculating the included angles between vectors from all vertex positions of the world coordinate vertex array wVertexList to the placement position wCamposition of the rendering camera cam and the camera position vector camDirection to obtain a maximum included angle maxAngle, calculating and obtaining a view angle fov of the rendering camera cam, wherein the view angle fov = (the maximum included angle maxAngle+margin angle) is 2, the margin angle margin is used for controlling the margin from an object to the edge of a picture, and the larger the value of the margin angle margin is, the more margin is left around the rendering image object;
after the placement position wCamPosition and the view angle fov of the rendering camera cam are obtained, the rendering camera cam is placed at the placement position wCamPosition and the model to be rendered is rendered with the view angle fov;
s3, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera cam;
in this embodiment, the step S3 specifically includes the following steps:
s31, setting a background color of a rendering camera cam, wherein the value of the background color can be adjusted according to actual requirements;
s32, creating a render text object of the rendering camera cam, wherein the render text object is used as a rendering target map of the rendering camera cam and used for recording rendering pictures;
s33, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera cam;
s34, reading picture pixel information from a render Texture object of a rendering camera cam to a Texture2D object, and inputting the picture pixel information into a PNG file through a PNG encoder to complete a rendering process;
and S4, rendering all the objects one by one according to the steps S1 to S3 until all the objects are processed.
In this embodiment, the step S4 specifically includes the following steps:
and rendering each model to be rendered of the model list to be rendered object list one by one according to the steps S1 to S3 until all objects are processed.
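The overall loop of steps S1 to S4 can be sketched as a driver over the to-render model list; `get_vertices`, `place_camera`, and `render` are hypothetical hooks standing in for the engine calls described above, not names used by the patent:

```python
def render_batch(models, get_vertices, place_camera, render):
    """S1-S4 as a driver loop over the to-render model list: for each model,
    gather its world-space vertices, derive the camera placement and view
    angle from them, then render; repeat until every model is processed."""
    outputs = []
    for model in models:
        w_vertex_list = get_vertices(model)           # S1: world coordinate vertex array
        cam_pos, fov = place_camera(w_vertex_list)    # S2: placement position + view angle
        outputs.append(render(model, cam_pos, fov))   # S3: invoke the rendering command
    return outputs                                    # S4: all models processed in order
```

Because the camera parameters are recomputed per model, models of any size or position in the list render without manual camera adjustment, which is the batch-rendering benefit the patent claims.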
Referring to fig. 3, a second embodiment of the present invention is as follows:
a terminal 1 for automatically rendering objects, comprising a memory 3, a processor 2 and a computer program stored on the memory 3 and executable on the processor 2, the processor 2 implementing the steps of the first embodiment described above when executing the computer program.
In summary, according to the method and terminal for automatically rendering objects provided by the invention, the three-dimensional world coordinates of all vertex positions in each model to be rendered are obtained and converted into three-dimensional screen coordinates; from the resulting diagonal length and center position, together with the preset camera position vector and margin angle, the placement position and view angle of the rendering camera are obtained quickly and conveniently; the rendering camera is placed at that position, the model is rendered with that view angle, and the models are rendered in the order of the to-render list until the whole batch is complete. In other words, the position and angle of the camera are set automatically, realizing quick batch rendering and saving a great deal of manpower and time.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent changes made by the specification and drawings of the present invention, or direct or indirect application in the relevant art, are included in the scope of the present invention.

Claims (8)

1. A method of automatically rendering an object, comprising the steps of:
s1, acquiring three-dimensional world coordinates of all vertex positions in a model to be rendered, and recording the three-dimensional world coordinates in a world coordinate vertex array;
s2, automatically determining the placement position and the view angle of the rendering camera according to the world coordinate vertex array;
the step S2 specifically comprises the following steps:
s21, generating a world coordinate bounding box of the model to be rendered according to the world coordinate vertex array, and acquiring the diagonal length of the world coordinate bounding box, wherein the world coordinate bounding box comprises three-dimensional world coordinates of all vertex positions of the model to be rendered;
s22, traversing the world coordinate vertex array, and converting the world coordinate vertex array into a screen space coordinate system corresponding to the rendering camera to obtain a screen coordinate vertex array;
s23, generating a screen coordinate bounding box of the model to be rendered according to the screen coordinate vertex array, acquiring a three-dimensional screen coordinate of a central position of the screen coordinate bounding box, and converting the three-dimensional screen coordinate of the central position into a three-dimensional world coordinate, wherein the screen coordinate bounding box comprises three-dimensional screen coordinates of all vertex positions of the model to be rendered;
s24, calculating and obtaining a placement position of the rendering camera, wherein the placement position of the rendering camera is equal to a three-dimensional world coordinate + object distance of the central position, the object distance = standard object distance is the diagonal length of the world coordinate bounding box, and the camera position vector points to the direction of the rendering camera position from the central position;
s25, traversing the world coordinate vertex array, and calculating the included angles between vectors from all vertex positions of the world coordinate vertex array to the placement positions of the rendering cameras and the camera position vectors to obtain the view angles of the rendering cameras;
s3, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
and S4, rendering all the objects one by one according to the steps S1 to S3 until all the objects are processed.
2. The method for automatically rendering objects according to claim 1, wherein the step S25 is specifically:
directing the z-axis of the rendering camera toward the center position;
traversing the world coordinate vertex array, and calculating the included angle between the camera position vector and each vector from a vertex position of the world coordinate vertex array to the placement position of the rendering camera, so as to obtain the maximum included angle;
and calculating and obtaining the view angle of the rendering camera, wherein the view angle of the rendering camera = (maximum included angle + margin angle) × 2, and the margin angle is used for controlling the margin from the object to the edge of the picture.
3. The method for automatically rendering objects according to claim 1, wherein the step S3 is specifically:
s31, setting a background color of the rendering camera;
s32, creating a render texture object of the rendering camera;
s33, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
s34, reading picture pixel information from the render Texture object of the rendering camera to the Texture2D object, and inputting the picture pixel information into the PNG file through the PNG encoder to complete the rendering process.
4. The method for automatically rendering objects according to claim 1, wherein the step S4 is specifically: and rendering each model to be rendered of the model list to be rendered one by one according to the steps S1 to S3 until all objects are processed.
5. A terminal for automatically rendering an object, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer program:
s1, acquiring three-dimensional world coordinates of all vertex positions in a model to be rendered, and recording the three-dimensional world coordinates in a world coordinate vertex array;
s2, automatically determining the placement position and the view angle of the rendering camera according to the world coordinate vertex array;
the step S2 specifically comprises the following steps:
s21, generating a world coordinate bounding box of the model to be rendered according to the world coordinate vertex array, and acquiring the diagonal length of the world coordinate bounding box, wherein the world coordinate bounding box comprises three-dimensional world coordinates of all vertex positions of the model to be rendered;
s22, traversing the world coordinate vertex array, and converting the world coordinate vertex array into a screen space coordinate system corresponding to the rendering camera to obtain a screen coordinate vertex array;
s23, generating a screen coordinate bounding box of the model to be rendered according to the screen coordinate vertex array, acquiring a three-dimensional screen coordinate of a central position of the screen coordinate bounding box, and converting the three-dimensional screen coordinate of the central position into a three-dimensional world coordinate, wherein the screen coordinate bounding box comprises three-dimensional screen coordinates of all vertex positions of the model to be rendered;
s24, calculating and obtaining a placement position of the rendering camera, wherein the placement position of the rendering camera is equal to a three-dimensional world coordinate + object distance of the central position, the object distance = standard object distance is the diagonal length of the world coordinate bounding box, and the camera position vector points to the direction of the rendering camera position from the central position;
s25, traversing the world coordinate vertex array, and calculating the included angles between vectors from all vertex positions of the world coordinate vertex array to the placement positions of the rendering cameras and the camera position vectors to obtain the view angles of the rendering cameras;
s3, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
and S4, rendering all the objects one by one according to the steps S1 to S3 until all the objects are processed.
6. The terminal for automatically rendering objects according to claim 5, wherein the step S25 is specifically:
directing the z-axis of the rendering camera toward the central position;
traversing the world coordinate vertex array, and calculating the included angle between the vector from each vertex position of the world coordinate vertex array to the placement position of the rendering camera and the camera position vector, to obtain the maximum included angle;
and calculating and obtaining the view angle of the rendering camera, wherein the view angle of the rendering camera = (the maximum included angle + the margin angle) × 2, and the margin angle is used for controlling the margin from the object to the edge of the picture.
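The view-angle computation of claim 6 can be illustrated with a minimal Python sketch: take the angle between the camera position vector (central position → camera) and each vertex → camera vector, then double the largest such angle plus the margin. The 5-degree margin angle is an assumed example value, not taken from the claim:

```python
import math

def view_angle(vertices, camera_pos, center, margin_angle=5.0):
    # Camera position vector: from the central position toward the camera.
    axis = tuple(c - o for c, o in zip(camera_pos, center))

    def angle_to(vertex):
        # Vector from the vertex position to the camera placement position.
        d = tuple(c - v for c, v in zip(camera_pos, vertex))
        dot = sum(a * b for a, b in zip(axis, d))
        cos_t = dot / (math.hypot(*axis) * math.hypot(*d))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

    max_angle = max(angle_to(v) for v in vertices)
    # View angle = (maximum included angle + margin angle) * 2.
    return (max_angle + margin_angle) * 2
```

Doubling is needed because the maximum included angle is a half-angle: it spans from the camera axis to the outermost vertex, while a camera field of view spans the full cone.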
7. The terminal for automatically rendering objects according to claim 5, wherein the step S3 is specifically:
s31, setting a background color of the rendering camera;
s32, creating a RenderTexture object for the rendering camera;
s33, calling a rendering command, and rendering the model to be rendered according to the placement position and the view angle of the rendering camera;
s34, reading the picture pixel information from the RenderTexture object of the rendering camera into a Texture2D object, and writing the picture pixel information to a PNG file through a PNG encoder to complete the rendering process.
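Step S34 hands the pixel information to a PNG encoder. As a byte-level illustration of what such an encoder produces, here is a minimal standalone Python sketch of an 8-bit truecolor PNG writer (filter type 0 per scanline, zlib-compressed IDAT). This is illustrative only and is not the engine's own encoder referenced by the claim:

```python
import struct
import zlib

def _chunk(tag, data):
    # Each PNG chunk: 4-byte length, 4-byte type, data, CRC32 over type+data.
    body = tag + data
    return (struct.pack(">I", len(data)) + body
            + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF))

def encode_png(pixels):
    # pixels: rows of (r, g, b) byte triples; 8-bit depth, color type 2 (truecolor).
    height, width = len(pixels), len(pixels[0])
    raw = b"".join(
        b"\x00" + bytes(c for px in row for c in px)  # filter type 0 per scanline
        for row in pixels
    )
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n"          # PNG signature
            + _chunk(b"IHDR", ihdr)       # image header
            + _chunk(b"IDAT", zlib.compress(raw))  # compressed pixel data
            + _chunk(b"IEND", b""))       # end-of-stream marker
```

Reading the rendered pixels back from the camera's texture into a CPU-side buffer is what makes this final encoding step possible; the encoder itself only needs the raw rows.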
8. The terminal for automatically rendering objects according to claim 5, wherein the step S4 is specifically: and rendering each model to be rendered of the model list to be rendered one by one according to the steps S1 to S3 until all objects are processed.
CN201911034055.3A 2019-10-29 2019-10-29 Method and terminal for automatically rendering object Active CN112750188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911034055.3A CN112750188B (en) 2019-10-29 2019-10-29 Method and terminal for automatically rendering object

Publications (2)

Publication Number Publication Date
CN112750188A CN112750188A (en) 2021-05-04
CN112750188B true CN112750188B (en) 2023-11-24

Family

ID=75640072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911034055.3A Active CN112750188B (en) 2019-10-29 2019-10-29 Method and terminal for automatically rendering object

Country Status (1)

Country Link
CN (1) CN112750188B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104103092A (en) * 2014-07-24 2014-10-15 无锡梵天信息技术股份有限公司 Real-time dynamic shadowing realization method based on projector lamp
CN105894566A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Model rendering method and device
CN107563088A (en) * 2017-09-14 2018-01-09 北京邮电大学 A kind of light field display device emulation mode based on Ray Tracing Algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110082667A1 (en) * 2009-10-06 2011-04-07 Siemens Corporation System and method for view-dependent anatomic surface visualization

Also Published As

Publication number Publication date
CN112750188A (en) 2021-05-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant