CN116228947A - Virtual image rendering method - Google Patents

Virtual image rendering method

Info

Publication number
CN116228947A
Authority
CN
China
Prior art keywords
point cloud
cloud data
data
window
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211743010.5A
Other languages
Chinese (zh)
Inventor
禹飞
陈勇军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Technology University
Priority to CN202211743010.5A
Publication of CN116228947A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70 Game security or game management aspects
    • A63F13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an avatar rendering method, which belongs to the technical field of computers. The method comprises the following steps: receiving an avatar rendering instruction, the avatar rendering instruction including a game character selected by a user; controlling a user terminal to collect point cloud data and color image data of the user, the point cloud data and the color image data being acquired under the same visual angle; generating a characteristic three-dimensional image according to a preset characteristic type based on the point cloud data and the color image data; and splicing the characteristic three-dimensional image with the three-dimensional virtual image corresponding to the game character, and rendering the spliced three-dimensional virtual image to obtain a target three-dimensional virtual image. The application aims to solve the technical problem that existing game characters cannot be individually customized.

Description

Virtual image rendering method
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for rendering an avatar.
Background
With the spread of intelligent devices and the popularization of various applications, users increasingly demand personalized interaction using 3D avatars in related application scenarios.
Currently, the 3D avatars used in game application scenarios are all designed in advance by the game developer and stored in the game program. In the prior art, 3D avatars in a game application scenario are obtained through 3D modeling, and the appearance and style of the game characters are uniform, so the personalized requirements of game players cannot be met.
The foregoing is merely provided to facilitate an understanding of the principles of the present application and is not admitted to be prior art.
Disclosure of Invention
The main purpose of the application is to provide an avatar rendering method, which aims to solve the technical problem that existing game characters cannot be individually customized.
In order to achieve the above object, the present application provides an avatar rendering method, comprising the steps of:
receiving an avatar rendering instruction, the avatar rendering instruction including a game character selected by a user;
controlling a user terminal to collect point cloud data and color image data of a user; the point cloud data and the color image data are acquired under the same visual angle;
generating a characteristic three-dimensional image according to a preset characteristic type based on the point cloud data and the color image data;
and splicing the characteristic three-dimensional image with the three-dimensional virtual image corresponding to the game role, and rendering the spliced three-dimensional virtual image to obtain a target three-dimensional virtual image.
Optionally, the step of generating the feature three-dimensional image according to the preset feature type based on the point cloud data and the color image data includes:
extracting feature point cloud data corresponding to a preset feature type from the point cloud data based on the preset feature type;
extracting, from the color image data based on a preset mapping relation, characteristic color image data corresponding to the characteristic point cloud data; the mapping relation is an established mapping relation between pixel coordinates of the color image data and point cloud space coordinates of the point cloud data;
and after converting the characteristic point cloud data into a characteristic three-dimensional model and converting the characteristic color image data into texture parameters, performing splicing processing on the characteristic three-dimensional model and the texture parameters to generate a characteristic three-dimensional image.
Optionally, before the step of extracting feature point cloud data corresponding to the preset feature type from the point cloud data based on the preset feature type, the method further comprises the following steps:
carrying out noise reduction treatment on the point cloud data to obtain noise-reduced point cloud data;
the step of extracting feature point cloud data corresponding to the preset feature type from the point cloud data based on the preset feature type comprises the following steps: extracting, from the noise-reduced point cloud data based on the preset feature type, feature point cloud data corresponding to the preset feature type.
Optionally, the step of performing noise reduction processing on the point cloud data to obtain noise-reduced point cloud data comprises the following steps:
traversing the point cloud data and dividing the point cloud data into a plurality of point cloud windows;
clustering each point cloud window based on a mean clustering algorithm;
and carrying out noise reduction processing on the point cloud data in the point cloud window according to the clustering processing result to obtain the noise-reduced point cloud data.
Optionally, the step of performing noise reduction processing on the point cloud data in the point cloud window according to the result of the clustering processing to obtain noise-reduced point cloud data comprises the following steps:
if the number of the clustering centers corresponding to the point cloud windows is smaller than or equal to a preset threshold value, determining that the point clouds in the point cloud windows are effective point clouds;
if the number of the clustering centers corresponding to the point cloud window is larger than the preset threshold, determining the center of the point cloud window based on the clustering centers corresponding to the point cloud window;
acquiring a minimum inscribed frame of the point cloud window, and determining that point clouds in the point cloud window but outside the minimum inscribed frame are noise point clouds; the center of the minimum inscribed frame is the center of the point cloud window;
or constructing a reference vector according to the centers of the point cloud windows and the centers of the adjacent point cloud windows;
for each point cloud in the point cloud window, constructing a target vector according to the point cloud and the center of the point cloud window;
if the included angle between the reference vector and the target vector is smaller than a preset included angle threshold value, determining that the point cloud is an effective point cloud;
if the included angle between the reference vector and the target vector is larger than or equal to a preset included angle threshold value, determining that the point cloud is a noise point cloud;
and removing the noise point cloud to obtain noise-reduced point cloud data.
Optionally, the step of determining the center of the point cloud window based on the cluster center corresponding to the point cloud window includes:
constructing a clustering polygon based on the coordinate information of the clustering center corresponding to the point cloud window; wherein each vertex of the clustering polygon is a clustering center corresponding to the point cloud window;
and calculating the mass center of the clustering polygon as the center of the point cloud window.
Optionally, the step of traversing the point cloud data and dividing the point cloud data into a plurality of point cloud windows includes:
sampling the point cloud data through an FPS algorithm to obtain N key points;
and searching and grouping adjacent points around each key point through a K-nearest neighbor algorithm, and dividing the point cloud data into a plurality of point cloud windows.
Optionally, the preset feature type includes at least one of the following: hairstyle and accessories.
Optionally, the step of rendering the spliced three-dimensional avatar to obtain the target three-dimensional avatar includes:
performing attribute configuration on the spliced three-dimensional virtual image;
rendering the three-dimensional virtual image with the configured attributes to obtain a target three-dimensional virtual image.
Compared with the prior art in which game characters cannot be individually customized, the avatar rendering method of the present application receives an avatar rendering instruction, the avatar rendering instruction including a game character selected by a user; controls a user terminal to collect point cloud data and color image data of the user, the point cloud data and the color image data being acquired under the same visual angle; generates a characteristic three-dimensional image according to a preset characteristic type based on the point cloud data and the color image data; and splices the characteristic three-dimensional image with the three-dimensional virtual image corresponding to the game character, and renders the spliced three-dimensional virtual image to obtain a target three-dimensional virtual image. Therefore, in the present application, a characteristic three-dimensional image corresponding to the user can be generated according to the preset characteristic type and spliced with the three-dimensional virtual image corresponding to the game character selected by the user, so that a personalized three-dimensional virtual image corresponding to the user can be obtained, realizing individual customization of the game character.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating an embodiment of an avatar rendering method of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a first embodiment of an avatar rendering method.
In this embodiment, the avatar rendering method includes the steps of:
step S10, receiving an avatar rendering instruction, wherein the avatar rendering instruction comprises a game role selected by a user;
step S20, controlling a user terminal to collect point cloud data and color image data of a user; the point cloud data and the color image data are acquired under the same visual angle;
step S30, generating a characteristic three-dimensional image according to a preset characteristic type based on the point cloud data and the color image data;
and step S40, splicing the characteristic three-dimensional image and the three-dimensional virtual image corresponding to the game role, and rendering the spliced three-dimensional virtual image to obtain a target three-dimensional virtual image.
Compared with the prior art in which game characters cannot be individually customized, in this embodiment a characteristic three-dimensional image corresponding to the user can be generated according to the preset characteristic type and spliced with the three-dimensional virtual image corresponding to the game character selected by the user, so that a personalized three-dimensional virtual image corresponding to the user can be obtained, realizing individual customization of the game character. In this embodiment, only the characteristic three-dimensional image corresponding to the user is generated, and this characteristic three-dimensional image is only a part of the personalized three-dimensional virtual image, rather than the personalized three-dimensional virtual image being generated directly; this reduces the consumption of computing resources on the server side while allowing the personalized three-dimensional virtual image to be obtained more quickly.
The method comprises the following specific steps:
and step S10, receiving an avatar rendering instruction, wherein the avatar rendering instruction comprises a game role selected by a user.
It should be noted that, in this embodiment, the avatar rendering method is applied to a server. The application scenario of the avatar rendering method is a game application scenario.
As an example, access to avatar rendering may be an interface permission that the game application developer or operator decides, according to its own circumstances, to open free of charge or for a fee. If avatar rendering is free, the user can directly initiate an avatar rendering request; if it is paid, the user initiates the rendering request after completing the corresponding payment. That is, the server side needs to determine, according to the received user information (user ID, account information, etc.), whether the user has paid the related fee or satisfies other restriction conditions, thereby determining whether the user meets the conditions for granting the avatar rendering request.
As an example, a three-dimensional avatar repository is provided on the server side, in which the three-dimensional avatars corresponding to a plurality of game characters are stored. Therefore, the avatar rendering instruction sent by the user terminal to the server only needs to include the game character selected by the user; compared with the corresponding three-dimensional avatar, the selected game character occupies a much smaller data volume, which reduces the consumption of communication resources between the user terminal and the server. A minimal sketch of this server-side handling follows.
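The following Python sketch illustrates, under assumed names (RenderRequest, AVATAR_LIBRARY, and user_is_entitled are hypothetical and not from the patent), how such a server might check the user's entitlement and look up the stored three-dimensional avatar for the selected game character.

```python
# Minimal sketch of the server-side request handling described above.
# All names here are illustrative assumptions, not part of the patent.
from dataclasses import dataclass

# Hypothetical repository of pre-built three-dimensional avatars, keyed by game character.
AVATAR_LIBRARY = {
    "knight": "models/knight.glb",
    "mage": "models/mage.glb",
}

@dataclass
class RenderRequest:
    user_id: str
    game_character: str  # only the character identifier travels over the wire

def user_is_entitled(user_id: str) -> bool:
    """Placeholder entitlement check (payment or other restriction conditions)."""
    return True

def handle_render_request(req: RenderRequest) -> str:
    if not user_is_entitled(req.user_id):
        raise PermissionError("user has not satisfied the rendering conditions")
    try:
        return AVATAR_LIBRARY[req.game_character]  # path to the stored 3D avatar
    except KeyError:
        raise ValueError(f"unknown game character: {req.game_character}")
```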
As an example, the server and the user terminal communicate data through USB protocol, LVDS protocol, MIPI protocol, etc.
Step S20, controlling a user terminal to collect point cloud data and color image data of a user; the point cloud data and the color image data are acquired under the same visual angle.
As an example, a lidar camera and a video camera are fixedly arranged on the user terminal, and a fixed transformation relationship exists between the coordinate system of the lidar camera and the coordinate system of the video camera. The lidar camera is used to acquire point cloud data of the user, and the video camera is used to acquire color image data of the user.
As an example, the user terminal may further perform three-dimensional photographing on the user from multiple angles using a three-dimensional scanning device (such as a depth camera) to obtain point cloud data of the user.
As an example, the user terminal may be stationary or moving while collecting the user's point cloud data and color image data, and the shooting location may be anywhere, including but not limited to the game developer's office, the game designer's home, the game user's home, and a gaming establishment.
And step S30, generating a characteristic three-dimensional image according to a preset characteristic type based on the point cloud data and the color image data.
As an example, the step of generating a feature three-dimensional avatar according to a preset feature type based on the point cloud data and the color image data includes:
step S31, extracting feature point cloud data corresponding to a preset feature type from the point cloud data based on the preset feature type.
Wherein the preset feature type comprises at least one of the following: hairstyle and accessories. In this embodiment, the accessories include, but are not limited to: glasses, hats, ear studs, scarves, watches, bracelets, etc.
As one example, each point in the point cloud data is assigned an attribution type. On this basis, the feature point cloud data corresponding to the preset feature type can be extracted from the point cloud data according to the attribution type of each point, as sketched below.
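A minimal Python sketch of this extraction step follows; the array layout and label values are illustrative assumptions rather than details from the patent.

```python
# Minimal sketch of extracting feature point cloud data by attribution type.
# The label encoding is an assumption for illustration.
import numpy as np

def extract_feature_points(points: np.ndarray, labels: np.ndarray, feature_label: int) -> np.ndarray:
    """points: (N, 3) xyz coordinates; labels: (N,) per-point attribution type.
    Returns only the points whose label matches the preset feature type
    (e.g. hair or an accessory)."""
    return points[labels == feature_label]

# usage: hair_points = extract_feature_points(points, labels, feature_label=1)
```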
Step S32, extracting characteristic color image data corresponding to the characteristic point cloud data from the color image data based on a preset mapping relation; the mapping relation is an established mapping relation between pixel coordinates of the color image data and point cloud space coordinates of the point cloud data.
As an example, the process of establishing the mapping relationship between the pixel coordinates of the color image data and the point cloud space coordinates of the point cloud data is as follows:
1) Co-located points are extracted from the color image data and the point cloud data, and a projection transformation matrix is solved according to a calibration model;
2) According to the projection transformation matrix parameters, the extrinsic transformation parameters between the lidar unit coordinate system and the camera unit coordinate system are solved, completing the transformation between the two coordinate systems and thereby obtaining the mapping relation between the pixel coordinates of the color image data and the point cloud space coordinates of the point cloud data. A sketch of the resulting projection follows.
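A minimal Python sketch of the resulting point-to-pixel mapping follows; the intrinsic matrix K and the extrinsic parameters R, t are illustrative placeholders, not values given in the patent.

```python
# Minimal sketch of projecting a lidar point into pixel coordinates once the
# calibration (intrinsics K, extrinsics R, t) has been solved as described above.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # assumed camera intrinsic matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # assumed lidar-to-camera rotation
t = np.array([0.05, 0.0, 0.0])       # assumed lidar-to-camera translation (metres)

def point_to_pixel(p_lidar: np.ndarray) -> tuple[float, float]:
    """Map a 3D point in the lidar coordinate system to (u, v) pixel coordinates."""
    p_cam = R @ p_lidar + t          # extrinsic transform into the camera frame
    uvw = K @ p_cam                  # perspective projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# usage: u, v = point_to_pixel(np.array([0.1, -0.2, 1.5]))
```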
Step S33, after converting the characteristic point cloud data into a characteristic three-dimensional model and converting the characteristic color image data into texture parameters, performing splicing processing on the characteristic three-dimensional model and the texture parameters to generate a characteristic three-dimensional image.
As an example, the step of converting the feature point cloud data into a feature three-dimensional model includes:
A point cloud body is constructed by surface fitting (such as B-spline surface fitting), and a spatial geometric model of the preset feature is constructed by a surface modeling method. A certain feature point of the point cloud body is taken as the coordinate origin to establish a three-dimensional rectangular coordinate system, the other feature values of the point cloud body are mapped into this coordinate system, the point cloud body is divided into a plurality of curved surface patches, and after curve fitting (such as B-spline fitting) a characteristic three-dimensional model with geometric characteristic curves is obtained.
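The patent describes B-spline surface and curve fitting; the sketch below instead uses Open3D's Poisson surface reconstruction as a rough, swapped-in stand-in, purely to illustrate the point-cloud-to-model step. The radius, max_nn, and depth values are assumptions.

```python
# Rough stand-in for the surface-fitting step: rebuild a mesh from the feature
# point cloud with Poisson surface reconstruction (a different technique than
# the B-spline fitting described in the text). Parameters are assumptions.
import numpy as np
import open3d as o3d

def points_to_mesh(points: np.ndarray) -> o3d.geometry.TriangleMesh:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # normals are required by Poisson reconstruction
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh
```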
As one example, texture parameters include, but are not limited to: profile, color, material, etc.
As an example, the step of performing stitching processing on the feature three-dimensional model and the texture parameter to generate a feature three-dimensional image includes:
and carrying out surface fitting of the texture parameters onto the characteristic three-dimensional model based on the mapping relation between the characteristic point cloud data and the characteristic color image data, to generate a characteristic three-dimensional image.
In an embodiment of the present application, before the step of extracting feature point cloud data corresponding to a preset feature type from the point cloud data based on the preset feature type, the method further includes:
and carrying out noise reduction processing on the point cloud data to obtain the noise-reduced point cloud data.
Based on the above, the step of extracting feature point cloud data corresponding to the preset feature type from the point cloud data based on the preset feature type includes:
and extracting feature point cloud data corresponding to the preset feature type from the noise-reduced point cloud data based on the preset feature type.
In one example, the step of performing noise reduction processing on the point cloud data to obtain noise-reduced point cloud data includes:
and A1, traversing the point cloud data, and dividing the point cloud data into a plurality of point cloud windows.
Specifically, the step of traversing the point cloud data and dividing the point cloud data into a plurality of point cloud windows includes:
sampling the point cloud data through an FPS algorithm to obtain N key points;
and searching for and grouping the neighboring points around each key point through a K-nearest-neighbor algorithm, thereby dividing the point cloud data into a plurality of point cloud windows (see the sketch below).
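A minimal Python sketch of this windowing step (farthest point sampling followed by K-nearest-neighbor grouping) follows; the values of N and K are illustrative assumptions.

```python
# Minimal sketch of FPS sampling followed by K-nearest-neighbour grouping to
# split a point cloud into windows, as described above.
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_keypoints: int) -> np.ndarray:
    """Return indices of n_keypoints points that are mutually far apart."""
    n = points.shape[0]
    chosen = [np.random.randint(n)]
    dist = np.full(n, np.inf)
    for _ in range(n_keypoints - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.array(chosen)

def build_windows(points: np.ndarray, n_keypoints: int = 64, k: int = 128) -> list[np.ndarray]:
    """Group the k nearest neighbours of each key point into one point cloud window."""
    keys = farthest_point_sampling(points, n_keypoints)
    windows = []
    for idx in keys:
        d = np.linalg.norm(points - points[idx], axis=1)
        windows.append(points[np.argsort(d)[:k]])
    return windows
```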
Step A2, clustering each point cloud window based on a mean value clustering algorithm;
as an example, clustering the point cloud window based on a mean clustering algorithm includes:
1) Initializing K clustering centers which are U1, U2 and … … Uk respectively;
2) Distributing all the point clouds in the point cloud window to a nearest cluster set according to a principle of minimum distance, wherein the distance is calculated by Euclidean distance;
3) Taking the mean value of the space coordinates of all the point clouds in each cluster set as a new cluster center;
4) Repeating the steps 1) to 3) until the clustering center is not changed any more;
5) And finally, obtaining k clustering centers corresponding to the point cloud window.
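A minimal Python sketch of this mean (k-means) clustering of one point cloud window follows; K, the iteration limit, and the convergence tolerance are illustrative choices.

```python
# Minimal k-means sketch matching the steps above.
import numpy as np

def kmeans(window: np.ndarray, k: int = 3, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """window: (N, 3) points of one point cloud window. Returns the (k, 3) cluster centers."""
    # 1) initialise K cluster centers from random points of the window
    centers = window[np.random.choice(window.shape[0], k, replace=False)]
    for _ in range(max_iter):
        # 2) assign every point to the nearest center (Euclidean distance)
        d = np.linalg.norm(window[:, None, :] - centers[None, :, :], axis=2)
        assign = np.argmin(d, axis=1)
        # 3) recompute each center as the mean of its assigned points
        new_centers = np.array([
            window[assign == i].mean(axis=0) if np.any(assign == i) else centers[i]
            for i in range(k)])
        # 4) stop when the centers no longer change
        if np.linalg.norm(new_centers - centers) < tol:
            break
        centers = new_centers
    return centers
```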
Step A3: carrying out noise reduction processing on the point cloud data in the point cloud window according to the result of the clustering processing, to obtain the noise-reduced point cloud data.
As an example, the step of performing noise reduction processing on the point cloud data in the point cloud window according to the result of the clustering processing to obtain noise-reduced point cloud data includes the following (a sketch of these rules follows the list):
if the number of the clustering centers corresponding to the point cloud window is smaller than or equal to a preset threshold value, determining that the point clouds in the point cloud window are effective point clouds;
if the number of the clustering centers corresponding to the point cloud window is larger than the preset threshold, determining the center of the point cloud window based on the clustering centers corresponding to the point cloud window;
acquiring a minimum inscribed frame of the point cloud window, determining that point clouds in the point cloud window but outside the minimum inscribed frame are noise point clouds, and determining that the point clouds in the minimum inscribed frame are effective point clouds; the center of the minimum inscribed frame is the center of the point cloud window;
or constructing a reference vector according to the centers of the point cloud windows and the centers of the adjacent point cloud windows;
for each point cloud in the point cloud window, constructing a target vector according to the point cloud and the center of the point cloud window;
if the included angle between the reference vector and the target vector is smaller than a preset included angle threshold value, determining that the point cloud is an effective point cloud;
if the included angle between the reference vector and the target vector is larger than or equal to a preset included angle threshold value, determining that the point cloud is a noise point cloud;
and removing the noise point cloud to obtain noise-reduced point cloud data.
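The following Python sketch illustrates the two filtering rules above. The threshold values, the axis-aligned approximation of the minimum inscribed frame, and the choice of neighboring window are assumptions for illustration only.

```python
# Minimal sketch of the two noise-filtering rules above.
import numpy as np

def denoise_by_center_count(window: np.ndarray, centers: np.ndarray,
                            max_centers: int = 2) -> np.ndarray:
    """Rule 1: few cluster centers -> whole window valid; otherwise keep only the
    points inside a minimal inscribed box centred on the window center."""
    if centers.shape[0] <= max_centers:
        return window
    window_center = centers.mean(axis=0)          # stands in for the polygon centroid
    half = (window.max(axis=0) - window.min(axis=0)).min() / 2.0
    inside = np.all(np.abs(window - window_center) <= half, axis=1)
    return window[inside]

def denoise_by_angle(window: np.ndarray, window_center: np.ndarray,
                     neighbour_center: np.ndarray, max_angle_deg: float = 60.0) -> np.ndarray:
    """Rule 2: keep points whose vector from the window center stays within a
    preset angle of the reference vector toward a neighbouring window center."""
    ref = neighbour_center - window_center
    tgt = window - window_center
    cos = (tgt @ ref) / (np.linalg.norm(tgt, axis=1) * np.linalg.norm(ref) + 1e-12)
    angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return window[angles < max_angle_deg]
```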
The step of determining the center of the point cloud window based on the cluster center corresponding to the point cloud window includes:
constructing a clustering polygon based on the coordinate information of the clustering center corresponding to the point cloud window; wherein each vertex of the clustering polygon is a clustering center corresponding to the point cloud window;
and calculating the centroid of the clustering polygon as the center of the point cloud window (a sketch follows).
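A minimal sketch of this step follows; for simplicity the centroid is taken as the plain mean of the cluster-center coordinates, whereas the area-weighted polygon centroid described in the text is an alternative.

```python
# Minimal sketch: the window center as the centroid of the polygon whose
# vertices are the cluster centers (vertex mean used as a simplification).
import numpy as np

def window_center_from_clusters(cluster_centers: np.ndarray) -> np.ndarray:
    """cluster_centers: (k, 3) coordinates of the cluster centers (polygon vertices)."""
    return cluster_centers.mean(axis=0)
```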
The reference vector is constructed according to the center of the point cloud window and the centers of the adjacent point cloud windows, that is, the reference vector is calculated by the coordinate information of the centers of the point cloud windows and the coordinate information of the centers of the adjacent point cloud windows.
It should be noted that, the target vector is constructed according to the centers of the point cloud and the point cloud window, that is, the target vector is calculated through the coordinate information of the point cloud and the coordinate information of the center of the point cloud window where the point cloud is located.
In this embodiment, the preset threshold and the preset included-angle threshold may be set according to practical applications, which is not limited herein.
As an example, random noise removal and outlier rejection on the point cloud data can also be realized through a bilateral filtering algorithm, after which defects in the point cloud body are repaired using a least squares method.
And step S40, splicing the characteristic three-dimensional image and the three-dimensional virtual image corresponding to the game role, and rendering the spliced three-dimensional virtual image to obtain a target three-dimensional virtual image.
As an example, the process of splicing the characteristic three-dimensional image and the three-dimensional avatar corresponding to the game character is as follows (a sketch follows below):
acquiring depth data of the characteristic three-dimensional image and depth data of the three-dimensional virtual image corresponding to the game role;
fusing the depth data of the characteristic three-dimensional image and the depth data of the three-dimensional virtual image corresponding to the game role according to the position relationship between the characteristic three-dimensional image and the three-dimensional virtual image corresponding to the game role to obtain fused depth data;
and converting the fusion depth data into the spliced three-dimensional virtual image.
The specific implementation process of converting the fusion depth data into the spliced three-dimensional virtual image is the same as the specific implementation process of converting the feature point cloud data into the feature three-dimensional model, and will not be described herein.
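A minimal Python sketch of this splicing step follows; the 4x4 attachment transform standing in for the "positional relationship" between the two geometries is an assumed placeholder.

```python
# Minimal sketch of the stitching step: the feature geometry is moved into the
# avatar's coordinate frame by an attachment transform and the two point sets
# are merged before being converted back into a model.
import numpy as np

def stitch(feature_points: np.ndarray, avatar_points: np.ndarray,
           attach: np.ndarray) -> np.ndarray:
    """feature_points: (N, 3); avatar_points: (M, 3); attach: assumed 4x4 transform."""
    homog = np.hstack([feature_points, np.ones((feature_points.shape[0], 1))])
    placed = (attach @ homog.T).T[:, :3]       # feature geometry placed on the avatar
    return np.vstack([avatar_points, placed])  # fused point set, later re-meshed
```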
As an example, the step of rendering the spliced three-dimensional avatar to obtain the target three-dimensional avatar includes:
performing attribute configuration on the spliced three-dimensional virtual image;
rendering the three-dimensional virtual image with the configured attributes to obtain a target three-dimensional virtual image.
The attributes used when configuring the spliced three-dimensional avatar are pre-designed basic data; they give the game character the states the user requires, so that the character becomes the kind of creation the user expects and can perform certain functions and tasks within the application. Because game characters are developed uniformly by game developers and are not fully satisfactory to every user, the user can make secondary adjustments to the generated game character (the spliced three-dimensional avatar) according to his or her own expectations, giving the character a visual effect combining the virtual and the real and meeting the user's personalized expectations.
For example, the attributes used in configuring the spliced three-dimensional avatar may be pre-designed functional parameters, including but not limited to: behavior parameters, expression parameters, tracking parameters, skill-release parameters, etc. For example, in an online game, each original character may contain behavior parameters, tracking parameters, and so on. The character model is configured with a plurality of fusion deformations (blend shapes), where each fusion deformation may correspond to a polygonal mesh, a point cloud, or any other representation of a geometric three-dimensional surface suitable for limb movements and/or surfaces such as facial expressions, thereby giving the original character mobility. Meanwhile, tracking parameters, i.e. the weights of the fusion deformations, are configured and combined in a weighted manner to generate limb actions and/or facial expressions, so that the original character can fulfil certain functions according to the requirements of the game scenario (a sketch of the weighted combination follows).
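A minimal Python sketch of such a weighted combination of fusion deformations follows; the shapes and weights are illustrative assumptions.

```python
# Minimal sketch of weighted blend shapes ("fusion deformations"): the final
# vertex positions are the base mesh plus the weighted sum of per-shape deltas.
import numpy as np

def apply_blend_shapes(base_vertices: np.ndarray, deltas: np.ndarray,
                       weights: np.ndarray) -> np.ndarray:
    """base_vertices: (V, 3); deltas: (S, V, 3) offsets per blend shape;
    weights: (S,) tracking weights in [0, 1]."""
    return base_vertices + np.tensordot(weights, deltas, axes=1)

# usage: posed = apply_blend_shapes(base, deltas, np.array([0.7, 0.0, 0.2]))
```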
For example, the attributes used in configuring the spliced three-dimensional avatar may also be pre-designed logic parameters, including but not limited to sequencers, selectors, loops, randomizers, and the like, which arrange the functional parameters and condition parameters according to a certain logic flow so that the game character has a "soul" or mental function and becomes lively. For example, in an online game, a character that moves, shows expressions, and even releases skills is not merely an appearance effect; with the support of logic parameters it behaves more like a real person, which makes the character more attractive and engaging, and thus keeps the player more interested in the game application.
In addition, the target three-dimensional avatar may be adjusted according to the actual needs of the user. For example, in an online game, if the user is not satisfied with the current target three-dimensional avatar, the final avatar can be obtained by reshaping details and overall contours such as hairstyle, color, face, complexion, and even body shape according to the user's own aesthetics, so as to achieve the avatar effect the user desires.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The embodiment numbers are merely for the purpose of description and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a device, or a network device, etc.) to perform the method described in the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (9)

1. An avatar rendering method, characterized in that the avatar rendering method comprises the steps of:
receiving an avatar rendering instruction, the avatar rendering instruction including a game character selected by a user;
controlling a user terminal to collect point cloud data and color image data of a user; the point cloud data and the color image data are acquired under the same visual angle;
generating a characteristic three-dimensional image according to a preset characteristic type based on the point cloud data and the color image data;
and splicing the characteristic three-dimensional image with the three-dimensional virtual image corresponding to the game role, and rendering the spliced three-dimensional virtual image to obtain a target three-dimensional virtual image.
2. The avatar rendering method of claim 1, wherein the step of generating the feature three-dimensional avatar according to a preset feature type based on the point cloud data and the color image data comprises:
extracting feature point cloud data corresponding to a preset feature type from the point cloud data based on the preset feature type;
extracting characteristic color image data corresponding to the characteristic point cloud data from the color image data based on a preset mapping relation; the mapping relation is an established mapping relation between pixel coordinates of the color image data and point cloud space coordinates of the point cloud data;
and after converting the characteristic point cloud data into a characteristic three-dimensional model and converting the characteristic color image data into texture parameters, performing splicing processing on the characteristic three-dimensional model and the texture parameters to generate a characteristic three-dimensional image.
3. The avatar rendering method of claim 2, wherein before the step of extracting feature point cloud data corresponding to the preset feature type from the point cloud data based on the preset feature type, the method further comprises:
carrying out noise reduction treatment on the point cloud data to obtain noise-reduced point cloud data;
the step of extracting feature point cloud data corresponding to the preset feature type from the point cloud data based on the preset feature type comprises the following steps:
and extracting feature point cloud data corresponding to the preset feature type from the noise-reduced point cloud data based on the preset feature type.
4. The avatar rendering method of claim 3, wherein the step of performing noise reduction processing on the point cloud data to obtain noise-reduced point cloud data comprises:
traversing the point cloud data and dividing the point cloud data into a plurality of point cloud windows;
clustering each point cloud window based on a mean clustering algorithm;
and carrying out noise reduction processing on the point cloud data in the point cloud window according to the clustering processing result to obtain the noise-reduced point cloud data.
5. The avatar rendering method of claim 4, wherein the step of performing noise reduction processing on the point cloud data in the point cloud window according to the result of the clustering processing to obtain noise reduced point cloud data comprises:
if the number of the clustering centers corresponding to the point cloud windows is smaller than or equal to a preset threshold value, determining that the point clouds in the point cloud windows are effective point clouds;
if the number of the clustering centers corresponding to the point cloud window is larger than the preset threshold, determining the center of the point cloud window based on the clustering centers corresponding to the point cloud window;
acquiring a minimum inscribed frame of the point cloud window, and determining that point clouds in the point cloud window but outside the minimum inscribed frame are noise point clouds; the center of the minimum inscribed frame is the center of the point cloud window;
or constructing a reference vector according to the centers of the point cloud windows and the centers of the adjacent point cloud windows;
for each point cloud in the point cloud window, constructing a target vector according to the point cloud and the center of the point cloud window;
if the included angle between the reference vector and the target vector is smaller than a preset included angle threshold value, determining that the point cloud is an effective point cloud;
if the included angle between the reference vector and the target vector is larger than or equal to a preset included angle threshold value, determining that the point cloud is a noise point cloud;
and removing the noise point cloud to obtain noise-reduced point cloud data.
6. The avatar rendering method of claim 5, wherein the step of determining the center of the point cloud window based on the cluster center corresponding to the point cloud window comprises:
constructing a clustering polygon based on the coordinate information of the clustering center corresponding to the point cloud window; wherein each vertex of the clustering polygon is a clustering center corresponding to the point cloud window;
and calculating the mass center of the clustering polygon as the center of the point cloud window.
7. The avatar rendering method of claim 4, wherein the step of traversing the point cloud data and dividing the point cloud data into a plurality of point cloud windows comprises:
sampling the point cloud data through an FPS algorithm to obtain N key points;
and searching and grouping adjacent points around each key point through a K-nearest neighbor algorithm, and dividing the point cloud data into a plurality of point cloud windows.
8. The avatar rendering method of claim 3, wherein the preset feature type includes at least one of: hairstyle and accessories.
9. The avatar rendering method of claim 1, wherein the step of rendering the spliced three-dimensional avatar to obtain the target three-dimensional avatar comprises:
performing attribute configuration on the spliced three-dimensional virtual image;
rendering the three-dimensional virtual image with the configured attributes to obtain a target three-dimensional virtual image.
CN202211743010.5A 2022-12-29 2022-12-29 Virtual image rendering method Pending CN116228947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211743010.5A CN116228947A (en) 2022-12-29 2022-12-29 Virtual image rendering method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211743010.5A CN116228947A (en) 2022-12-29 2022-12-29 Virtual image rendering method

Publications (1)

Publication Number Publication Date
CN116228947A true CN116228947A (en) 2023-06-06

Family

ID=86581649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211743010.5A Pending CN116228947A (en) 2022-12-29 2022-12-29 Virtual image rendering method

Country Status (1)

Country Link
CN (1) CN116228947A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination