CN107622522B - Method and device for generating game material - Google Patents


Publication number
CN107622522B
CN107622522B (application CN201710676383.8A)
Authority
CN
China
Prior art keywords
user, body part, game, contour information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710676383.8A
Other languages
Chinese (zh)
Other versions
CN107622522A (en)
Inventor
周意保
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710676383.8A
Publication of CN107622522A
Application granted granted Critical
Publication of CN107622522B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a device for generating game materials. The method comprises the following steps: acquiring depth information used for creating an avatar of a user in a game, the depth information being generated after structured light is projected onto the user, and constructing a 3D model of the user from the depth information; identifying contour information of each body part of the user from the 3D model; for each body part, generating a target material corresponding to that body part from its contour information; and updating the material library of the game with the target materials. In this way the user can construct a personalized game character image from the materials in the game's material library, can conveniently and quickly distinguish his or her own character image, and can replace it at any time as needed using the materials in the library. User experience and the cohesion of the game are thereby improved, solving the prior-art problems that users find it hard to distinguish their own game character images and that game cohesion is low.

Description

Method and device for generating game material
Technical Field
The invention relates to the field of terminal equipment, in particular to a method and a device for generating game materials.
Background
At present, in some game scenes, game designers provide a set of game character images, and users select a favorite image from among them. However, the character images a game offers are generally limited in number, so when setting up a game character a user can only pick the required image from this limited set to control and operate.
In existing games it therefore often happens that many users adopt the same character image, making it difficult for a user to pick out his or her own character image in time and thus to operate it in time. At present, no related technology lets users customize personalized game character images according to their own needs, so the cohesion of existing games is low.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the invention provides a method for generating game materials: a 3D model of the user is obtained, contour information of each body part of the user is identified from the 3D model, corresponding target materials are generated based on the contour information, and the material library of the game is updated. The user can then construct a personalized game character image from the materials in the library, which solves the prior-art problems that users can hardly distinguish their own game character images and that game cohesion is low.
The invention also provides a device for generating the game materials.
The invention also proposes a non-transitory computer-readable storage medium.
The invention also provides computer equipment.
The embodiment of the first aspect of the invention provides a method for generating game materials, which comprises the following steps:
acquiring the depth information for creating an avatar of a user in a game, and constructing a 3D model of the user according to the depth information, wherein the depth information is generated after structured light is projected to the user;
identifying contour information of each body part of the user from the 3D model;
for each body part, generating a target material corresponding to the body part by using the contour information of the body part;
updating a material library of the game by using the target materials; wherein the material library stores at least one material of each body part.
According to the method for generating game materials, depth information used for creating the avatar of a user in a game is obtained, a 3D model of the user is built from the depth information, contour information of each body part of the user is identified from the 3D model, a target material corresponding to each body part is generated from its contour information, and the material library of the game is updated with the target materials. The user can therefore construct a personalized game character image from the materials in the library; since this image differs from those of other users, the user can conveniently and quickly distinguish his or her own character, and can also change it at any time as needed, which improves the user experience and the cohesion of the game. Because materials for each body part of the user are stored in the library, the character image the user constructs from them resembles the user's own appearance, so the user can easily recognize it, solving the prior-art problem that users find it hard to distinguish their own game character images.
The embodiment of the second aspect of the invention provides a device for generating game materials, which comprises:
an obtaining module, configured to obtain the depth information used to create an avatar of a user in a game, and construct a 3D model of the user according to the depth information, where the depth information is generated after structured light is projected to the user;
an identification module for identifying contour information of each body part of the user from the 3D model;
the generating module is used for generating a target material corresponding to each body part by utilizing the contour information of the body part;
and the updating module is used for updating the material library of the game by utilizing the target material.
According to the device for generating game materials, depth information used for creating the avatar of a user in a game is obtained, a 3D model of the user is built from the depth information, contour information of each body part of the user is identified from the 3D model, a target material corresponding to each body part is generated from its contour information, and the material library of the game is updated with the target materials. The user can therefore construct a personalized game character image from the materials in the library; since this image differs from those of other users, the user can conveniently and quickly distinguish his or her own character, and can also change it at any time as needed, which improves the user experience and the cohesion of the game. Because materials for each body part of the user are stored in the library, the character image the user constructs from them resembles the user's own appearance, so the user can easily recognize it, solving the prior-art problem that users find it hard to distinguish their own game character images.
A third aspect of the invention provides one or more non-transitory computer-readable storage media embodying computer-executable instructions that, when executed by one or more processors, cause the processors to perform a method of generating game material as described in embodiments of the first aspect.
A fourth aspect of the present invention provides a computer device, including a memory and a processor, where the memory stores computer readable instructions, and the instructions, when executed by the processor, cause the processor to execute the method for generating game material according to the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart illustrating a method for generating game materials according to an embodiment of the present invention;
FIG. 2 is a schematic view of an apparatus assembly for projecting structured light;
FIG. 3 is a schematic diagram of uniform structured light provided by an embodiment of the present invention;
FIG. 4 is a flow chart illustrating a method for generating game materials according to another embodiment of the present invention;
FIG. 5 is a schematic illustration of non-uniform structured light in an embodiment of the present invention;
FIG. 6 is a flow chart illustrating a method for generating game materials according to another embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an apparatus for generating game materials according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an apparatus for generating game materials according to another embodiment of the present invention;
fig. 9 is a schematic structural diagram of an image processing circuit in a terminal according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A method and apparatus for generating game materials according to an embodiment of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for generating game materials according to an embodiment of the present invention.
As shown in fig. 1, the method of generating game materials includes the steps of:
step 101, obtaining depth information used for creating an avatar of a user in a game, and constructing a 3D model of the user according to the depth information, wherein the depth information is generated after structured light is projected to the user.
Among them, the projection set of the known spatial direction light beam is called structured light (structured light).
As an example, FIG. 2 is a schematic diagram of an apparatus assembly for projecting structured light. The projection set of structured light is merely illustrated as a set of lines in fig. 2; the principle is similar for structured light projected as a speckle pattern. As shown in fig. 2, the apparatus may include an optical projector and a camera. The optical projector projects a structured-light pattern into the space where the object to be measured (the user) is located, forming on the user's body surface a three-dimensional image of light bars modulated by the shape of that surface. The three-dimensional image is detected by a camera at another location to obtain a distorted two-dimensional image of the light bars. The degree of distortion of the bars depends on the relative position between the optical projector and the camera and on the contour of the user's body surface: intuitively, the displacement (or offset) along a light bar is proportional to the height of the body surface, a kink in a bar represents a change of plane, and a discontinuity indicates a physical gap in the surface. When the relative position between the optical projector and the camera is fixed, the three-dimensional contour of the user's body surface can be reproduced from the coordinates of the distorted two-dimensional light-bar image, i.e. a 3D model of the user is obtained.
As an example, the 3D model of the user can be obtained by calculation using formula (1), where formula (1) is as follows:
x = b·x′ / (F·cot θ − x′),  y = b·y′ / (F·cot θ − x′),  z = b·F / (F·cot θ − x′)    (1)
wherein (x, y, z) is the coordinates of the acquired 3D model of the user, b is the baseline distance between the projection device and the camera, F is the focal length of the camera, θ is the projection angle when the projection device projects the preset structured light pattern to the space where the user is located, and (x ', y') is the coordinates of the two-dimensional distorted image of the user.
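As an illustrative sketch (not part of the patent disclosure), the triangulation relation of formula (1) can be written in code. The cotangent form is the conventional one for projector–camera triangulation with the variables defined above, and all numeric values below are made-up demonstration inputs:

```python
import numpy as np

def reconstruct_point(xp, yp, b, F, theta):
    """Recover a 3D point (x, y, z) from a distorted 2D image point (x', y')
    via formula (1): b is the projector-camera baseline, F the camera focal
    length, theta the projection angle of the structured-light pattern."""
    denom = F / np.tan(theta) - xp  # F*cot(theta) - x'
    x = b * xp / denom
    y = b * yp / denom
    z = b * F / denom
    return x, y, z

# Hypothetical calibration values, purely for illustration:
x, y, z = reconstruct_point(xp=10.0, yp=5.0, b=50.0, F=500.0, theta=np.pi / 4)
```

Note that all three coordinates share the same denominator, so the ratio x/y equals x′/y′: the reconstruction only rescales the image ray by the triangulated depth.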
As an example, the types of structured light include grating, light-spot and speckle patterns (including circular speckle and cross speckle); uniformly arranged structured light is shown in fig. 3. Correspondingly, the device generating the structured light may be a projection device or instrument, such as an optical projector that projects a spot, line, grating, grid or speckle pattern onto the object to be examined, or a laser that generates a laser beam.
In this embodiment, a structured light transmitter may be installed on a terminal device such as a computer, a mobile phone, a handheld computer, or the like, and the structured light transmitter is configured to transmit structured light to a user.
After the terminal device acquires the depth information for creating the avatar of the user in the game through the structured light, feature point data can be extracted from the depth information, and the feature points are connected into a network according to the extracted feature point data. For example, according to the distance relationship of each point in space, points of the same plane or points with distances within a threshold range are connected into a triangular network, and then the networks are spliced to construct a 3D model of the user.
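The "connect points into a triangular network" step above can be sketched as follows. This is a minimal grid triangulation over a dense depth map, assuming one vertex per pixel; a real implementation would also split or filter triangles at depth discontinuities, as the text describes:

```python
import numpy as np

def depth_map_to_mesh(depth, step=1):
    """Build a 3D mesh from a depth map: every pixel becomes a vertex
    (x, y, z), and each grid cell is split into two triangles, i.e. the
    points are connected into a triangular network."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3).astype(float)
    tris = []
    for r in range(0, h - step, step):
        for c in range(0, w - step, step):
            i = r * w + c          # top-left vertex of the cell
            j = i + step           # top-right
            k = (r + step) * w + c  # bottom-left
            m = k + step           # bottom-right
            tris.append((i, j, k))
            tris.append((j, m, k))
    return verts, np.array(tris)

# 2x2 depth map -> 4 vertices, one cell, two triangles
depth = np.array([[1.0, 1.1], [1.2, 1.3]])
verts, tris = depth_map_to_mesh(depth)
```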
Step 102, identifying contour information of each body part of the user from the 3D model.
The 3D model can intuitively display the body contour information of the user, so that, in the embodiment, after the 3D model of the user is constructed, the contour information of each body part of the user can be recognized from the 3D model, for example, the contour information of each facial organ, the contour information of the head, and the like of the user can be recognized.
Step 103, generating a target material corresponding to the body part by using the contour information of the body part for each body part.
In this embodiment, after the contour information of each body part of the user is recognized, the target material corresponding to the body part can be generated for each body part by using the recognized contour information.
It can be appreciated that the target material may fully or nearly fully embody the user's body part.
And 104, updating a material library of the game by using the target materials, wherein at least one material of each body part is stored in the material library.
In order to let the user quickly create an avatar in the game, a material library storing materials for each body part of the user can be established in advance. The library can be stored in the local memory of the terminal device, from which materials are selected when the user creates an avatar in the game; or it can be stored in the account of the game played by the user, from which the required materials are selected directly. Storing the library in the game account saves storage space on the terminal device and removes the dependence on a particular device: the user can log in to the same game account on different terminal devices to create or update the avatar, giving a better experience.
In the embodiment, after the target materials of all body parts are generated according to the contour information recognized from the 3D model, the material library of the game can be updated by using the target materials, so that the materials in the material library are richer, a user can create more virtual images, and the user experience is improved.
In the method for generating game materials of this embodiment, depth information used for creating the avatar of a user in a game is acquired, a 3D model of the user is constructed from the depth information, contour information of each body part of the user is identified from the 3D model, a target material corresponding to each body part is generated from its contour information, and the material library of the game is updated with the target materials. The user can therefore construct a personalized game character image from the materials in the library; since this image differs from those of other users, the user can conveniently and quickly distinguish his or her own character, and can also change it at any time as needed, which improves the user experience and the cohesion of the game. Because materials for each body part of the user are stored in the library, the character image the user constructs from them resembles the user's own appearance, so the user can easily recognize it, solving the prior-art problem that users find it hard to distinguish their own game character images.
In order to more clearly illustrate a specific implementation process of acquiring depth information for creating an avatar of a user in a game and constructing a 3D model of the user according to the depth information, another method for generating game materials is provided in the embodiments of the present invention. Fig. 4 is a flowchart illustrating a method for generating game materials according to another embodiment of the present invention.
As shown in fig. 4, on the basis of the embodiment shown in fig. 1, step 101 may include the following steps:
step 201, projecting structured light to a user.
The terminal device can be provided with a structured light projector. When a user starts a game with an avatar installed on the terminal device, such as Fight the Landlord or Legend of Blue Moon, the started game client can call the structured light projector, which projects structured light toward the user.
Step 202, collecting reflected light formed on the body of the user and forming depth information.
When the structured light projected by the projector in the terminal device reaches the user, each part of the body blocks the light, so it is reflected from the user's body. The reflected light can then be collected by a camera mounted in the terminal device, and the depth information is formed from the collected reflected light.
Further, the collected depth information may include background information in addition to the user. In that case the depth information can be denoised and smoothed to obtain an image of the area where the user is located, and the user can then be separated from the background image by foreground/background segmentation to obtain the depth information of the user alone.
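The patent does not prescribe a specific algorithm for this step; as one hedged sketch, a median filter can stand in for the denoising/smoothing and a depth threshold for the foreground/background segmentation (the user being assumed nearer to the camera than the background):

```python
import numpy as np

def extract_user_depth(depth, max_user_depth):
    """Smooth a raw depth map with a 3x3 median filter, then keep only
    pixels closer than max_user_depth as the user's foreground region."""
    padded = np.pad(depth, 1, mode='edge')
    smoothed = np.empty_like(depth, dtype=float)
    h, w = depth.shape
    for r in range(h):
        for c in range(w):
            smoothed[r, c] = np.median(padded[r:r + 3, c:c + 3])
    mask = smoothed < max_user_depth       # foreground = near pixels
    user = np.where(mask, smoothed, np.nan)  # background marked invalid
    return user, mask

# Toy map: left columns are the user (~0.8 m), right column is a far wall (5 m)
depth = np.array([[0.8, 0.8, 5.0], [0.8, 0.9, 5.0], [0.8, 0.8, 5.0]])
user, mask = extract_user_depth(depth, max_user_depth=2.0)
```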
Step 203, building a 3D model of the user according to the depth information.
In this embodiment, after the depth information of the user is obtained, the 3D model of the user may be further constructed according to the depth information.
Specifically, each feature point data for constructing the 3D model may be extracted from the depth information, and the feature points may be connected into a network according to the extracted feature point data. For example, according to the distance relationship of each point in space, points of the same plane or points with distances within a threshold range are connected into a triangular network, and then the networks are spliced, so that the 3D model of the user can be generated.
According to the method for generating the game material, the structured light is projected to the user, the reflected light formed on the body of the user is collected, the depth information is formed, the 3D model of the user is constructed according to the depth information, a foundation can be laid for identifying the contour information of each body part of the user and generating the corresponding target material by utilizing the contour information, and therefore authenticity and accuracy of the generated target material are guaranteed.
It should be noted that, as an example, the structured light adopted in the above embodiment may be non-uniform structured light, i.e. a speckle pattern or random dot pattern formed by a set of many light spots, produced by a diffractive optical element in the projection device disposed on the terminal.
FIG. 5 is a schematic diagram of non-uniform structured light in an embodiment of the present invention. As shown in fig. 5, the embodiment adopts non-uniform structured light: a randomly arranged speckle pattern, i.e. a set of many light spots dispersed non-uniformly. Because the speckle pattern occupies little storage space, running the projection device has little impact on the operating efficiency of the terminal device and saves terminal storage.
In addition, compared with other existing structured-light types, the hashed arrangement of the speckle pattern adopted in the embodiment of the invention reduces energy consumption, saves power and improves the battery endurance of the terminal.
In the embodiment of the invention, the projection device and the camera can be arranged in a terminal device such as a computer, a mobile phone or a palmtop computer. The projection device emits non-uniform structured light, i.e. a speckle pattern, toward the user. In particular, the speckle pattern may be formed using a diffractive optical element in the projection device: a number of relief grooves are provided on the element, and the irregular grooves generate an irregular speckle pattern. In embodiments of the present invention, the depth and number of the relief grooves may be set by an algorithm.
The projection device can be used for projecting a preset speckle pattern to the space where the measured object is located. The camera can be used for collecting the measured object with the projected speckle pattern so as to obtain a two-dimensional distorted image of the measured object with the speckle pattern.
In the embodiment of the invention, when the camera of the terminal is aimed at the user, the projection device in the terminal can project a preset speckle pattern into the space where the user is located. The pattern contains many scattered spots, and when it is projected onto the user's body surface, the spots are shifted by the various body parts that make up that surface. The camera of the terminal device collects the non-uniform structured light reflected by the user's body, yielding a two-dimensional distorted image of the user carrying the speckle pattern.
Further, image-data calculation is performed on the collected speckle image and a reference speckle image according to a predetermined algorithm, obtaining the movement distance of each spot (feature point) of the collected image relative to the corresponding reference spot (reference feature point). Finally, from this movement distance, the distance between the reference speckle image and the camera on the terminal, and the relative spacing between the projection device and the camera, the depth of each spot of the speckle infrared image is obtained by triangulation; this gives the depth information of the user, from which the 3D model of the user is reconstructed.
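The patent only says "trigonometry"; one common closed form for speckle-disparity triangulation (used, with varying sign conventions, in reference-plane systems) is 1/z = 1/z_ref − d/(b·F). The relation, the sign convention and all parameter values below are assumptions for illustration:

```python
def depth_from_shift(shift, z_ref, b, F):
    """Triangulate depth from the measured shift (disparity, in pixels) of a
    speckle relative to the reference pattern captured at known distance
    z_ref, with baseline b and focal length F (pixels). Sketch only: the
    sign convention depends on the projector/camera geometry."""
    return 1.0 / (1.0 / z_ref - shift / (b * F))

# A spot with zero shift lies exactly on the reference plane:
z = depth_from_shift(shift=0.0, z_ref=1.0, b=75.0, F=580.0)
```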
Fig. 6 is a flowchart illustrating a method for generating game materials according to another embodiment of the present invention.
As shown in fig. 6, the method of generating game materials may include the steps of:
step 301, obtaining depth information for creating an avatar of a user in a game, and constructing a 3D model of the user according to the depth information.
It should be noted that, in the embodiment of the present invention, for the description of step 301, reference may be made to the description of obtaining depth information and creating a 3D model according to the depth information in the foregoing embodiment, and the implementation principle is similar, and details are not described here again.
In step 302, feature points belonging to each body part are identified from the 3D model.
When the structured light projected to the user is a pattern formed by many light spots, for example a uniform spot pattern, a non-uniform speckle pattern or a random dot pattern, the constructed 3D model consists of many points representing the user's body information, and these points are the feature points. Thus, in this embodiment, the feature points belonging to each body part can be identified from the constructed 3D model.
Step 303, for each body part, constructing the body part and acquiring contour information of the body part based on the depth information of the feature points of the body part.
In this embodiment, after the feature points of each body part are identified from the 3D model, each body part may be further constructed based on the depth information corresponding to the feature points of the body part for each body part, and the contour information of the body part may be acquired.
Specifically, for each body part, the part can be constructed by arranging its constituent feature points according to their depth information. The feature points at the edge of the body part are then connected to obtain the contour information of the part.
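The patent only says "connect the feature points at the edge"; one simple concrete realisation (an assumption, suitable mainly for convex parts) is to take the convex hull of the part's feature points as its contour, sketched here with the monotone-chain algorithm:

```python
def contour_from_points(points):
    """Approximate a body part's contour as the convex hull of its feature
    points (Andrew's monotone chain); interior points are discarded and the
    hull vertices are returned in order, i.e. the edge points 'connected'."""
    pts = sorted(map(tuple, points))

    def half_hull(seq):
        h = []
        for p in seq:
            # pop while the last two hull points and p make a non-left turn
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h

    lower, upper = half_hull(pts), half_hull(pts[::-1])
    return lower[:-1] + upper[:-1]

# Unit square with one interior point: the hull keeps only the 4 corners
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
hull = contour_from_points(square)
```

For concave parts (e.g. an ear), an alpha-shape or boundary-tracing method would be needed instead; the hull is only the simplest stand-in.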
And step 304, acquiring pre-stored reference contour information of the body part for each body part.
The same body part may take different shapes. Eyebrows, for example, may be straight brows, sword brows, willow-leaf brows and so on, and each eyebrow shape naturally has different contour information. Therefore, the contour information of the types each body part may take can be stored in the memory in advance as reference contour information, so that the type of body part corresponding to given contour information can be determined accurately.
In a possible implementation manner of the embodiment of the present invention, the type of the body part may be identified according to the obtained contour information of the body part, and then the reference contour information corresponding to the type may be obtained from the pre-stored contour information.
For example, contour information corresponding to four face types, namely pointed ("melon-seed") face, oval ("goose-egg") face, round face and square face, is stored in the memory in advance. If the user's face is identified as oval from the contour information of the constructed face, the contour information of the oval face is fetched from the memory as the reference contour information.
Step 305, comparing the contour information with reference contour information.
And step 306, if the difference between the contour information and the reference contour information exceeds a preset threshold value, generating a target material according to the contour information.
The threshold is preset, and different thresholds can be set for different body parts. The value of the threshold can be set by developers during algorithm design; the invention does not limit its specific value. It should be understood that the smaller the threshold, the more target materials are generated, the more materials are stored in the material library, and the more choice the user has when creating or replacing the avatar.
In this embodiment, after the reference contour information of the body part is obtained, the contour information obtained from the constructed body part may be compared with the reference contour information, whether the difference between the contour information and the reference contour information exceeds a preset threshold value is determined, and if the difference exceeds the threshold value, the target material is generated according to the contour information.
As an example, the point with the largest contour difference between the contour information and the reference contour information may be selected, the depth information of that point obtained from both, and the difference between the two depth values calculated. The resulting difference is compared with the preset threshold, and when it is greater than the threshold, the target material is generated according to the contour information.
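A minimal sketch of this largest-difference check; the depth values and the threshold are assumed for illustration:

```python
import numpy as np

def exceeds_threshold(contour_depth, reference_depth, threshold):
    """Pick the point with the largest depth difference between measured
    and reference contours, and report whether it exceeds the threshold."""
    diffs = np.abs(contour_depth - reference_depth)
    idx = int(np.argmax(diffs))  # point of largest difference
    return bool(diffs[idx] > threshold), idx

contour = np.array([10.2, 11.0, 12.5, 11.8])    # measured depths (mm, assumed)
reference = np.array([10.0, 11.1, 11.6, 11.9])  # reference depths (mm, assumed)
over, idx = exceeds_threshold(contour, reference, threshold=0.5)
print(over, idx)  # → True 2: the 0.9 mm gap at index 2 exceeds 0.5 mm
```

When `over` is true, the measured contour is distinctive enough that a new target material is generated from it.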
And 307, determining a preset range corresponding to the difference according to the difference and the threshold.
In order to better adapt to the terminal devices and games used by users, the materials stored in the material library can be divided according to different precisions, a corresponding range can be set for each division, and different combinations of difference and threshold can be made to correspond to the different ranges.
Therefore, in this embodiment, the preset range corresponding to the difference may be determined according to the difference between the profile information and the reference profile information and a preset threshold.
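One way to map a difference/threshold pair to a precision range is to bucket the normalized difference; the range boundaries and labels below are hypothetical:

```python
def precision_range(difference, threshold, ranges):
    """Map the contour difference (normalized by the threshold) to one of
    the precision ranges configured for the material library."""
    ratio = difference / threshold
    for upper, label in ranges:
        if ratio <= upper:
            return label
    return ranges[-1][1]

# Hypothetical buckets: finer materials for differences close to the
# threshold, coarser ones for large differences.
ranges = [(1.5, "high_precision"),
          (3.0, "medium_precision"),
          (float("inf"), "low_precision")]
print(precision_range(0.9, threshold=0.5, ranges=ranges))  # 1.8 → medium_precision
```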
And 308, acquiring a first material and a standard material which belong to a preset range from a material library.
After the preset range corresponding to the difference is obtained, the first material and the standard material belonging to the preset range can be further obtained from the material library.
Step 309, the target material and the first material are compared with the standard material, respectively.
And step 310, replacing the first material with the target material and storing the target material into a material library if the difference between the target material and the standard material is smaller than the difference between the first material and the standard material.
In this embodiment, after the first material and the standard material belonging to the preset range are obtained from the material library, the target material may be compared with the standard material, and the first material may likewise be compared with the standard material, so as to determine the difference between the target material and the standard material and the difference between the first material and the standard material. If the difference between the target material and the standard material is smaller than the difference between the first material and the standard material, the target material replaces the first material and is stored in the material library, so as to ensure the accuracy of the materials stored in the material library.
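The replace-if-closer rule can be sketched as follows; the material descriptors, the key name, and the distance function are illustrative assumptions:

```python
def update_library(library, key, target, first, standard):
    """Replace the first material with the target material in the library
    only when the target is closer to the standard material."""
    def dist(a, b):  # toy L1 distance between material descriptors
        return sum(abs(x - y) for x, y in zip(a, b))
    if dist(target, standard) < dist(first, standard):
        library[key] = target
        return True
    return False

library = {"nose_high": (1.0, 2.0)}  # hypothetical existing first material
standard = (1.1, 2.1)
target = (1.1, 2.0)
replaced = update_library(library, "nose_high", target,
                          library["nose_high"], standard)
print(replaced, library["nose_high"])  # → True (1.1, 2.0)
```

Only the closer material survives, which is what keeps each precision bucket of the library accurate over time.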
The method for generating game materials of this embodiment ensures the accuracy of the acquired contour information by acquiring the depth information for creating the avatar of the user in the game, constructing the 3D model of the user according to the depth information, identifying the feature points belonging to each body part from the 3D model, and constructing each body part and acquiring its contour information based on the depth information of those feature points. Pre-stored reference contour information is acquired for each body part and compared with the contour information; when the difference between them exceeds a preset threshold, a target material is generated according to the contour information, so that a larger number of materials can be stored in the material library and the user's range of choice is widened. According to the difference and the threshold, the preset range corresponding to the difference is determined, the first material and the standard material belonging to the preset range are obtained from the material library, and the target material and the first material are each compared with the standard material. When the difference between the target material and the standard material is smaller than the difference between the first material and the standard material, the target material replaces the first material and is stored in the material library. This ensures the accuracy of the materials stored in the material library, allows the library to hold materials of different precisions, and meets the requirements of different terminal devices and games.
By the method for generating the game materials, the user can construct the personalized game character image by using the materials in the material library of the game, so that the game character image of the user is different from the game character images of other users, the user can conveniently and quickly distinguish the game character image of the user, the user can replace the game character image at any time by using the materials in the material library according to the requirement of the user, the user experience is improved, and the cohesion of the game is improved.
The invention also provides a device for generating the game materials.
Fig. 7 is a schematic structural diagram of an apparatus for generating game materials according to an embodiment of the present invention.
As shown in fig. 7, the apparatus for generating game materials includes: an acquisition module 710, an identification module 720, a generation module 730, and an update module 740.
an obtaining module 710, configured to obtain depth information for creating an avatar of a user in a game, and construct a 3D model of the user according to the depth information, where the depth information is generated after structured light is projected to the user.
And the identifying module 720 is used for identifying the contour information of each body part of the user from the 3D model.
The generating module 730 is configured to generate, for each body part, a target material corresponding to the body part by using the contour information of the body part.
And an updating module 740 for updating the material library of the game with the target material.
Further, the obtaining module 710 is specifically configured to: structured light is projected towards the user, reflected light formed on the user's body is collected and depth information is formed, and a 3D model of the user is constructed from the depth information.
As one example, the structured light may be non-uniform structured light. The non-uniform structured light is a speckle pattern or a random dot pattern formed by a set of a plurality of light spots, and is produced by a diffractive optical element arranged in a projection device on the terminal, wherein a number of reliefs with different groove depths are arranged on the diffractive optical element.
Optionally, in a possible implementation manner of the embodiment of the present invention, as shown in fig. 8, on the basis of the embodiment shown in fig. 7, the identifying module 720 may further include:
an identification unit 721 for identifying feature points belonging to each body part from the 3D model.
A first obtaining unit 722, configured to construct the body part and obtain contour information of the body part based on the depth information of the feature points of the body part for each body part.
The generating module 730 may include:
a second obtaining unit 731 for obtaining pre-stored reference contour information of the body part.
Specifically, the second obtaining unit 731 is configured to identify a type corresponding to the body part, and obtain reference contour information corresponding to the type.
The comparing unit 732 compares the contour information with reference contour information.
The generation unit 733 generates the target material from the contour information when the difference between the contour information and the reference contour information exceeds a preset threshold.
Further, the update module 740 is specifically configured to: determine a preset range corresponding to the difference according to the difference and the threshold; acquire a first material and a standard material belonging to the preset range from the material library; compare the target material and the first material with the standard material, respectively; and, if the difference between the target material and the standard material is smaller than that between the first material and the standard material, replace the first material with the target material and store the target material in the material library.
It should be noted that the foregoing explanation of the embodiment of the method for generating game materials is also applicable to the apparatus for generating game materials of this embodiment, and the implementation principle is similar, and will not be described herein again.
The apparatus for generating game materials of this embodiment acquires depth information for creating an avatar of a user in a game, constructs a 3D model of the user according to the depth information, identifies contour information of each body part of the user from the 3D model, generates for each body part a target material corresponding to that part by using its contour information, and updates the material library of the game with the target materials. The user can therefore use the materials in the material library of the game to construct a personalized game character image that differs from those of other users, conveniently and quickly distinguish his or her own game character image, and change the game character image at any time with materials from the library according to his or her own needs, which improves the user experience and the cohesion of the game. Because materials for each body part of the user are stored in the game's material library, the user can construct a personalized game character image that resembles the user's own appearance, making the user's own game character image easy to distinguish and solving the technical problem in the prior art that users have difficulty distinguishing their own game character images.
The present invention also contemplates one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the method of generating game material described in the foregoing embodiments.
The invention also provides a terminal. The terminal includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. Fig. 9 is a schematic structural diagram of an image processing circuit in a terminal according to an embodiment of the present invention. As shown in fig. 9, for convenience of explanation, only aspects of the image processing technique related to the embodiment of the present invention are shown.
As shown in fig. 9, the image processing circuit 80 includes an imaging device 810, an ISP processor 830 and control logic 840. The imaging device 810 may include a camera with one or more lenses 812, an image sensor 814, and a structured light projector 816. The structured light projector 816 projects structured light to the object to be measured. The structured light pattern may be a laser stripe, a gray code, a sinusoidal stripe, or a randomly arranged speckle pattern. The image sensor 814 captures a structured light image projected onto the object to be measured and transmits it to the ISP processor 830, and the ISP processor 830 demodulates the structured light image to obtain depth information of the object to be measured. Meanwhile, the image sensor 814 may also capture color information of the measured object. Alternatively, two image sensors 814 may capture the structured light image and the color information of the measured object, respectively.
Taking speckle structured light as an example, the ISP processor 830 demodulates the structured light image by acquiring a speckle image of the measured object from the structured light image and performing image data calculation on the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, thereby obtaining the moving distance of each speckle point of the speckle image on the measured object relative to the corresponding reference speckle point in the reference speckle image. The depth value of each speckle point of the speckle image is then obtained by triangulation, and the depth information of the measured object is obtained from these depth values.
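In the simplest triangulation model, the depth of a speckle point is inversely proportional to its pixel disparity relative to the reference position. The baseline and focal length below are assumed example values, not the patent's:

```python
def depth_from_disparity(disparity_px, baseline_mm, focal_px):
    """Simplified triangulation: depth = baseline * focal_length / disparity.
    disparity_px is the pixel shift of a speckle versus its reference spot."""
    return baseline_mm * focal_px / disparity_px

# Assumed camera geometry: 75 mm projector-camera baseline, 580 px focal length.
print(depth_from_disparity(disparity_px=87.0, baseline_mm=75.0, focal_px=580.0))
# → 500.0 (mm)
```

Real structured-light systems additionally calibrate against the known depth of the reference plane, but the inverse relation between disparity and depth is the same.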
Of course, the depth information may also be acquired by binocular vision or by a time-of-flight (TOF) based method; the method is not limited here, and any method that can acquire or calculate the depth information of the object to be measured falls within the scope of this embodiment.
After the ISP processor 830 receives the color information of the object to be measured captured by the image sensor 814, the image data corresponding to the color information of the object to be measured may be processed. ISP processor 830 analyzes the image data to obtain image statistics that may be used to determine one or more control parameters of imaging device 810. The image sensor 814 may include an array of color filters (e.g., Bayer filters), and the image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 814 and provide a set of raw image data that may be processed by the ISP processor 830.
The ISP processor 830 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 830 may perform one or more image processing operations on the raw image data, collecting image statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 830 may also receive pixel data from image memory 820. The image Memory 820 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, ISP processor 830 may perform one or more image processing operations.
After the ISP processor 830 obtains the color information and the depth information of the object to be measured, they can be fused to obtain a three-dimensional image. Features of the corresponding object can be extracted by at least one of an appearance contour extraction method or a contour feature extraction method, for example by the active shape model (ASM), active appearance model (AAM), principal component analysis (PCA), or discrete cosine transform (DCT) methods, which are not limited here. The features of the measured object extracted from the depth information and those extracted from the color information are then subjected to registration and feature fusion processing. The fusion processing may directly combine the features extracted from the depth information and the color information, combine the same features in different images after weight setting, or generate the three-dimensional image from the fused features in another fusion mode.
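The weighted variant of the fusion step could look like the following; the feature vectors and the weights are assumptions for illustration:

```python
import numpy as np

def fuse_features(depth_feat, color_feat, w_depth=0.6, w_color=0.4):
    """Weighted combination of registered depth and color feature vectors,
    one of the fusion strategies mentioned above (weights are assumed)."""
    return w_depth * depth_feat + w_color * color_feat

fused = fuse_features(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(fused)  # → [0.6 0.4]
```

Direct concatenation of the two feature vectors would be the simpler, unweighted alternative the text also mentions.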
The image data for the three-dimensional image may be sent to the image memory 820 for additional processing before being displayed. ISP processor 830 receives processed data from image memory 820 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data for the three-dimensional image may be output to a display 860 for viewing by a user and/or for further processing by a graphics processing unit (GPU). Further, the output of the ISP processor 830 may also be sent to the image memory 820, and the display 860 may read image data from the image memory 820. In one embodiment, image memory 820 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 830 may be transmitted to the encoder/decoder 850 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being shown on the display 860. The encoder/decoder 850 may be implemented by a CPU, GPU, or coprocessor.
The image statistics determined by ISP processor 830 may be sent to control logic 840 unit. Control logic 840 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 810 based on received image statistics.
The following steps are the steps of implementing the method for generating game materials by using the image processing technology in fig. 9:
step 101', obtaining depth information for creating an avatar of a user in a game, and constructing a 3D model of the user according to the depth information, wherein the depth information is generated after projecting structured light to the user.
In step 102', contour information for each body part of the user is identified from the 3D model.
In step 103', for each body part, a target material corresponding to the body part is generated using the contour information of the body part.
And 104', updating a material library of the game by using the target materials, wherein at least one material of each body part is stored in the material library.
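Steps 102'–104' can be tied together in a sketch like the following; the per-part contour layout, the largest-difference metric, and the threshold are assumptions:

```python
def generate_game_materials(contours, references, library, threshold=0.5):
    """For each body part, compare the measured contour against its reference
    and add a target material to the library when the largest point-wise
    difference exceeds the threshold (steps 102'-104', sketched)."""
    for part, contour in contours.items():
        ref = references[part]
        diff = max(abs(c - r) for c, r in zip(contour, ref))
        if diff > threshold:
            library.setdefault(part, []).append(tuple(contour))
    return library

contours = {"eyebrow": [1.0, 2.0, 3.0], "nose": [0.5, 0.6, 0.7]}
references = {"eyebrow": [1.0, 2.1, 2.2], "nose": [0.5, 0.6, 0.7]}
library = {}
print(generate_game_materials(contours, references, library))
# → {'eyebrow': [(1.0, 2.0, 3.0)]} — only the eyebrow differs enough
```

Step 101' (projecting structured light and building the 3D model) precedes this and supplies the per-part contours.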
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (8)

1. A method of generating game material, comprising:
acquiring depth information used for creating an avatar of a user in a game, and constructing a 3D model of the user according to the depth information, wherein the depth information is generated after structured light is projected to the user;
identifying contour information of each body part of the user from the 3D model;
for each body part, acquiring pre-stored reference contour information of the body part, comparing the contour information with the reference contour information, and generating target materials corresponding to the body part according to the contour information when the difference between the contour information and the reference contour information exceeds a preset threshold value, wherein the threshold value is related to the number of the generated target materials, and the smaller the threshold value is, the more the generated target materials are;
determining a preset range corresponding to the difference according to the difference and the threshold, wherein different differences and thresholds are set to correspond to different preset ranges for materials with different precisions stored in a material library of the game;
acquiring a first material and a standard material which belong to the preset range from the material library;
comparing the target material and the first material with the standard material respectively;
if the difference between the target material and the standard material is smaller than the difference between the first material and the standard material, replacing the first material with the target material and storing the target material into the material library; wherein the material library stores at least one material of each body part.
2. The method of claim 1, wherein said identifying contour information for each body part of the user from the 3D model comprises:
identifying feature points belonging to each body part from the 3D model;
for each body part, constructing the body part and acquiring contour information of the body part based on depth information of feature points of the body part.
3. The method according to claim 1, wherein the obtaining of the pre-stored reference contour information of the body part comprises:
identifying a type corresponding to the body part;
and acquiring the reference contour information corresponding to the type.
4. The method of any of claims 1-3, wherein said obtaining the depth information for creating an avatar of a user in a game comprises:
projecting structured light towards the user;
collecting reflected light formed on the user's body and forming the depth information.
5. A method according to any one of claims 1-3, wherein the structured light is non-uniform structured light, which is a speckle pattern or a random dot pattern consisting of a collection of a plurality of light spots, formed by a diffractive optical element arranged in a projection device on the terminal, wherein the diffractive optical element is provided with a number of reliefs having different groove depths.
6. An apparatus for generating game material, comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring depth information used for creating an avatar of a user in a game and constructing a 3D model of the user according to the depth information, and the depth information is generated after structured light is projected to the user;
an identification module for identifying contour information of each body part of the user from the 3D model;
the generating module is used for acquiring pre-stored reference contour information of the body part for each body part, comparing the contour information with the reference contour information, and generating target materials corresponding to the body part according to the contour information when the difference between the contour information and the reference contour information exceeds a preset threshold value, wherein the threshold value is related to the number of the generated target materials, and the smaller the threshold value is, the more the generated target materials are;
the updating module is used for determining a preset range corresponding to the difference according to the difference and the threshold, wherein different differences and thresholds are set to correspond to different preset ranges for materials with different precisions stored in a material library of the game; acquiring a first material and a standard material which belong to the preset range from the material library; comparing the target material and the first material with the standard material respectively; and if the difference between the target material and the standard material is smaller than that between the first material and the standard material, replacing the first material with the target material and storing the target material into the material library.
7. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform a method of generating game material as recited in any of claims 1-5.
8. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform a method of generating game material according to any one of claims 1 to 5.
CN201710676383.8A 2017-08-09 2017-08-09 Method and device for generating game material Active CN107622522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710676383.8A CN107622522B (en) 2017-08-09 2017-08-09 Method and device for generating game material

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710676383.8A CN107622522B (en) 2017-08-09 2017-08-09 Method and device for generating game material

Publications (2)

Publication Number Publication Date
CN107622522A CN107622522A (en) 2018-01-23
CN107622522B true CN107622522B (en) 2021-02-05

Family

ID=61088109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710676383.8A Active CN107622522B (en) 2017-08-09 2017-08-09 Method and device for generating game material

Country Status (1)

Country Link
CN (1) CN107622522B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898650B (en) * 2018-06-15 2022-11-15 Oppo广东移动通信有限公司 Human-shaped material creating method and related device
CN111598579B (en) * 2019-02-02 2023-05-02 阿里巴巴集团控股有限公司 Commodity content processing method, commodity content processing device, storage medium and processor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504774A (en) * 2009-03-06 2009-08-12 暨南大学 Animation design engine based on virtual reality
CN104679831A (en) * 2015-02-04 2015-06-03 腾讯科技(深圳)有限公司 Method and device for matching human model
CN104899825A (en) * 2014-03-06 2015-09-09 腾讯科技(深圳)有限公司 Method and device for modeling picture figure

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9779508B2 (en) * 2014-03-26 2017-10-03 Microsoft Technology Licensing, Llc Real-time three-dimensional reconstruction of a scene from a single camera


Similar Documents

Publication Publication Date Title
CN107481304B (en) Method and device for constructing virtual image in game scene
CN107480613B (en) Face recognition method and device, mobile terminal and computer readable storage medium
CN107452034B (en) Image processing method and device
CN107479801B (en) Terminal display method and device based on user expression and terminal
CN107368730B (en) Unlocking verification method and device
CN107563304B (en) Terminal equipment unlocking method and device and terminal equipment
US11503228B2 (en) Image processing method, image processing apparatus and computer readable storage medium
CN107592449B (en) Three-dimensional model establishing method and device and mobile terminal
CN107734267B (en) Image processing method and device
CN107463659B (en) Object searching method and device
CN107481101B (en) Dressing recommendation method and device
KR101055411B1 (en) Method and apparatus of generating stereoscopic image
CN107465906B (en) Panorama shooting method, device and the terminal device of scene
CN107564050B (en) Control method and device based on structured light and terminal equipment
CN107491744B (en) Human body identity recognition method and device, mobile terminal and storage medium
CN107493428A (en) Filming control method and device
CN107610080B (en) Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
CN107610171B (en) Image processing method and device
CN107480615B (en) Beauty treatment method and device and mobile equipment
CN107491675B (en) Information security processing method and device and terminal
CN107734264B (en) Image processing method and device
CN107705278B (en) Dynamic effect adding method and terminal equipment
CN107438161A (en) Shooting picture processing method, device and terminal
CN107592491B (en) Video communication background display method and device
CN107613239B (en) Video communication background display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

GR01 Patent grant