CN111462309A - Three-dimensional human head modeling method and device, terminal equipment and storage medium - Google Patents

Three-dimensional human head modeling method and device, terminal equipment and storage medium

Info

Publication number
CN111462309A
CN111462309A (application CN202010243821.3A)
Authority
CN
China
Prior art keywords
human head
gray code
target user
code pattern
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010243821.3A
Other languages
Chinese (zh)
Other versions
CN111462309B (en)
Inventor
王心君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen New Mirror Media Network Co ltd
Original Assignee
Shenzhen New Mirror Media Network Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen New Mirror Media Network Co ltd
Priority to CN202010243821.3A
Publication of CN111462309A
Application granted
Publication of CN111462309B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present application is applicable to the field of computer technology and provides a three-dimensional human head modeling method and device, a terminal device, and a storage medium, the method comprising the following steps: acquiring construction surfaces corresponding to a target user in a plurality of preset directions and the RGB images corresponding to the construction surfaces, wherein the RGB image for each preset direction contains the color value of each coordinate point in the head point cloud for that direction; stitching the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user; and color-filling the initial human head model according to the color values in the RGB images to obtain the three-dimensional human head model of the target user. Incomplete models caused by missing or flying points are avoided, model precision is improved, and the three-dimensional human head model has a better display effect.

Description

Three-dimensional human head modeling method and device, terminal equipment and storage medium
Technical Field
The present application belongs to the field of computer technology, and in particular relates to a three-dimensional human head modeling method and device, a terminal device, and a storage medium.
Background
With the development of technology, three-dimensional human head models have found very wide application, for example in virtual try-on of glasses. At present, a three-dimensional human head is mainly built by rotating the head or rotating a camera to obtain image information of the head at different angles and establishing a head model from that image information; however, because image information at some angles may be missing, the resulting head model may be incomplete and its precision poor.
Disclosure of Invention
Embodiments of the present application provide a three-dimensional human head modeling method and device, a terminal device, and a storage medium, which can solve the problem of poor precision in three-dimensional human head models.
In a first aspect, an embodiment of the present application provides a method for modeling a three-dimensional human head, including:
acquiring construction surfaces corresponding to a target user in a plurality of preset directions and the RGB images respectively corresponding to the construction surfaces, wherein the construction surfaces are formed from head point clouds of the target user, and the RGB image for each preset direction contains the color value of every coordinate point in the head point cloud for that direction;
stitching the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
and color-filling the initial human head model according to the color values that the RGB images assign to the coordinate points in the human head point cloud, to obtain the three-dimensional human head model of the target user.
In the embodiments of the present application, acquiring the head point clouds forming the construction surfaces and the corresponding RGB images of the target user in a plurality of preset directions determines the position of the head in three-dimensional space and realizes spatial positioning of the three-dimensional head, which avoids incomplete models caused by missing or flying points and improves model precision. Stitching the construction surfaces in the preset directions yields an initial human head model of the target user, and color-filling the initial model according to the color values of the pixels in the RGB images yields a three-dimensional human head model carrying RGB information, giving a better display effect.
In a second aspect, an embodiment of the present application provides a three-dimensional human head modeling apparatus, including:
an acquisition module, configured to acquire construction surfaces corresponding to a target user in a plurality of preset directions and the RGB images respectively corresponding to the construction surfaces, wherein the construction surfaces are formed from head point clouds of the target user, and the RGB image for each preset direction contains the color value of every coordinate point in the head point cloud for that direction;
a stitching module, configured to stitch the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
and a filling module, configured to color-fill the initial human head model according to the color values that the RGB images assign to the coordinate points in the human head point cloud, to obtain the three-dimensional human head model of the target user.
In a third aspect, an embodiment of the present application provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the three-dimensional human head modeling method according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the three-dimensional human head modeling method according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the three-dimensional human head modeling method according to any one of the first aspect.
It can be understood that, for the beneficial effects of the second to fifth aspects, reference may be made to the related description of the first aspect; details are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a method for modeling a three-dimensional human head according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of a method for modeling a three-dimensional human head according to another embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a method for modeling a three-dimensional human head according to another embodiment of the present application;
FIG. 4 is a schematic diagram of a Gray code pattern provided by an embodiment of the present application;
FIG. 5 is a schematic view of an acquisition assembly provided in another embodiment of the present application;
FIG. 6 is a schematic structural diagram of a modeling apparatus for a three-dimensional human head provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
As described in the background, a three-dimensional human head is currently modeled mainly by acquiring head information with a rotating camera, and the accuracy of a model built this way is poor. Obtaining head information at different angles by rotating the head or the camera not only takes a large amount of time, but the resulting model also usually suffers from missing points or flying points, leaving parts of the model such as the back of the head and the crown incomplete.
The three-dimensional human head modeling method of the present application therefore determines the position of the head in three-dimensional space, realizing spatial positioning of the three-dimensional head, avoiding incomplete models caused by missing or flying points, and improving model precision; color-filling the initial human head model yields a three-dimensional human head model with RGB information and a better display effect.
Fig. 1 shows a schematic flowchart of the three-dimensional human head modeling method provided in the present application. By way of example and not limitation, the method may be applied to terminal devices including mobile phones, tablet computers, wearable devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), servers, and the like; the embodiments of the present application place no limit on the specific type of the terminal device.
S101, acquiring construction surfaces corresponding to a target user in a plurality of preset directions and the RGB images respectively corresponding to the construction surfaces, wherein the construction surfaces are formed from head point clouds of the target user, and the RGB image for each preset direction contains the color value of every coordinate point in the head point cloud for that direction;
In S101, the preset directions are directions that together ensure complete coverage of the target user's head; taking the target user's face as the front, the preset directions may include front, top, left, and right. It should be understood that in other embodiments the plurality of preset directions may also be other combinations of directions. A construction surface is a surface formed by a plurality of adjacent coordinate points in the human head point cloud, such as a triangular face formed by three adjacent coordinate points or a quadrilateral face formed by four adjacent coordinate points. The human head point cloud is the set of coordinate points of the target user's head feature points in three-dimensional space, and the head point cloud in each preset direction corresponds to at least one RGB image.
In one embodiment, the head point cloud of the target user may be acquired with a three-dimensional scanner in the plurality of preset directions, and the RGB images of the head acquired with a camera. In another embodiment, a projector and cameras with fixed positions may be installed in each preset direction: the projector projects a sequence of Gray code patterns onto the head of the target user (Fig. 4 shows one Gray code pattern); a plurality of cameras capture head images bearing the Gray code patterns; the coordinate point in three-dimensional space of the head feature point corresponding to each pixel of the Gray code pattern is determined from the deformation of the pattern; the head point cloud is formed from these coordinate points; and in each preset direction a camera also captures a head image carrying the RGB information. A sketch of such a pattern sequence follows.
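As an illustration of the kind of stripe sequence such a projector could emit, the sketch below generates vertical Gray code patterns, one image per code bit; the resolution values and function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def gray_code_patterns(width=1280, height=800):
    """Generate vertical Gray code stripe patterns (one image per bit).

    Column index c is encoded as gray = c ^ (c >> 1); pattern k renders
    bit k of that Gray code as a white (255) or black (0) stripe.
    """
    n_bits = int(np.ceil(np.log2(width)))
    columns = np.arange(width)
    gray = columns ^ (columns >> 1)            # binary-reflected Gray code
    patterns = []
    for k in range(n_bits - 1, -1, -1):        # most significant bit first
        stripe = ((gray >> k) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))  # (height, width) image
    return patterns
```

A matching sequence of horizontal patterns encoding the row number would be generated and projected the same way.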
Furthermore, because the Gray code pattern and the RGB image are acquired by the same fixed camera at the same resolution, the Gray code pattern and the RGB image share the same pixel grid, and the neighborhood relationships of the pixels are preserved. The coordinate point of the head point cloud corresponding to each pixel is determined from the Gray code patterns, so the pixels of the RGB image and the coordinate points of the head point cloud are in correspondence, which in turn determines the neighborhood relationships between coordinate points. Therefore, to reduce the amount of computation and to facilitate the subsequent construction of the triangular faces of the initial human head model and its color filling, the correspondence between the pixels of the RGB image and the coordinate points of the head point cloud is stored.
In this embodiment, acquiring the head point clouds in a plurality of preset directions positions the head feature points in three-dimensional space and avoids the problem of missing model points; acquiring the RGB images gives the three-dimensional head color information and a better display effect.
S102, stitching the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
In S102, stitching connects the coordinate points of the construction surfaces along their boundaries, including but not limited to coordinate point connection, coordinate point deduplication, and coordinate point fusion. The initial human head model is the mesh formed by stitching the several head point clouds into one complete head point cloud.
Optionally, detect whether a coordinate point on the head point cloud boundary in one preset direction coincides in position with a coordinate point on the head point cloud boundary in an adjacent preset direction. If so, compute the mean of the coincident coordinate points to obtain a mean coordinate point, delete the coincident coordinate points, and use the mean coordinate point as the new coordinate point at that position; if no coincident coordinate points exist, connect the adjacent coordinate points on the two boundaries. It should be understood that in this embodiment two or more coordinate points are considered coincident if the distance between them is within a preset distance threshold. A sketch of this merge follows.
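A minimal sketch of this optional boundary merge, assuming the boundary points of two adjacent point clouds are already in a common coordinate frame; the threshold value and function name are illustrative assumptions.

```python
import numpy as np

def merge_boundaries(boundary_a, boundary_b, threshold=1e-3):
    """Fuse coincident boundary points of two adjacent head point clouds.

    boundary_a: (N, 3) boundary points of one direction's point cloud.
    boundary_b: (M, 3) boundary points of the adjacent direction's cloud.
    Points closer than `threshold` are replaced by their mean coordinate
    point; the remaining points are kept and later joined by new faces.
    """
    merged, used_b = [], set()
    for pa in boundary_a:
        dists = np.linalg.norm(boundary_b - pa, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < threshold and j not in used_b:
            merged.append((pa + boundary_b[j]) / 2.0)  # mean coordinate point
            used_b.add(j)
        else:
            merged.append(pa)                          # no coincident partner
    # boundary_b points that were not fused remain as their own vertices
    leftovers = [pb for j, pb in enumerate(boundary_b) if j not in used_b]
    return np.array(merged + leftovers)
```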
In this embodiment, stitching the head point clouds removes coincident coordinate points and fuses coordinate points, which reduces the flying-point problem of the model and improves model precision.
S103, color-filling the initial human head model according to the color values in the RGB images to obtain the three-dimensional human head model of the target user.
In S103, the color value of each coordinate point of the initial head model is obtained from the stored correspondence between the pixels of the RGB image and the coordinate points of the head point cloud; the initial head model is then filled according to the color value of each of its coordinate points to obtain the three-dimensional head model of the target user, as sketched below.
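A sketch of how such a stored pixel-to-vertex correspondence could drive the color filling; the data layout is an assumption for illustration, not specified by the patent.

```python
import numpy as np

def color_fill(pixel_of_vertex, rgb_image):
    """Assign each model vertex the RGB value of its corresponding pixel.

    pixel_of_vertex: (V, 2) int array of the stored (row, col) pixel of each
                     vertex, i.e. the correspondence saved when the Gray
                     codes were decoded.
    rgb_image:       (H, W, 3) uint8 image from the same fixed camera.
    Returns (V, 3) per-vertex colors.
    """
    rows, cols = pixel_of_vertex[:, 0], pixel_of_vertex[:, 1]
    return rgb_image[rows, cols]
```

Because the Gray code pattern and the RGB image share one pixel grid, the lookup is a direct index rather than a reprojection, which is exactly the computational saving this embodiment points out.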
Optionally, if the Gray code patterns on the head are captured by a plurality of cameras, color filling of the coordinate points of the initial human head model may be performed according to the three-dimensional coordinate points of the head feature points corresponding to each pixel of the Gray code patterns, as determined in step S101, together with the correspondence between the pixels of the Gray code patterns and the pixels of the RGB image.
In this embodiment, the color value of each coordinate point of the initial human head model is determined directly from the correspondence between the pixels of the RGB image and the coordinate points of the head point cloud, so no complicated positional computation is required and the computational load is reduced.
Fig. 2 shows a schematic flowchart of another three-dimensional human head modeling method provided in an embodiment of the present application. It should be noted that steps identical to those in Fig. 1 are not described again here.
In a possible implementation manner, the foregoing S101 includes S1011 and S1012:
S1011, acquiring head point clouds corresponding to the target user in a plurality of preset directions and the RGB images respectively corresponding to them, wherein each head point cloud comprises a plurality of coordinate points with neighborhood relationships;
S1012, constructing the construction surfaces corresponding to each preset direction according to the neighborhood relationships of the coordinate points of the head point cloud in that direction.
In S1011 and S1012, a neighborhood relationship is the positional adjacency between a coordinate point and the other coordinate points, and the neighborhood relationship of each coordinate point may be determined from the coordinates of all the points. A construction surface may be a triangular face formed by three adjacent coordinate points or a quadrilateral face formed by four adjacent coordinate points; a sketch of face construction follows.
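Because each coordinate point inherits the grid neighborhood of its camera pixel, triangular faces can be built by splitting every 2x2 cell of successfully decoded pixels into two triangles. A sketch under that assumption (the array name is illustrative):

```python
import numpy as np

def build_faces(vertex_index):
    """Build triangular faces from a per-pixel vertex index map.

    vertex_index: (H, W) int array; vertex_index[r, c] is the index of the
    point-cloud vertex decoded at pixel (r, c), or -1 where decoding failed.
    Every 2x2 cell of valid pixels yields two triangular faces.
    """
    faces = []
    h, w = vertex_index.shape
    for r in range(h - 1):
        for c in range(w - 1):
            a, b = vertex_index[r, c], vertex_index[r, c + 1]
            d, e = vertex_index[r + 1, c], vertex_index[r + 1, c + 1]
            if min(a, b, d, e) >= 0:          # all four neighbors decoded
                faces.append((a, b, d))       # upper-left triangle
                faces.append((b, e, d))       # lower-right triangle
    return np.array(faces)
```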
Optionally, the step S1011 includes steps S201 to S203:
S201, collecting the Gray code patterns projected onto the head of the target user from a plurality of preset directions, and acquiring RGB images of the head of the target user;
In S201, the Gray code pattern may be projected onto the head of the target user by a projector, and the Gray code pattern and the RGB image on the head may be captured by a camera. Optionally, the projector moves along a preset trajectory and projects the Gray code pattern while moving, and the camera follows the same trajectory as the projector, capturing the Gray code pattern and the RGB image.
Optionally, the S201 specifically includes S2011 to S2013:
S2011, projecting a preset sequence of Gray code patterns onto the head of the target user from a plurality of preset directions;
S2012, in each preset direction, capturing the sequence of Gray code patterns on the head of the target user with two camera devices, one arranged on each side of the projection direction of the Gray code patterns;
S2013, capturing RGB images of the head of the target user in each preset direction.
In S2011 to S2013, the continuous Gray code patterns are a plurality of Gray code patterns arranged in time sequence. To avoid missing points and flying points and to improve model precision, as shown in Fig. 5, projector 01 and cameras 02 and 03 are combined into one acquisition assembly, with the projector at the midpoint of the line connecting the two cameras. One acquisition assembly is fixedly installed in each preset direction: directly in front of, directly above, directly to the left of, and directly to the right of the head. In each preset direction, the projector projects the sequence of Gray code patterns and the two cameras capture the Gray code patterns and the RGB images.
Further, to ensure that the Gray code patterns projected by the projector have been captured by the cameras, the projector is controlled to stop projecting after a preset delay, and the cameras are then controlled to capture the RGB images of the head.
In this embodiment, the Gray code patterns are projected and captured in a plurality of preset directions, so that every angle of the head is completely covered and the problems of missing points and flying points are avoided.
S202, determining the coordinate points of the head in three-dimensional space according to the Gray code patterns in each preset direction;
In S202, the Gray code pattern is a time-domain coded pattern. When the sequence of Gray code patterns is projected onto the surface of an object, the patterns deform with the depth variation of the surface; the position number obtained by decoding the Gray code at a given pattern position, however, does not change. For example, if a pixel of the pattern before projection has position number row i, column j, encoding yields a Gray code pattern containing the Gray code a corresponding to that position number. When this pattern is projected onto the head, it deforms because the head surface is uneven; the camera captures the deformed pattern (i.e., the captured pattern differs from the pattern before projection), and decoding the deformed pattern recovers row i, column j as the position number corresponding to Gray code a, so the position of the pixel with position number row i, column j on the deformed Gray code pattern can be determined. The coordinate point of the corresponding pixel in three-dimensional space is then determined from the deformation of two or more Gray code patterns.
Optionally, for each preset direction, the Gray code pattern acquired by one camera device is taken as a first Gray code pattern and the Gray code pattern acquired by the other camera device as a second Gray code pattern, and S202 specifically includes S2021 to S2023:
S2021, determining, for the first Gray code pattern and the second Gray code pattern in each preset direction, the corresponding homonymous pixel point groups in that direction, where each homonymous pixel point group comprises a first pixel point in the first Gray code pattern and a second pixel point in the second Gray code pattern;
S2022, for each homonymous pixel point group in each preset direction, determining the intersection point in three-dimensional space of a first line and a second line, where the first line connects the optical center of the camera device that captured the first Gray code pattern with the first pixel point of the group, and the second line connects the optical center of the camera device that captured the second Gray code pattern with the second pixel point of the group;
S2023, taking each intersection point so obtained as a coordinate point of the head in three-dimensional space.
In S2021 to S2023, the intrinsic and extrinsic parameters and the position-orientation matrix of each camera may be calibrated with Zhang's calibration method, and the coordinate points determined by triangulation. Specifically, one line is drawn in three-dimensional space from the optical center of one camera through pixel point p in its imaging plane, and another line from the optical center of the other camera through pixel point q in its imaging plane; the two lines either intersect in three-dimensional space, or the coordinate point of the pixel in three-dimensional space is obtained by the closest-intersection method. Here p and q are homonymous pixel points, i.e., the pixels at which the two cameras capture the same light point projected by the projector. Further, the three-dimensional coordinate points of all pixel points are solved iteratively over the homonymous pixel point groups, forming the three-dimensional head point cloud in each preset direction; a triangulation sketch follows.
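A minimal sketch of the closest-intersection (midpoint triangulation) step for one homonymous pixel pair, assuming the viewing rays are already expressed in a common world frame after calibration; the variable names are illustrative, not from the patent.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest point between two rays x = c + t*d (midpoint triangulation).

    c1, c2: (3,) optical centers of the two cameras in world coordinates.
    d1, d2: (3,) unit direction vectors through pixel points p and q.
    Returns the midpoint of the shortest segment between the two rays,
    i.e. the recovered 3D coordinate point of the head feature.
    """
    b = c2 - c1
    # Normal equations of min ||(c1 + t1*d1) - (c2 + t2*d2)||^2:
    #   t1*(d1.d1) - t2*(d1.d2) = b.d1
    #   t1*(d1.d2) - t2*(d2.d2) = b.d2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([b @ d1, b @ d2]))
    p1 = c1 + t1 * d1          # closest point on ray 1
    p2 = c2 + t2 * d2          # closest point on ray 2
    return (p1 + p2) / 2.0     # midpoint = triangulated coordinate point
```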
Further, the above S2021 may further include S20211 to S20213:
S20211, decoding (reverse-encoding) the first Gray code pattern and the second Gray code pattern in each preset direction to obtain a first position number for each pixel point of the first Gray code pattern and a second position number for each pixel point of the second Gray code pattern;
S20212, for each preset direction, matching the first position numbers with the second position numbers;
S20213, taking the pixel points corresponding to a matching first position number and second position number as the first pixel point and the second pixel point, forming one homonymous pixel point group in that preset direction.
In S20211 to S20213, assuming the projector resolution is a × b, each pixel of the projected pattern is converted into a Gray code: for the pixel at row i, column j, the decimal row number i is converted into an n-bit binary value and the decimal column number j into an m-bit binary value, and each binary value is then converted into its Gray code form, yielding the Gray codes of all pixels of the pattern, as in the sketch below.
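The binary-to-Gray conversion and its inverse (the reverse encoding of S20211) can be sketched as follows; the function names are illustrative.

```python
def to_gray(n: int) -> int:
    """Convert a binary position number into its Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Decode a Gray code back into the original position number."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# e.g. the pixel at row i = 5, column j = 12:
# to_gray(5) == 7 and to_gray(12) == 10, and from_gray recovers 5 and 12,
# so the position number survives the projection/deformation/decoding
# round trip even though the pattern itself is deformed on the head.
```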
When the two cameras capture the Gray code patterns, the first Gray code pattern and the second Gray code pattern differ because of the cameras' different viewing angles, but the Gray code carried at each pixel point of the patterns is unchanged. The Gray code of each pixel in the captured patterns is therefore decoded into a position number through reverse encoding, and this position number is the position of the pixel on the pattern as projected by the projector. When a position number in the first Gray code pattern matches a position number in the second Gray code pattern, the corresponding pixel points of the two patterns are homonymous pixel points; a matching sketch follows.
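Matching the decoded position numbers of the two cameras can be sketched with a hash map keyed on the (row, col) position number; the array layout is an assumption for illustration.

```python
import numpy as np

def match_homonymous(pos_cam1, pos_cam2):
    """Pair pixels of two cameras that decode to the same position number.

    pos_cam1, pos_cam2: (H, W, 2) int arrays; element [r, c] holds the
    decoded (row, col) projector position number, or (-1, -1) where
    decoding failed. Returns ((r1, c1), (r2, c2)) homonymous pixel pairs.
    """
    index = {}
    for r in range(pos_cam1.shape[0]):
        for c in range(pos_cam1.shape[1]):
            key = tuple(int(v) for v in pos_cam1[r, c])
            if key != (-1, -1):
                index[key] = (r, c)            # position number -> cam1 pixel
    pairs = []
    for r in range(pos_cam2.shape[0]):
        for c in range(pos_cam2.shape[1]):
            key = tuple(int(v) for v in pos_cam2[r, c])
            if key != (-1, -1) and key in index:
                pairs.append((index[key], (r, c)))
    return pairs
```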
S203, forming the head point cloud in each preset direction from the plurality of coordinate points.
In step S203, adjacent coordinate points are connected in groups of three according to their neighborhood relationships to construct triangular faces, yielding the head point cloud corresponding to each preset direction.
Fig. 3 is a schematic flowchart of another three-dimensional human head modeling method provided in an embodiment of the present application. It should be noted that steps identical to those in the embodiments of Figs. 1 and 2 are not described again here.
In a possible implementation manner, the foregoing S102 specifically includes S301 to S304:
S301, establishing a spatial volume enclosing all the head point clouds of all the preset directions, and dividing the spatial volume into a plurality of subvolumes of preset size;
S302, if a subvolume contains a plurality of coordinate points of the head point clouds, calculating the mean coordinate point of those coordinate points and taking the mean coordinate point as the coordinate point of that subvolume;
S303, updating the construction surfaces in each preset direction according to the neighborhood relationships of the mean coordinate points;
S304, connecting the coordinate points of the construction surfaces in each preset direction to obtain the initial human head model of the target user.
In S301 to S304, the spatial volume may be a cube, a sphere, a cylinder, or the like; a cube is preferred because it divides most conveniently into subvolumes of uniform size. Specifically, a cube is defined in the world coordinate system and cut into a number of small cubes of equal size according to a preset resolution. The number of coordinate points in each small cube is counted; if a small cube contains several coordinate points, their mean coordinate point is computed and used as the coordinate point of that cube, and the neighborhood relationships of the coordinate points are updated, so that the triangular face indices (i.e., the construction surfaces mentioned above) of the model are updated. A sketch of this voxel fusion follows.
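A minimal sketch of this voxel-averaging fusion, assuming all point clouds are already in one world coordinate frame (which the fixed acquisition assemblies guarantee); the voxel size is an illustrative parameter.

```python
import numpy as np

def voxel_fuse(points, voxel_size=0.002):
    """Fuse near-coincident points by averaging them within each voxel.

    points: (N, 3) array, the concatenated head point clouds of all preset
    directions in a shared world coordinate system.
    Returns one mean coordinate point per occupied voxel, removing
    duplicated and flying points before the face indices are rebuilt.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)  # voxel of each point
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```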
Optionally, because the positions and orientations of the acquisition assemblies in the preset directions are fixed relative to one another, the coordinates of the head point clouds obtained by the respective assemblies are already aligned; no translation or rotation is required, which reduces the computational load. Coincident or approximately coincident coordinate points, i.e., coordinate points whose mutual distance is less than a preset threshold, are processed through the small cubes.
Optionally, the above-mentioned preset threshold is used as the preset size of the subvolumes.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Corresponding to the three-dimensional human head modeling method described in the above embodiments, Fig. 6 shows a structural block diagram of a three-dimensional human head modeling apparatus 600 provided in an embodiment of the present application; for convenience of explanation, only the parts related to the embodiments of the present application are shown.
Referring to fig. 6, the apparatus includes:
an acquisition module 601, configured to acquire construction surfaces corresponding to a target user in a plurality of preset directions and the RGB images respectively corresponding to the construction surfaces, where the construction surfaces are formed from head point clouds of the target user, and the RGB image for each preset direction contains the color value of every coordinate point in the head point cloud for that direction;
a stitching module 602, configured to stitch the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
and a filling module 603, configured to color-fill the initial human head model according to the color values that the RGB images assign to the coordinate points in the head point cloud, to obtain the three-dimensional human head model of the target user.
It should be noted that the information exchange and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application; for their specific functions and technical effects, reference may be made to the method embodiments, and details are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, the processor 70 implementing the steps of any of the method embodiments described above when executing the computer program 72.
The terminal device 7 may be a mobile phone, a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will appreciate that Fig. 7 is only an example of the terminal device 7 and does not constitute a limitation; it may include more or fewer components than shown, combine certain components, or use different components, and may, for example, further include input/output devices, network access devices, and the like.
The processor 70 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
In some embodiments, the memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7. In other embodiments, the memory 71 may be an external storage device of the terminal device 7, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used to store an operating system, applications, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been or will be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps of the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes of the above method embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, computer memory, read-only memory (ROM), random-access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of modeling a three-dimensional human head, comprising:
acquiring construction surfaces respectively corresponding to a target user in a plurality of preset directions and RGB images respectively corresponding to the construction surfaces, wherein the construction surfaces are formed by head point clouds of the target user, and the RGB image corresponding to each preset direction comprises color values of all coordinate points in the head point clouds corresponding to the preset direction;
stitching the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
and color-filling the initial human head model according to the color values corresponding to the coordinate points in the human head point cloud on the RGB images, to obtain the three-dimensional human head model of the target user.
2. The modeling method of claim 1, wherein the acquiring of the construction surfaces and the RGB images respectively corresponding to the target user in the plurality of preset directions comprises:
acquiring human head point clouds corresponding to the target user in a plurality of preset directions and the RGB images respectively corresponding to the human head point clouds, wherein each human head point cloud comprises a plurality of coordinate points with neighborhood relationships;
and constructing a plurality of construction surfaces corresponding to each preset direction according to the neighborhood relation of the coordinate points of the human head point cloud in each preset direction.
3. The modeling method of claim 2, wherein the obtaining of the human head point cloud and the RGB image respectively corresponding to the target user in the plurality of preset directions comprises:
collecting Gray code patterns projected on the head of the target user from a plurality of preset directions and acquiring RGB images of the head of the target user;
determining a coordinate point of the human head in a three-dimensional space according to the Gray code pattern in each preset direction;
and determining the neighborhood relation of the coordinate points, and forming the coordinate points into the head point cloud in each preset direction.
4. The modeling method according to claim 3, wherein the collecting of the Gray code patterns projected onto the head of the target user from a plurality of the preset directions and the acquiring of the RGB images of the head of the target user comprises:
projecting preset continuous Gray code patterns from a plurality of preset directions to the head of the target user;
in each preset direction, two camera devices are used to respectively acquire the continuous Gray code patterns on the head of the target user, wherein the two camera devices are respectively arranged on two sides of the projection direction of the Gray code patterns;
and collecting the RGB images of the head of the target user in each preset direction.
5. A modeling method according to claim 4, characterized in that for each preset direction, the Gray code pattern acquired by one of the camera devices is taken as a first Gray code pattern, and the Gray code pattern acquired by the other camera device is taken as a second Gray code pattern;
the determining a coordinate point of the human head in a three-dimensional space according to the gray code pattern in each preset direction includes:
determining each homonymous pixel point group corresponding to each preset direction aiming at a first Gray code pattern and a second Gray code pattern in each preset direction, wherein each homonymous pixel point group comprises a first pixel point positioned in the first Gray code pattern and a second pixel point positioned in the second Gray code pattern;
for each homonymous pixel point group in each preset direction, determining an intersection point of a first connecting line and a second connecting line in a three-dimensional space, wherein the first connecting line is a connecting line between an optical center of the camera device for collecting the first Gray code pattern and the first pixel point in the homonymous pixel point group, and the second connecting line is a connecting line between the optical center of the camera device for collecting the second Gray code pattern and the second pixel point in the homonymous pixel point group;
and taking each obtained intersection point as a coordinate point of the human head in a three-dimensional space.
6. The modeling method according to claim 5, wherein the determining, for the first Gray code pattern and the second Gray code pattern in each preset direction, of each corresponding homonymous pixel point group in the preset direction comprises:
decoding, by reverse encoding, the first Gray code pattern and the second Gray code pattern in each preset direction to obtain a first position number of each pixel point on the first Gray code pattern and a second position number of each pixel point on the second Gray code pattern;
for each preset direction, matching the first position numbers with the second position numbers;
and taking the pixel points respectively corresponding to a first position number and a second position number that match as the first pixel point and the second pixel point, forming a homonymous pixel point group in the preset direction.
7. The modeling method according to any one of claims 1 to 6, wherein the stitching of the construction surfaces in the plurality of preset directions to obtain the initial human head model of the target user comprises:
establishing a spatial volume enclosing all the human head point clouds of all the preset directions, and dividing the spatial volume into a plurality of subvolumes of preset size;
if a subvolume contains a plurality of coordinate points of the human head point clouds, calculating the mean coordinate point of the plurality of coordinate points in the subvolume, and taking the mean coordinate point as the coordinate point of the subvolume;
updating the construction surfaces in each preset direction according to the neighborhood relationships of the mean coordinate points;
and connecting the coordinate points of the construction surfaces in each preset direction to obtain the initial human head model of the target user.
8. An apparatus for modeling a three-dimensional human head, comprising:
an acquisition module, configured to acquire construction surfaces corresponding to a target user in a plurality of preset directions and the RGB images respectively corresponding to the construction surfaces, wherein the construction surfaces are formed from human head point clouds of the target user, and the RGB image for each preset direction contains the color value of every coordinate point in the human head point cloud for that direction;
a stitching module, configured to stitch the construction surfaces in the plurality of preset directions to obtain an initial human head model of the target user;
and a filling module, configured to color-fill the initial human head model according to the color values corresponding to the coordinate points in the human head point cloud on the RGB images, to obtain the three-dimensional human head model of the target user.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010243821.3A 2020-03-31 2020-03-31 Modeling method and device for three-dimensional head, terminal equipment and storage medium Active CN111462309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010243821.3A CN111462309B (en) 2020-03-31 2020-03-31 Modeling method and device for three-dimensional head, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010243821.3A CN111462309B (en) 2020-03-31 2020-03-31 Modeling method and device for three-dimensional head, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111462309A true CN111462309A (en) 2020-07-28
CN111462309B CN111462309B (en) 2023-12-19

Family

ID=71683476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010243821.3A Active CN111462309B (en) 2020-03-31 2020-03-31 Modeling method and device for three-dimensional head, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111462309B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120176380A1 (en) * 2011-01-11 2012-07-12 Sen Wang Forming 3d models using periodic illumination patterns
CN106164979A (en) * 2015-07-13 2016-11-23 深圳大学 A kind of three-dimensional facial reconstruction method and system
CN109697688A (en) * 2017-10-20 2019-04-30 虹软科技股份有限公司 A kind of method and apparatus for image procossing
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120176380A1 (en) * 2011-01-11 2012-07-12 Sen Wang Forming 3d models using periodic illumination patterns
CN106164979A (en) * 2015-07-13 2016-11-23 深圳大学 A kind of three-dimensional facial reconstruction method and system
CN109697688A (en) * 2017-10-20 2019-04-30 虹软科技股份有限公司 A kind of method and apparatus for image procossing
CN110443885A (en) * 2019-07-18 2019-11-12 西北工业大学 Three-dimensional number of people face model reconstruction method based on random facial image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘绍堂 (Liu Shaotang), ed.: Theory and Methods of Tunnel Deformation Monitoring and Prediction (《隧道变形监测与预测的理论与方法》), Yellow River Water Conservancy Press, pages 48-51
朱险峰; 侯贺; 韩玉川; 白云瑞; 吴植文: "Research on structured-light phase-unwrapping methods for three-dimensional human body scanning" (人体三维扫描结构光解相位方法研究), no. 03

Also Published As

Publication number Publication date
CN111462309B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
EP3614340B1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
CN107223269B (en) Three-dimensional scene positioning method and device
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
US10726580B2 (en) Method and device for calibration
CN111145238A (en) Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
CN112348863B (en) Image alignment method, image alignment device and terminal equipment
CN111951376B (en) Three-dimensional object reconstruction method fusing structural light and photometry and terminal equipment
CN111640180B (en) Three-dimensional reconstruction method and device and terminal equipment
CN111815754A (en) Three-dimensional information determination method, three-dimensional information determination device and terminal equipment
CN109979013B (en) Three-dimensional face mapping method and terminal equipment
CN104794748A (en) Three-dimensional space map construction method based on Kinect vision technology
CN111583381B (en) Game resource map rendering method and device and electronic equipment
US11380016B2 (en) Fisheye camera calibration system, method and electronic device
CN113362446B (en) Method and device for reconstructing object based on point cloud data
WO2023093739A1 (en) Multi-view three-dimensional reconstruction method
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN112686950A (en) Pose estimation method and device, terminal equipment and computer readable storage medium
CN111023994B (en) Grating three-dimensional scanning method and system based on multiple measurement
CN111460937A (en) Face feature point positioning method and device, terminal equipment and storage medium
CN112102378A (en) Image registration method and device, terminal equipment and computer readable storage medium
CN114066930A (en) Planar target tracking method and device, terminal equipment and storage medium
CN113362445B (en) Method and device for reconstructing object based on point cloud data
CN115294277B (en) Three-dimensional reconstruction method and device of object, electronic equipment and storage medium
CN111462309B (en) Modeling method and device for three-dimensional head, terminal equipment and storage medium
CN111107307A (en) Video fusion method, system, terminal and medium based on homography transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant