CN117611752B - Method and system for generating 3D model of digital person - Google Patents

Method and system for generating 3D model of digital person

Info

Publication number
CN117611752B
Authority
CN
China
Prior art keywords
point
points
feature
characteristic
intersection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410085534.2A
Other languages
Chinese (zh)
Other versions
CN117611752A (en)
Inventor
赵策
王亚
屠静
张玥
雷媛媛
孙岩
潘亮亮
刘岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuo Shi Future Chengdu Technology Co ltd
Original Assignee
Zhuo Shi Future Chengdu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuo Shi Future Chengdu Technology Co ltd filed Critical Zhuo Shi Future Chengdu Technology Co ltd
Priority to CN202410085534.2A priority Critical patent/CN117611752B/en
Publication of CN117611752A publication Critical patent/CN117611752A/en
Application granted granted Critical
Publication of CN117611752B publication Critical patent/CN117611752B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image data processing, in particular to a method and a system for generating a 3D model of a digital person. The method comprises the following steps: acquiring human-body grayscale images; acquiring the feature points of the grayscale images with a corner detection algorithm; acquiring the point set of each feature point; analyzing the line-segment distribution characteristics of the intersection points in each point set and constructing the lateral coefficient and the deflection coefficient of each intersection point; constructing the connecting-line distribution characteristic coefficient of each intersection point; obtaining the abnormality score of each intersection point with an anomaly detection algorithm; constructing the abnormality index of each feature point; acquiring feature clusters; and analyzing the similarity of feature clusters between different images to construct the first feature points, thereby generating the 3D model of the digital person, which screens out meaningless feature points while guaranteeing the quality of the digital-person 3D model.

Description

Method and system for generating 3D model of digital person
Technical Field
The invention relates to the field of image data processing, in particular to a method and a system for generating a 3D model of a digital person.
Background
There are various methods for 3D modeling, and the choice depends on the application scenario, data availability, and accuracy requirements. Common 3D modeling techniques include point cloud reconstruction, voxelization, and multi-view reconstruction, among which multi-view reconstruction is widely applied in computer vision and photogrammetry. Multi-view reconstruction recovers the three-dimensional structure of a scene from images taken at multiple viewpoints, through techniques such as feature point matching and camera calibration.
In the real world the objects around us are three-dimensional, yet what the eye observes is a two-dimensional image of an object, and the three-dimensional information of the observed object is recovered by the human visual system. Multi-view reconstruction gives a computer an analogous capability: it reconstructs the three-dimensional structure of an object from captured two-dimensional images, so that the machine can perceive the world.
Acquiring three-dimensional information from two-dimensional images requires establishing links between the images and the 3D object to be reconstructed. To find links between images, certain image features, such as corresponding corner points, must be tracked from one image to another. However, a digital person contains a large amount of detail, so analyzing the correspondence between all points across the multiple views requires a huge amount of computation; moreover, the light and shadow distribution differs between shooting angles, so matching performed globally easily gives poor results in some detailed regions.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a method and a system for generating a 3D model of a digital person, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for generating a 3D model of a digital person, including the steps of:
acquiring human-body grayscale images;
acquiring feature points of each human-body grayscale image by adopting a corner detection algorithm; acquiring a point set of each feature point according to the intersection points of the connecting lines between the feature points; obtaining the lateral coefficient of each intersection point according to the left-right distribution difference of the line segment where each intersection point in the feature point set is located; obtaining the deflection coefficient of each intersection point according to the line segment distribution of each intersection point in the feature point set; combining the lateral coefficient and the deflection coefficient of each intersection point to obtain the connecting-line distribution characteristic coefficient of each intersection point; acquiring the abnormality score of each intersection point according to the connecting-line distribution characteristic coefficients of all intersection points in the point set of each feature point; acquiring the abnormality index of each feature point according to the abnormality score of each intersection point; clustering according to the abnormality indexes of the feature points by adopting a clustering algorithm to obtain feature clusters; acquiring value points according to the similarity of feature clusters between different images; acquiring first feature points according to the value points and the abnormality indexes of the feature points; and generating a 3D model of the digital person by using PointCNN deep learning according to the first feature points.
Preferably, the acquiring of the point set of each feature point according to the intersection points of the connecting lines between the feature points includes:
for each feature point of the human-body grayscale image;
connecting the feature point with its n nearest feature points, and taking the intersection points generated by these connecting lines as the point set of the feature point, wherein n is a preset value.
Preferably, the obtaining the lateral coefficient of each intersection point according to the left-right distribution difference of the line segment where each intersection point in the feature point set is located specifically includes:
for each intersection in the feature point set;
calculating, over all line segments passing through the intersection point, the mean of the absolute difference between the lengths of the parts of each segment to the left and to the right of the intersection point; and taking this mean as the lateral coefficient of the intersection point.
Preferably, the obtaining the deflection coefficient of each intersection point according to the line segment distribution of each intersection point in the feature point set includes:
for each intersection in the feature point set;
acquiring the slope of a line segment passing through the intersection point, and calculating a corresponding angle by adopting an arctangent function; taking the difference value between the corresponding angles of the slopes of the adjacent line segments passing through the intersection points as an included angle; calculating the average value of the included angles of all adjacent line segments passing through the intersection point; wherein the corresponding angle of the (i+1) th line segment is larger than the corresponding angle of the (i) th line segment;
when the absolute value of the included angle is smaller than the average value, taking the average value minus the included angle as the deviation difference of the corresponding line segment passing through the intersection point;
when the absolute value of the included angle is larger than or equal to the average value, taking the included angle itself as the deviation difference of the corresponding line segment passing through the intersection point;
taking the sum of the deviation differences of all line segments passing through the intersection point as the exponent of an exponential function with the natural constant as its base; and taking the result of the exponential function as the deflection coefficient of the intersection point.
Preferably, the obtaining the connection line distribution characteristic coefficient of each intersection point by combining the lateral coefficient and the deflection coefficient of each intersection point includes:
calculating the product of the lateral coefficient and the deflection coefficient of each intersection point; acquiring the number of all line segments passing through the intersection point; and taking the ratio of the product to this number as the connecting-line distribution characteristic coefficient of the intersection point.
Preferably, the obtaining the abnormal score of each intersecting point according to the connection line distribution characteristics of all intersecting points in the point set of each characteristic point specifically includes:
for each feature point;
arranging the intersection points in the point set of the feature point in ascending order of their connecting-line distribution characteristic coefficients to obtain a characteristic intersection line sequence;
and taking the characteristic intersection line sequence as input of an LOF abnormality detection algorithm, wherein the output of the LOF abnormality detection algorithm is an abnormality score of each intersection point in the characteristic intersection line sequence.
Preferably, the obtaining the abnormality index of each feature point according to the abnormality score of each intersection point specifically includes:
for each feature point;
screening the intersection points in the point set of the feature point whose abnormality score is greater than 1, taking the abnormality scores of the screened intersection points as the elements of the abnormality index sequence of the feature points corresponding to the line segments on which those intersection points lie, and taking the sum of the elements of the abnormality index sequence of each feature point as the abnormality index of that feature point;
if the point set of the feature point contains no intersection point with an abnormality score greater than 1, the abnormality index of the feature point is set to 0.
Preferably, the value point acquiring method includes the specific steps of:
for each feature cluster in the image;
obtaining the minimum value of the DTW distance between each cluster and all clusters in other images; calculating the sum of the minimum value and 1; taking the reciprocal of the sum value as the value quantity of each characteristic cluster;
and taking the feature points in the feature cluster with the value quantity being more than 0.5 after normalization as the value points.
Preferably, the first feature points are specifically the value points together with the feature points whose abnormality index is 0.
In a second aspect, an embodiment of the present invention further provides a 3D model generating system for a digital person, including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor implements any one of the methods described above when executing the computer program.
The invention has at least the following beneficial effects:
the invention mainly distinguishes the value quantity of the characteristic points in each image by combining the distribution side deviation and the deflection of the characteristic points of the human body image and the surrounding characteristic points, simultaneously combines the similar situation of clustering among the images, reveals the screening value information of the characteristic points more deeply, screens out abnormal and non-valuable characteristic points in the point cloud data, only retains the key points related to the digital person, not only ensures more accurate generation of the digital person 3D model, but also ensures the generation quality of the digital person 3D model.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of a method for generating a 3D model of a digital person according to an embodiment of the present invention;
fig. 2 is a flow chart for digital human 3D model generation.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description refers to the specific implementation, structure, characteristics and effects of a method and a system for generating a 3D model of a digital person according to the present invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of a method and a system for generating a 3D model of a digital person provided by the present invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a method for generating a 3D model of a digital person according to an embodiment of the present invention is shown, the method includes the following steps:
step S001: and collecting an omnidirectional image of the human body.
The human body is photographed from all directions under the same lighting conditions, so that every part of the body is captured and specific details such as clothing, posture, and appearance are preserved. N images of the human body are thus obtained and converted to grayscale images. It should be noted that the implementer can adjust the value of N according to the actual situation; in this embodiment N is 500.
Thus, the human-body grayscale images are acquired.
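As a concrete illustration, this capture step reduces to loading the N view images and converting each to grayscale. The sketch below uses OpenCV; the directory layout, file pattern, and function name are assumptions made for illustration, not part of the embodiment.

```python
import glob

import cv2


def load_grayscale_views(image_dir, max_views=500):
    """Load up to max_views human-body photographs and convert them to grayscale.

    image_dir and the *.png pattern are illustrative assumptions; max_views = 500
    follows the value of N used in this embodiment.
    """
    gray_images = []
    for path in sorted(glob.glob(f"{image_dir}/*.png"))[:max_views]:
        img = cv2.imread(path, cv2.IMREAD_COLOR)
        if img is None:          # skip unreadable files
            continue
        gray_images.append(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
    return gray_images
```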
Step S002: analyzing the line-segment distribution characteristics of each intersection point in the point set generated by connecting the feature points of each human-body image, and constructing the connecting-line distribution characteristic coefficient of each intersection point; obtaining the abnormality score of each intersection point with an anomaly detection algorithm; constructing the abnormality index of each feature point; clustering the abnormality indexes to obtain feature clusters; and evaluating the feature points of the digital-person 3D model according to the similarity of feature clusters between different human-body images.
First, the SIFT algorithm is used to obtain the feature points of each human-body image; SIFT is a well-known prior technique and is not described further in this embodiment. Every feature point in each human-body image is connected to its 200 nearest feature points. Because the same part of the digital person photographed in different images produces similar connection patterns, certain regularities appear in the distances between the intersection points and the endpoints of the connecting lines; different parts of the digital person can therefore be distinguished according to these regularities, and abnormal feature points can further be screened according to the differences in connection patterns between images.
All intersection points of the connecting lines between the feature point and its 200 nearest feature points are taken as the point set of the current feature point in the image, and the number of points in this point set is denoted n1. A connecting-line distribution characteristic coefficient is then constructed for the p-th intersection point from the connecting lines passing through it, characterizing the positional distribution of those lines.
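A minimal sketch of this construction is given below: SIFT keypoints are detected, every keypoint is connected to its n nearest neighbours, and the intersections that a given keypoint's connecting segments make with the other segments form its point set. The helper names, the brute-force intersection test, and the handling of shared endpoints are assumptions of this sketch; n is 200 in the embodiment but is kept as a parameter here.

```python
import numpy as np
import cv2
from scipy.spatial import cKDTree


def detect_feature_points(gray):
    """SIFT keypoints of one grayscale image as an (m, 2) array of (x, y) coordinates."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)
    return np.array([kp.pt for kp in keypoints], dtype=float)


def build_segments(points, n_neighbors=200):
    """Connect every feature point to its n nearest neighbours; return unique index pairs."""
    tree = cKDTree(points)
    k = min(n_neighbors + 1, len(points))
    _, nbrs = tree.query(points, k=k)
    segments = set()
    for i, row in enumerate(np.atleast_2d(nbrs)):
        for j in row:
            if j != i:
                segments.add((min(i, j), max(i, j)))
    return sorted(segments)


def _segment_intersection(p1, p2, p3, p4):
    """Intersection of segments p1-p2 and p3-p4, or None if they do not cross."""
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:                      # parallel or collinear
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    u = ((p3[0] - p1[0]) * d1[1] - (p3[1] - p1[1]) * d1[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return p1 + t * d1
    return None


def point_set_of_feature(points, segments, idx):
    """Point set of feature `idx`: intersections between its segments and the other segments."""
    own = [s for s in segments if idx in s]
    others = [s for s in segments if idx not in s]
    intersections = []
    for a, b in own:
        for c, d in others:
            if len({a, b, c, d}) < 4:           # skip segments that share an endpoint
                continue
            q = _segment_intersection(points[a], points[b], points[c], points[d])
            if q is not None:
                intersections.append(q)
    return np.array(intersections)
```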
Specifically, the lateral position at which each intersection point divides the line segments passing through it indicates whether the point lies at the centre of the surrounding feature points, and indirectly describes how uniformly the feature points are distributed. Accordingly, this embodiment constructs the lateral coefficient of the intersection point p from the position of p on each line segment passing through it, with the following expression:
in the method, in the process of the invention,side coefficient representing intersection point p, +.>Representing the number of line segments passing through the intersection point p +.>Represents the length of the line segment to the left of the ith line segment passing through the intersection point p, +.>The length of the line segment to the right of the i-th line segment passing through the intersection point p is indicated.
Meanwhile, in the extreme case where all line segments passing through the point lie on the same straight line, the situation differs greatly from the segments being uniformly distributed around the point. The deflection of the segments passing through the point is therefore also taken as one of its distribution characteristics, and the deflection coefficient of the intersection point p is constructed with the expression:
in the method, in the process of the invention,a deflection coefficient representing the intersection point p; />Indicating the deviation difference of the ith line segment passing through the intersection point p; />Representing the number of line segments passing through the intersection point p +.>An exponential function based on a natural constant e is represented,representing an arctangent function for obtaining the angle of the line segment with respect to the horizontal,/>、/>Respectively represent the slope of the ith and (i+1) th line segments passing through the intersection point p, +.>Representing the mean of the angles between all adjacent segments of the intersection point p. Wherein the ranking calculation is performed for a line segment passing through the intersection point p rotated counterclockwise in a direction horizontally rightward from the intersection point p.
The connecting-line distribution characteristic coefficient of the intersection point p is then obtained by combining its lateral coefficient and its deflection coefficient, with the following expression:
in the method, in the process of the invention,a line distribution characteristic coefficient representing the intersection p, < ->Side coefficient representing intersection point p, +.>Deviation coefficient representing intersection point p +.>Representing the number of line segments passing through the intersection point p.
When the lengths of the two parts into which the intersection point p splits a line segment differ greatly, the point does not lie midway between the two feature points but leans toward one side, which characterizes the distribution rule of the line on which it lies. Meanwhile, the closer the included angles between the segments passing through p are to the average included angle, the more uniformly the segments are distributed; an included angle that is smaller than but close to the average still matches the characteristic of uniformly distributed segments, so nearly uniform and tightly concentrated distributions are separated more gently, whereas an included angle that differs greatly from the average directly amplifies the differences between adjacent included angles, indicating that the uneven distribution of those segments affects the deflection coefficient of p more severely. The larger the lateral coefficient and the deflection coefficient of the intersection point p, the larger its connecting-line distribution characteristic coefficient, i.e. the more the point exhibits an off-centre, non-concentrated position with irregularly distributed surrounding feature points.
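Combining the two coefficients reduces to a one-liner; the helper names refer to the sketches above and are assumptions of this illustration, not the patent's own notation.

```python
def line_distribution_coefficient(p, segments_through_p):
    """Connecting-line distribution characteristic coefficient F_p = P_p * Q_p / N_p."""
    n = len(segments_through_p)
    if n == 0:
        return 0.0
    return (lateral_coefficient(p, segments_through_p)
            * deflection_coefficient(segments_through_p) / n)
```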
For one of the feature points in the image, the connecting-line distribution characteristic coefficients of all intersection points in its point set are sorted in ascending order to form the characteristic intersection line sequence of that feature point. The LOF anomaly detection algorithm is applied to this sequence to obtain the abnormality score of each intersection point in it.
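A sketch of this step using scikit-learn's LocalOutlierFactor is shown below; LOF values above 1 indicate local outliers, matching the "greater than 1" screening used later. The neighbour count is scikit-learn's default and an assumption here, since the embodiment does not specify it.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor


def intersection_anomaly_scores(line_coeffs, n_neighbors=20):
    """LOF abnormality score of each intersection in one feature point's point set.

    line_coeffs: 1-D sequence of connecting-line distribution characteristic coefficients.
    Returns the scores in the original intersection order; values > 1 are local outliers.
    """
    x = np.asarray(line_coeffs, dtype=float)
    if x.size < 2:
        return np.ones_like(x)                     # too few points to score
    order = np.argsort(x)                          # ascending characteristic intersection line sequence
    lof = LocalOutlierFactor(n_neighbors=min(n_neighbors, x.size - 1))
    lof.fit(x[order].reshape(-1, 1))
    scores_sorted = -lof.negative_outlier_factor_  # sklearn stores the negative LOF
    scores = np.empty_like(scores_sorted)
    scores[order] = scores_sorted                  # map back to the original order
    return scores
```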
Since the feature points of a digital person are generally distributed rather densely, a point set that contains intersection points with abnormal connecting-line distribution characteristic coefficients is more likely to contain abnormal points or value points, and two extreme cases exist. If such a point is an abnormal point, it may lie far from the feature point's position as an isolated point, and its abnormality score is all the larger. If the point is instead a value point, it may capture a key position of the digital person's shape; ignoring such a position would make the subsequent model generation inaccurate and cause a large deviation from the real digital-person model. Therefore, when evaluating the feature points of the digital person, these abnormal points require attention.
For each feature point in the image, the intersection points in its point set whose abnormality score is greater than 1 are screened out; each screened score is attributed to the feature points corresponding to the line segments on which the intersection point lies, and the attributed scores of each feature point are accumulated to form its abnormality index. If a feature point has no intersection point of its line segments with an abnormality score greater than 1, its abnormality index is set to 0. Only feature points with a non-zero abnormality index are analyzed below.
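For one feature point this reduces to summing the scores above 1 attributed to it; the sketch below assumes the attribution from intersections to feature points has already been done and simply receives the relevant scores.

```python
def abnormality_index(attributed_scores):
    """Abnormality index of one feature point: sum of the attributed abnormality
    scores that exceed 1, or 0 if none exceed 1."""
    flagged = [s for s in attributed_scores if s > 1.0]
    return float(sum(flagged)) if flagged else 0.0
```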
The feature points of each image are then clustered according to their abnormality indexes to obtain feature clusters.
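The embodiment does not name a specific clustering algorithm, so the sketch below uses DBSCAN on the one-dimensional abnormality indexes purely as an illustrative assumption; eps and min_samples are likewise assumed defaults.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_feature_points(abnormality_indexes, eps=0.5, min_samples=3):
    """Group feature points with non-zero abnormality index into feature clusters.

    Returns a cluster label per feature point (-1 for noise and for zero-index points).
    """
    idx = np.asarray(abnormality_indexes, dtype=float).reshape(-1, 1)
    labels = np.full(idx.shape[0], -1, dtype=int)
    nonzero = idx[:, 0] != 0                        # only non-zero indexes are analyzed
    if nonzero.any():
        labels[nonzero] = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(idx[nonzero])
    return labels
```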
The similarity between the i-th feature cluster of image q and all feature clusters of the other images is analyzed. If many feature clusters are similar to the i-th cluster, its feature points are value points; otherwise they are non-value points. The value quantity of the feature cluster is thereby constructed, with the expression:
in the method, in the process of the invention,value amount of i-th feature cluster representing image q, +.>The function of the minimum value is represented by,represents DTW distance, +.>An ith feature cluster representing image q, < ->And representing the j-th feature cluster of the image p, wherein the value range of p is 1 to the number of images, and the value range of j is the number of feature clusters in the image p.
The minimum DTW distance between the i-th feature cluster of image q and all feature clusters of the remaining images characterizes whether the cluster is a valuable outlier cluster: the larger $V_q^{\,i}$ is, the more valuable the i-th feature cluster of image q.
The feature points in feature clusters whose normalized value quantity is greater than 0.5 are taken as value points, and the feature points in feature clusters whose normalized value quantity is less than 0.5 are taken as non-value points.
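The value quantity can be sketched with a plain dynamic-programming DTW. Treating each feature cluster as the sorted sequence of its members' abnormality indexes is an assumption of this sketch, since the embodiment does not spell out which per-cluster sequence the DTW distance is computed over.

```python
import numpy as np


def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance between two 1-D sequences."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    cost = np.full((a.size + 1, b.size + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, a.size + 1):
        for j in range(1, b.size + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[a.size, b.size])


def cluster_value(cluster_seq, clusters_of_other_images):
    """Value quantity of one feature cluster: 1 / (1 + min DTW distance to any
    cluster of any other image)."""
    best = min(dtw_distance(cluster_seq, other)
               for image_clusters in clusters_of_other_images
               for other in image_clusters)
    return 1.0 / (1.0 + best)
```

The value quantities of all clusters are then normalized, and the feature points of clusters whose normalized value exceeds 0.5 become value points.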
Step S003: selecting the first feature points of each image, and completing the generation of the digital-person 3D model according to the first feature points of each image.
The value points and all feature points with an abnormality index of 0 obtained in the preceding steps are taken as the first feature points of the image. The first feature points screened from each image are taken as point cloud data. The point cloud data are processed with the PointCNN deep-learning architecture, whose adaptive convolution and spatial transformation network performs feature learning and modelling of the point cloud, thereby generating the 3D model of the digital person. The flow of digital-person 3D model generation is shown in fig. 2. It should be noted that PointCNN deep learning is a known technique and is not described here.
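The screening itself is a simple mask, as sketched below; the array layout and names are assumptions, and the PointCNN feature learning and model generation are not sketched here.

```python
import numpy as np


def first_feature_points(points, abnormality_indexes, is_value_point):
    """First feature points of one image: value points plus points whose abnormality index is 0."""
    keep = (np.asarray(abnormality_indexes) == 0) | np.asarray(is_value_point, dtype=bool)
    return np.asarray(points)[keep]
```

The first feature points screened from every view are then stacked into the point cloud that is handed to the PointCNN architecture.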
Thus, a 3D model of the digital person is obtained.
Based on the same inventive concept as the above method, the embodiment of the invention further provides a system for generating a 3D model of a digital person, which comprises a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor executes the computer program to realize the steps of any one of the above methods for generating the 3D model of the digital person.
In summary, the embodiment of the invention mainly distinguishes the value of the feature points in each image by combining the lateral offset and deflection of each human-body-image feature point's distribution with respect to its surrounding feature points, and further combines the similarity of the clusters between images to reveal the screening value of the feature points. Abnormal and valueless feature points in the point cloud data are screened out and only the key points related to the digital person are retained, which not only makes the generation of the digital-person 3D model more accurate but also guarantees its generation quality.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing description of the preferred embodiments of the present invention is not intended to be limiting, but rather, any modifications, equivalents, improvements, etc. that fall within the principles of the present invention are intended to be included within the scope of the present invention.

Claims (6)

1. A method for generating a 3D model of a digital person, the method comprising the steps of:
acquiring human-body grayscale images;
acquiring feature points of each human-body grayscale image by adopting a corner detection algorithm; acquiring a point set of each feature point according to the intersection points of the connecting lines between the feature points; obtaining the lateral coefficient of each intersection point according to the left-right distribution difference of the line segment where each intersection point in the feature point set is located; obtaining the deflection coefficient of each intersection point according to the line segment distribution of each intersection point in the feature point set; combining the lateral coefficient and the deflection coefficient of each intersection point to obtain the connecting-line distribution characteristic coefficient of each intersection point; acquiring the abnormality score of each intersection point according to the connecting-line distribution characteristic coefficients of all intersection points in the point set of each feature point; acquiring the abnormality index of each feature point according to the abnormality score of each intersection point; clustering according to the abnormality indexes of the feature points by adopting a clustering algorithm to obtain feature clusters; acquiring value points according to the similarity of feature clusters between different images; acquiring first feature points according to the value points and the abnormality indexes of the feature points; generating a 3D model of the digital person by adopting PointCNN deep learning according to the first feature points;
obtaining the lateral coefficient of each intersection point according to the left-right distribution difference of the line segment where each intersection point in the feature point set is located, specifically:
for each intersection in the feature point set;
calculating, over all line segments passing through the intersection point, the mean of the absolute difference between the lengths of the parts of each segment to the left and to the right of the intersection point; taking this mean as the lateral coefficient of the intersection point;
the obtaining the deflection coefficient of each intersection point according to the line segment distribution of each intersection point in the feature point set comprises the following steps:
for each intersection in the feature point set;
acquiring the slope of a line segment passing through the intersection point, and calculating a corresponding angle by adopting an arctangent function; taking the difference value between the corresponding angles of the slopes of the adjacent line segments passing through the intersection points as an included angle; calculating the average value of the included angles of all adjacent line segments passing through the intersection point; wherein the corresponding angle of the (i+1) th line segment is larger than the corresponding angle of the (i) th line segment;
when the absolute value of the included angle is smaller than the average value, taking the average value minus the included angle as the deviation difference of the corresponding line segment passing through the intersection point;
when the absolute value of the included angle is larger than or equal to the average value, taking the included angle itself as the deviation difference of the corresponding line segment passing through the intersection point;
taking the sum of the deviation differences of all line segments passing through the intersection point as the exponent of an exponential function with the natural constant as its base; taking the result of the exponential function as the deflection coefficient of the intersection point;
the obtaining the connecting-line distribution characteristic coefficient of each intersection point by combining the lateral coefficient and the deflection coefficient of each intersection point comprises the following steps:
calculating the product of the lateral coefficient and the deflection coefficient of each intersection point; acquiring the number of all line segments passing through the intersection point; taking the ratio of the product to this number as the connecting-line distribution characteristic coefficient of the intersection point;
the value point is obtained according to the similarity of the feature clusters among different images, and the specific steps comprise:
for each feature cluster in the image;
obtaining the minimum value of the DTW distance between each cluster and all clusters in other images; calculating the sum of the minimum value and 1; taking the reciprocal of the sum value as the value quantity of each characteristic cluster;
and taking the feature points in the feature cluster with the value quantity being more than 0.5 after normalization as the value points.
2. The method for generating a 3D model of a digital person according to claim 1, wherein the acquiring each feature point set from the intersection point of the connecting lines between the feature points comprises:
for each feature point of the human-body grayscale image;
connecting the feature point with its n nearest feature points, and taking the intersection points generated by these connecting lines as the point set of the feature point, wherein n is a preset value.
3. The method for generating a 3D model of a digital person according to claim 1, wherein the obtaining the anomaly score of each intersection point according to the line distribution characteristics of all the intersection points in the point set of each feature point specifically comprises:
for each feature point;
arranging the intersection points in the point set of the feature point in ascending order of their connecting-line distribution characteristic coefficients to obtain a characteristic intersection line sequence;
and taking the characteristic intersection line sequence as input of an LOF abnormality detection algorithm, wherein the output of the LOF abnormality detection algorithm is an abnormality score of each intersection point in the characteristic intersection line sequence.
4. The method for generating a 3D model of a digital person according to claim 1, wherein the obtaining the abnormality index of each feature point according to the abnormality score of each intersection point specifically comprises:
for each feature point;
screening the intersection points in the point set of the feature point whose abnormality score is greater than 1, taking the abnormality scores of the screened intersection points as the elements of the abnormality index sequence of the feature points corresponding to the line segments on which those intersection points lie, and taking the sum of the elements of the abnormality index sequence of each feature point as the abnormality index of that feature point;
if the point set of the feature point contains no intersection point with an abnormality score greater than 1, the abnormality index of the feature point is set to 0.
5. The method for generating a 3D model of a digital person according to claim 1, wherein the first feature points are the value points together with the feature points whose abnormality index is 0.
6. A 3D model generation system for a digital person, comprising a memory, a processor and a computer program stored in the memory and running on the processor, characterized in that the processor implements the method according to any of claims 1-5 when executing the computer program.
CN202410085534.2A 2024-01-22 2024-01-22 Method and system for generating 3D model of digital person Active CN117611752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410085534.2A CN117611752B (en) 2024-01-22 2024-01-22 Method and system for generating 3D model of digital person

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410085534.2A CN117611752B (en) 2024-01-22 2024-01-22 Method and system for generating 3D model of digital person

Publications (2)

Publication Number Publication Date
CN117611752A CN117611752A (en) 2024-02-27
CN117611752B true CN117611752B (en) 2024-04-02

Family

ID=89958231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410085534.2A Active CN117611752B (en) 2024-01-22 2024-01-22 Method and system for generating 3D model of digital person

Country Status (1)

Country Link
CN (1) CN117611752B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383001A (en) * 2008-10-17 2009-03-11 中山大学 Quick and precise front human face discriminating method
WO2019100933A1 (en) * 2017-11-21 2019-05-31 蒋晶 Method, device and system for three-dimensional measurement
CN113052880A (en) * 2021-03-19 2021-06-29 南京天巡遥感技术研究院有限公司 SFM sparse reconstruction method, system and application
CN113221861A (en) * 2021-07-08 2021-08-06 中移(上海)信息通信科技有限公司 Multi-lane line detection method, device and detection equipment
CN114202587A (en) * 2021-11-12 2022-03-18 天津大学 Visual feature extraction method based on shipborne monocular camera
CN114332366A (en) * 2021-12-24 2022-04-12 西运才 Digital city single house point cloud facade 3D feature extraction method
CN114757981A (en) * 2022-03-04 2022-07-15 杭州隐捷适生物科技有限公司 Method and system for constructing occlusion based on tooth surface feature points
CN114782499A (en) * 2022-04-28 2022-07-22 杭州电子科技大学 Image static area extraction method and device based on optical flow and view geometric constraint
CN115937086A (en) * 2022-10-11 2023-04-07 浙江静远电力实业有限公司 Ultrahigh voltage transmission line defect detection method based on unmanned aerial vehicle image recognition technology
CN116167983A (en) * 2023-01-30 2023-05-26 成都唐源电气股份有限公司 Rail web alignment method, system and terminal based on minimum DTW distance
CN116244356A (en) * 2023-03-24 2023-06-09 中国科学技术大学先进技术研究院 Abnormal track detection method and device, electronic equipment and storage medium
CN116386118A (en) * 2023-04-17 2023-07-04 广州番禺职业技术学院 Drama matching cosmetic system and method based on human image recognition
CN117313957A (en) * 2023-11-28 2023-12-29 威海华创软件有限公司 Intelligent prediction method for production flow task amount based on big data analysis
CN117372647A (en) * 2023-10-26 2024-01-09 天宫开物(深圳)科技有限公司 Rapid construction method and system of three-dimensional model for building

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274943B (en) * 2020-01-19 2023-06-23 深圳市商汤科技有限公司 Detection method, detection device, electronic equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on track clustering based on the LOFC time-window segmentation algorithm; 王莉莉; 彭勃; Journal of Nanjing University of Aeronautics and Astronautics; 2018-10-15 (05); full text *
Feature line extraction algorithm based on 3D point cloud models; 刘倩; 耿国华; 周明全; 赵璐璐; 李姬俊男; Application Research of Computers; 2013-03-15 (03); full text *

Also Published As

Publication number Publication date
CN117611752A (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN110148120B (en) Intelligent disease identification method and system based on CNN and transfer learning
CN113269237B (en) Assembly change detection method, device and medium based on attention mechanism
CN109684969B (en) Gaze position estimation method, computer device, and storage medium
CN108280858B (en) Linear global camera motion parameter estimation method in multi-view reconstruction
CN107292299B (en) Side face recognition methods based on kernel specification correlation analysis
Messai et al. Adaboost neural network and cyclopean view for no-reference stereoscopic image quality assessment
JP7310252B2 (en) MOVIE GENERATOR, MOVIE GENERATION METHOD, PROGRAM, STORAGE MEDIUM
CN114429555A (en) Image density matching method, system, equipment and storage medium from coarse to fine
Memisevic et al. Stereopsis via deep learning
CN111862278B (en) Animation obtaining method and device, electronic equipment and storage medium
Xi et al. Anti-distractor active object tracking in 3D environments
CN108537887A (en) Sketch based on 3D printing and model library 3-D view matching process
CN111199245A (en) Rape pest identification method
CN113808277A (en) Image processing method and related device
CN107194364B (en) Huffman-L BP multi-pose face recognition method based on divide and conquer strategy
Lee et al. From human pose similarity metric to 3D human pose estimator: Temporal propagating LSTM networks
CN116862955A (en) Three-dimensional registration method, system and equipment for plant images
CN114494594A (en) Astronaut operating equipment state identification method based on deep learning
Han et al. Cultural and creative product design and image recognition based on the convolutional neural network model
CN117611752B (en) Method and system for generating 3D model of digital person
CN113139967A (en) Point cloud instance segmentation method, related system and storage medium
CN112329662A (en) Multi-view saliency estimation method based on unsupervised learning
CN111428555A (en) Joint-divided hand posture estimation method
US20220180548A1 (en) Method and apparatus with object pose estimation
Liu et al. Geometrized transformer for self-supervised homography estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant