CN117475091B - High-precision 3D model generation method and system - Google Patents

High-precision 3D model generation method and system

Info

Publication number
CN117475091B
CN117475091B
Authority
CN
China
Prior art keywords
image
model
modeling
training
training image
Prior art date
Legal status
Active
Application number
CN202311815618.9A
Other languages
Chinese (zh)
Other versions
CN117475091A (en)
Inventor
陈奕
李伟
朱骥明
Current Assignee
Zhejiang Time Coordinate Technology Co ltd
Original Assignee
Zhejiang Time Coordinate Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Time Coordinate Technology Co ltd filed Critical Zhejiang Time Coordinate Technology Co ltd
Priority to CN202311815618.9A
Publication of CN117475091A
Application granted
Publication of CN117475091B
Status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0499 - Feedforward networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/09 - Supervised learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a high-precision 3D model generation method and system, comprising the following steps: collecting a training image set; labeling the training image set; acquiring characteristic parameters of the training images; training a blur-region classification model with the labeled training image set and the corresponding characteristic parameters; collecting a plurality of modeling images of a target to be modeled; acquiring characteristic parameters of the modeling images; inputting the modeling images and their corresponding characteristic parameters into the trained blur-region classification model to obtain blur-degree labeling results; performing subject segmentation on the labeled modeling images; and building a three-dimensional digital model from the segmented modeling images and the blur-degree labels. The method and system can identify the blur value of each region of an image as well as the subject part of the image, so the degree to which different regions of each image participate in modeling can be decided according to their blur values, improving the accuracy of the resulting model.

Description

High-precision 3D model generation method and system
Technical Field
The invention belongs to the technical field of three-dimensional digital model generation, and particularly relates to a high-precision 3D model generation method and system.
Background
In modern film and television production, computer-generated visual effects (VFX) are a very important component. Producing digital models is one of the basic tasks of VFX work, and the image-based scanning model is an important method of digital modeling.
The existing image-based scanning technique works roughly as follows: a digital camera shoots a large number of pictures of an object from all directions around it (360 degrees); software compares the pictures, searches for feature-point elements in them, and tracks the position data of the feature points across pictures; a point cloud of the object in three-dimensional space is then computed from the feature points; and finally a mesh model is generated from the point cloud.
The quality of a model generated by an image-based scanning method is directly related to the quality of the images used, and pictures taken directly with a digital camera under real conditions suffer from the following problems:
To obtain high image quality, pictures are usually shot with a full-frame camera and a large-aperture lens. This easily produces a shallow depth-of-field effect, leaving some pixels of the image subject blurred, which negatively affects recognition of the subject's feature points.
The captured picture content also contains a large amount of environmental information from non-subject objects. This interferes with recognition of the subject to be modeled, and the generated model is accompanied by environment geometry, which adds extra computation and subsequent work to clean up the environment model.
Disclosure of Invention
To solve the above technical problems, the invention provides a high-precision 3D model generation method and system, which specifically adopt the following technical scheme:
a high-precision 3D model generation method comprises the following steps:
collecting a training image set, wherein the image set comprises a plurality of training images;
marking each training image in the training image set;
acquiring characteristic parameters of each training image in the training image set;
training the fuzzy region classification model through the marked training image set and the corresponding characteristic parameters;
collecting a plurality of modeling images of a target to be modeled;
acquiring characteristic parameters of a plurality of modeling images;
inputting a plurality of modeling images and corresponding characteristic parameters into a trained fuzzy region classification model to obtain a fuzzy degree marking result;
performing main body segmentation on the marked modeling images;
and establishing a three-dimensional digital model through the segmented modeling image and the ambiguity marks, and determining participation degrees of different areas of the modeling image according to the ambiguity marks in the process of establishing the three-dimensional digital model.
Further, the specific method for acquiring the characteristic parameters of each training image in the training image set comprises:
calculating the singular value vector of the training image;
performing a cosine transform on the training image to obtain the number of non-zero cosine transform coefficients of the training image;
and taking the singular value vector and the non-zero cosine transform coefficient count of the training image as the characteristic parameters.
Further, the blur-region classification model is a BP (back-propagation) neural network model.
Further, the specific method for performing subject segmentation on the labeled modeling images comprises:
inputting the modeling images of the target to be modeled, together with a keyword describing the subject, into a SAM model for subject segmentation.
Further, before the acquiring of the characteristic parameters of the plurality of modeling images, the high-precision 3D model generation method further includes:
preprocessing the modeling images.
A high-precision 3D model generation system comprises:
an image acquisition module for collecting a training image set, wherein the image set comprises a plurality of training images;
an image labeling module for labeling each training image in the training image set;
a feature acquisition module for acquiring the characteristic parameters of each training image in the training image set;
a blur recognition module comprising a blur-region classification model, the model being trained with the labeled training image set and the corresponding characteristic parameters;
wherein a plurality of modeling images of a target to be modeled are collected by the image acquisition module, their characteristic parameters are acquired by the feature acquisition module, and the modeling images with their corresponding characteristic parameters are input into the blur recognition module, which processes them with the trained blur-region classification model to obtain blur-degree labeling results;
a subject segmentation module for performing subject segmentation on the labeled modeling images;
and a model generation module for building a three-dimensional digital model from the segmented modeling images and the blur-degree labels, the participation degree of different regions of each modeling image being determined according to the blur-degree labels during model building.
Further, the specific method by which the feature acquisition module acquires the characteristic parameters of each training image in the training image set comprises:
calculating the singular value vector of the training image;
performing a cosine transform on the training image to obtain the number of non-zero cosine transform coefficients of the training image;
and taking the singular value vector and the non-zero cosine transform coefficient count of the training image as the characteristic parameters.
Further, the blur-region classification model is a BP neural network model.
Further, the subject segmentation module comprises a SAM model; the modeling images of the target to be modeled and a subject keyword are input into the subject segmentation module, which performs subject segmentation on the modeling images through the SAM model.
Further, the high-precision 3D model generation system also includes:
an image processing module for preprocessing the modeling images.
The high-precision 3D model generation method and system provided by the invention have the beneficial effect that the blur value of each region of an image and the subject part of the image can both be identified, so the degree to which different regions of each image participate in modeling can be decided according to their blur values, improving the accuracy of model building.
Drawings
In order to illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a high-precision 3D model generation method of the present invention;
FIG. 2 is a schematic diagram of a BP neural network model of the present invention;
FIG. 3 is a schematic diagram of a high-precision 3D model generation system of the present invention.
Detailed Description
Embodiments of the present application are described in detail below; examples of them are illustrated in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present application and are not to be construed as limiting it.
Fig. 1 shows the high-precision 3D model generation method of the present application, which comprises the following steps. S1: collect a training image set comprising a plurality of training images. S2: label each training image in the training image set. S3: acquire the characteristic parameters of each training image in the training image set. S4: train the blur-region classification model with the labeled training image set and the corresponding characteristic parameters. S5: collect a plurality of modeling images of the target to be modeled. S6: acquire the characteristic parameters of the modeling images. S7: input the modeling images and their corresponding characteristic parameters into the trained blur-region classification model to obtain blur-degree labeling results. S8: perform subject segmentation on the labeled modeling images. S9: build a three-dimensional digital model from the segmented modeling images and the blur-degree labels, determining the participation degree of different regions of each modeling image according to the blur-degree labels during model building. With this method, the blur value of each region of an image and the subject part of the image can both be identified, so the degree to which different regions of each image participate in modeling can be decided according to their blur values, improving model accuracy. The steps are described in detail below.
For step S1: a training image set is acquired, the image set comprising a plurality of training images.
Firstly, a batch of training images for training are collected to form a training image set, and the training image set is used as a learning sample.
For step S2: each training image in the training image set is labeled.
Each training image is first labeled. In the present application, the training images are used to train the blur-region classification model described later; therefore, each training image is given a blur-degree label.
For step S3: the characteristic parameters of each training image in the training image set are acquired.
In the embodiment of the application, the specific method for acquiring the characteristic parameters of each training image in the training image set is as follows:
The singular value vector of the training image is calculated.
A cosine transform is performed on the training image to obtain the number of non-zero cosine transform coefficients of the training image.
The singular value vector and the non-zero cosine transform coefficient count of the training image are taken as the characteristic parameters.
Specifically, for an image I, its singular value decomposition can be expressed as:

$I = U \Sigma V^{T} = \sum_{i=1}^{n} \sigma_i u_i v_i^{T} = \sum_{i=1}^{n} \sigma_i E_i$

where U and V are orthogonal matrices, $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_n)$ is the diagonal singular value matrix with the singular values arranged from large to small, and $E_i = u_i v_i^{T}$ is the i-th eigen-image. An image I can therefore be seen as a superposition of n eigen-images weighted by the singular values: small singular values correspond to the small-scale detail information of the image, while large singular values correspond to its large-scale shape and structure features.
In a blurred image, information is lost not only at small scales but to different degrees across all scales. We therefore describe the degree of blur of an image block by the vector of all its singular values, i.e. $M_s = (\sigma_1, \sigma_2, \dots, \sigma_n)$. Since the singular values of image blocks in blurred regions are smaller than those in clear regions, the blur of the image can be characterized by the singular value vector Ms.
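For illustration only (this sketch is not part of the patent text), the singular value vector of a grayscale block can be computed with NumPy; the 8x8 block size is taken from the embodiment described later:

```python
import numpy as np

def singular_value_vector(block: np.ndarray) -> np.ndarray:
    """Return the singular value vector Ms of a grayscale image block.

    np.linalg.svd returns the singular values already sorted from
    large to small; for an 8x8 block this gives the 8 values of Ms.
    """
    return np.linalg.svd(block.astype(np.float64), compute_uv=False)

# Usage: a sharp block keeps noticeably more energy in its small
# (detail-scale) singular values than a blurred block does.
Ms = singular_value_vector(np.random.rand(8, 8))  # shape (8,)
```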
Secondly, after an image undergoes the discrete cosine transform (DCT), the cosine transform coefficients reflect the frequency-domain distribution of the image. For an image I of size M×N, the discrete cosine transform is:

$F(u,v) = c(u)\,c(v) \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} I(m,n) \cos\!\left[\frac{(2m+1)u\pi}{2M}\right] \cos\!\left[\frac{(2n+1)v\pi}{2N}\right]$

where

$c(u) = \begin{cases} \sqrt{1/M}, & u = 0 \\ \sqrt{2/M}, & u \neq 0 \end{cases} \qquad c(v) = \begin{cases} \sqrt{1/N}, & v = 0 \\ \sqrt{2/N}, & v \neq 0 \end{cases}$

I(m,n) is the input pixel data, u = 0, 1, ..., M−1 and v = 0, 1, ..., N−1. As u and v increase, the frequency of the corresponding cosine basis function increases, so the resulting coefficients can be regarded as projections of the original image signal onto cosine functions of increasing frequency. A blurred region contains little high-frequency information, so after the cosine transform most of its high-frequency coefficients are 0. The number of non-zero coefficients therefore reflects the degree of blur of the image, and the non-zero cosine transform coefficient count Me is chosen as the other characteristic parameter characterizing the blur condition of the image.
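A companion sketch for Me (again illustrative, not the patent's code): the 2-D DCT is taken with SciPy, and coefficients whose magnitude falls below a small tolerance are treated as zero, since a floating-point DCT rarely produces exact zeros; the tolerance value is an assumption.

```python
import numpy as np
from scipy.fft import dctn

def nonzero_dct_count(block: np.ndarray, eps: float = 1e-6) -> int:
    """Return Me, the number of non-zero 2-D DCT coefficients of a block."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    return int(np.count_nonzero(np.abs(coeffs) > eps))
```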
For step S4: the blur-region classification model is trained with the labeled training image set and the corresponding characteristic parameters.
In the present application, the blur-region classification model is a BP neural network. Specifically, as shown in fig. 2, a three-layer BP neural network is used. The input layer has 9 neurons, corresponding to the 8 values of the singular value vector Ms of an 8×8 image block plus the cosine transform (DCT) non-zero coefficient count Me. The output layer has 1 neuron, the blur value of the pixel block. The hidden layer contains 15 neurons. The maximum number of iterations is 50000, the objective function is the mean squared error with a target minimum of 0.01, the activation function is the Sigmoid function, and the learning step size is 0.1. All feature elements are normalized to the range −1 to 1 before being input; a clear image block is labeled 0 and a blurred one is labeled 1.
During training, the singular value vector Ms and the DCT non-zero coefficient count Me of a sample image block are extracted and used as the input-layer signals (x1, x2, ..., x9); these are transformed layer by layer through the hidden layer and finally reach the output layer, producing a blur value y1, which is compared with the label value. If the error does not reach the set target, forward propagation switches to backward propagation of the error: the error signal is propagated back from the output layer through the hidden layer to the input layer, the weights and biases of the neurons in each layer are updated, and the next round of forward propagation begins. This cycle repeats until the expected number of iterations is reached and model training is complete.
Specifically, in the present application, a number of images exhibiting depth-of-field blur are selected as training samples and decomposed into 8×8-pixel image blocks. The singular value vector Ms and the DCT non-zero coefficient count Me of each image block are calculated, and each image block is also given a blur label (0-1), yielding an input vector vector = (label; Ms; Me); the vectors of all image blocks are then fed to the BP neural network to train it.
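As a sketch of how the described 9-15-1 network could be trained, the snippet below uses scikit-learn's MLPRegressor as a stand-in for a hand-rolled BP implementation; the feature files and their preparation are hypothetical, and the library's stopping criteria only approximate the 0.01 mean-squared-error target in the text:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one row per 8x8 block -> [Ms (8 values), Me], scaled to [-1, 1]
# y: blur label per block, 0 (clear) to 1 (blurred)
X = np.load("block_features.npy")  # hypothetical prepared feature file
y = np.load("block_labels.npy")

net = MLPRegressor(
    hidden_layer_sizes=(15,),   # one hidden layer of 15 neurons
    activation="logistic",      # Sigmoid activation
    solver="sgd",               # plain gradient descent
    learning_rate_init=0.1,     # learning step size from the text
    max_iter=50000,             # maximum number of iterations
)
net.fit(X, y)
print(net.predict(X[:5]))       # blur estimates in roughly [0, 1]
```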
For step S5: a plurality of modeling images of the target to be modeled are collected.
For the target that needs to be modeled, a large number of high-precision photos can be shot from all angles around it, covering 360 degrees.
Preferably, the photos are preprocessed after shooting, including but not limited to correcting lens distortion, unifying color temperature, and increasing contrast.
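A hedged sketch of such preprocessing with OpenCV; the calibration data and contrast gain are placeholder assumptions, and color-temperature unification would be an additional step:

```python
import cv2

def preprocess(image_bgr, camera_matrix, dist_coeffs, alpha=1.2, beta=0):
    """Undistort a modeling photo, then raise its contrast.

    camera_matrix and dist_coeffs come from a prior lens calibration
    (e.g. cv2.calibrateCamera); alpha > 1 increases contrast and beta
    shifts brightness.
    """
    undistorted = cv2.undistort(image_bgr, camera_matrix, dist_coeffs)
    return cv2.convertScaleAbs(undistorted, alpha=alpha, beta=beta)
```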
For step S6: the characteristic parameters of the modeling images are acquired.
The singular value vector and the non-zero cosine transform coefficient count of each modeling image are obtained through the same process used for the training images.
For step S7: the modeling images and their corresponding characteristic parameters are input into the trained blur-region classification model to obtain blur-degree labeling results.
The modeling images and their corresponding characteristic parameters are input into the trained BP neural network classification model, which identifies and labels the degree of blur of each region.
Specifically, for an image of M×N resolution, we select 2×2 image blocks as the basic detection units; for each unit, the blur value of the 8×8 region centered on it is measured and taken as the blur value of that image block. For each 8×8 region, the input vector consists of the 8 values of the region's singular value vector Ms and its DCT non-zero coefficient count Me, and the output is the blur value y1 (0-1) of the image block. Each 2×2 image block of the image is analyzed in turn, left to right and top to bottom, finally yielding an estimated blur value for every region of the whole M×N image, presented as a black-and-white grayscale map.
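Pulling the earlier sketches together, the scan over 2x2 detection units might look as follows; singular_value_vector, nonzero_dct_count, and net refer to the illustrative helpers above, and the feature scaling is assumed to match training:

```python
import numpy as np

def blur_map(gray: np.ndarray, net) -> np.ndarray:
    """Estimate a blur value for every 2x2 block of a grayscale image.

    Each 2x2 detection unit is scored from the 8x8 window centered
    on it; the result can be saved as a grayscale blur map.
    """
    h, w = gray.shape
    out = np.zeros((h // 2, w // 2), dtype=np.float32)
    for by in range(out.shape[0]):          # top to bottom
        for bx in range(out.shape[1]):      # left to right
            y0 = min(max(by * 2 - 3, 0), h - 8)   # clamp 8x8 window
            x0 = min(max(bx * 2 - 3, 0), w - 8)
            win = gray[y0:y0 + 8, x0:x0 + 8]
            feats = np.concatenate(
                [singular_value_vector(win), [nonzero_dct_count(win)]]
            )
            # assumed: scale feats to [-1, 1] exactly as during training
            out[by, bx] = net.predict(feats.reshape(1, -1))[0]
    return out
```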
For step S8: subject segmentation is performed on the labeled modeling images.
In the embodiment of the application, the specific method for performing subject segmentation on the labeled modeling images is as follows:
The modeling image of the target to be modeled and a keyword describing the subject are input into an image semantic recognition model for subject segmentation; a subject part is thus recognized in each modeling image of the target and segmented out, separating it from the background part. For example, if the target to be modeled is a vase, the modeling image and the keyword "vase" are input, and the image semantic recognition model automatically segments out the vase, the subject of the image.
In the present application, the image semantic recognition model adopts the existing SAM (Segment Anything Model). Specifically, the SAM model mainly consists of three parts: an image encoder, a prompt encoder, and a mask decoder. The image encoder uses an MAE pre-trained Vision Transformer (ViT) and runs once per image, before the prompt is encoded. The prompt encoder processes prompt words in text form using the text encoder of the existing CLIP model. The mask decoder is a bidirectional Transformer decoder that applies self-attention and cross-attention between the prompt and the image. SAM is an existing image semantic recognition model, so its structure and principle are not described further here. The image to be segmented and the prompt word for the image subject are input into the SAM model, and the SAM model outputs the mask of the subject in the image.
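For reference, a minimal sketch of calling the released segment-anything package; the public release accepts point or box prompts rather than raw text, so the keyword-to-prompt step described in the patent is abstracted here into a box assumed to cover the subject (the checkpoint path and model type are also assumptions):

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

def subject_mask(image_rgb: np.ndarray, box_xyxy: np.ndarray) -> np.ndarray:
    """Return a boolean subject mask for one modeling image.

    box_xyxy is [x0, y0, x1, y1]; in the patent's pipeline it would be
    derived from the subject keyword (e.g. by a text-grounding model),
    which the released SAM does not do on its own.
    """
    predictor.set_image(image_rgb)      # image encoder runs once here
    masks, _, _ = predictor.predict(box=box_xyxy, multimask_output=False)
    return masks[0]                     # (H, W) boolean mask
```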
For step S9: a three-dimensional digital model is built from the segmented modeling images and the blur-degree labels, with the participation degree of different regions of each modeling image determined according to the blur-degree labels during model building.
Digital modeling is performed on the subject part segmented out of each image, with regions selected or down-weighted according to their degree of blur to determine their participation. Specifically, the background part is not selected: its weight is 0 and it is excluded from computation entirely. The subject part is given a weight of 0.1-1 according to its clarity: a clear subject region has weight 1, and an unclear region is assigned 0.1-0.9 according to its clarity. Point cloud computation is then performed using the weights assigned to the recognized image regions. After the point cloud of the model subject has been generated, a three-dimensional mesh is computed from the distribution of the point cloud in three-dimensional space and finally output in OBJ format.
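The weighting rule can be sketched as below, combining the subject mask with a blur map resampled to image resolution; the linear mapping of blur values onto the 0.1-0.9 band is one plausible reading of the scheme described above:

```python
import numpy as np

def region_weights(subject_mask: np.ndarray, blur: np.ndarray,
                   clear_threshold: float = 0.1) -> np.ndarray:
    """Per-pixel weights for point cloud computation.

    Background: weight 0, excluded entirely. Clear subject (blur value
    below the threshold): weight 1. Blurred subject: mapped linearly
    into [0.1, 0.9], blurrier regions weighted lower. The threshold is
    an assumption; blur and subject_mask must share one resolution.
    """
    w = np.zeros_like(blur, dtype=np.float32)
    subject = subject_mask.astype(bool)
    clear = subject & (blur < clear_threshold)
    soft = subject & ~clear
    w[clear] = 1.0
    w[soft] = 0.9 - 0.8 * blur[soft]   # blur 0 -> 0.9, blur 1 -> 0.1
    return w
```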
Fig. 3 shows the high-precision 3D model generation system of the present application, which implements the method described above. The high-precision 3D model generation system comprises: an image acquisition module, an image labeling module, a feature acquisition module, a blur recognition module, a subject segmentation module, and a model generation module.
Specifically, the image acquisition module collects the training image set, which comprises a plurality of training images. The image labeling module labels each training image in the training image set. The feature acquisition module acquires the characteristic parameters of each training image in the training image set. The blur recognition module comprises the blur-region classification model and trains it with the labeled training image set and the corresponding characteristic parameters. A plurality of modeling images of the target to be modeled are collected by the image acquisition module, their characteristic parameters are acquired by the feature acquisition module, and the modeling images with their corresponding characteristic parameters are input into the blur recognition module, which processes them with the trained blur-region classification model to obtain blur-degree labeling results. The subject segmentation module performs subject segmentation on the labeled modeling images. The model generation module builds the three-dimensional digital model from the segmented modeling images and the blur-degree labels, determining the participation degree of different regions of each modeling image according to the blur-degree labels during model building.
As a preferred embodiment, the specific method by which the feature acquisition module acquires the characteristic parameters of each training image in the training image set is as follows:
The singular value vector of the training image is calculated.
A cosine transform is performed on the training image to obtain the number of non-zero cosine transform coefficients of the training image.
The singular value vector and the non-zero cosine transform coefficient count of the training image are taken as the characteristic parameters.
As a preferred embodiment, the blur-region classification model is a BP neural network model.
As a preferred embodiment, the subject segmentation module comprises a SAM model; the modeling images of the target to be modeled and a subject keyword are input into the subject segmentation module, which performs subject segmentation on the modeling images through the SAM model.
As a preferred embodiment, the high-precision 3D model generation system further comprises an image processing module for preprocessing the modeling images.
For the specific details of each module of the high-precision 3D model generation system, refer to the high-precision 3D model generation method described above; they are not repeated here.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. Persons skilled in the art will appreciate that the above embodiments do not limit the invention in any way, and that all technical solutions obtained by equivalent substitution or equivalent transformation fall within the scope of the invention.

Claims (6)

1. A high-precision 3D model generation method, characterized by comprising the following steps:
collecting a training image set, wherein the image set comprises a plurality of training images;
labeling each training image in the training image set;
acquiring characteristic parameters of each training image in the training image set;
training a blur-region classification model with the labeled training image set and the corresponding characteristic parameters;
collecting a plurality of modeling images of a target to be modeled;
acquiring characteristic parameters of the plurality of modeling images;
inputting the plurality of modeling images and their corresponding characteristic parameters into the trained blur-region classification model to obtain blur-degree labeling results;
performing subject segmentation on the labeled modeling images;
building a three-dimensional digital model from the segmented modeling images and the blur-degree labels, wherein the background part is not selected and is given weight 0, excluding it from computation; the subject part is given a weight of 0.1-1 according to clarity, a clear subject part having weight 1 and an unclear part being assigned 0.1-0.9 according to its clarity; the participation degree of different regions of each modeling image is determined according to the blur-degree labels during model building; and point cloud computation is performed using the weights assigned to the recognized image regions;
wherein the specific method for acquiring the characteristic parameters of each training image in the training image set comprises:
calculating the singular value vector of the training image;
performing a cosine transform on the training image to obtain the number of non-zero cosine transform coefficients of the training image;
and taking the singular value vector and the non-zero cosine transform coefficient count of the training image as the characteristic parameters;
and the specific method for performing subject segmentation on the labeled modeling images comprises:
inputting the modeling images of the target to be modeled and a subject keyword into a SAM model for subject segmentation.
2. The high-precision 3D model generation method according to claim 1, wherein the blur-region classification model is a BP neural network model.
3. The high-precision 3D model generation method according to claim 1, wherein before the acquiring of the characteristic parameters of the plurality of modeling images, the method further comprises:
preprocessing the modeling images.
4. A high-precision 3D model generation system, characterized by comprising:
an image acquisition module for collecting a training image set, wherein the image set comprises a plurality of training images;
an image labeling module for labeling each training image in the training image set;
a feature acquisition module for acquiring the characteristic parameters of each training image in the training image set;
a blur recognition module comprising a blur-region classification model trained with the labeled training image set and the corresponding characteristic parameters;
wherein a plurality of modeling images of a target to be modeled are collected by the image acquisition module, their characteristic parameters are acquired by the feature acquisition module, and the modeling images with the corresponding characteristic parameters are input into the blur recognition module, which processes them with the trained blur-region classification model to obtain blur-degree labeling results;
a subject segmentation module for performing subject segmentation on the labeled modeling images;
a model generation module for building a three-dimensional digital model from the segmented modeling images and the blur-degree labels, wherein the background part is not selected and is given weight 0, excluding it from computation; the subject part is given a weight of 0.1-1 according to clarity, a clear subject part having weight 1 and an unclear part being assigned 0.1-0.9 according to its clarity; the participation degree of different regions of each modeling image is determined according to the blur-degree labels during model building; and point cloud computation is performed using the weights assigned to the recognized image regions;
wherein the specific method by which the feature acquisition module acquires the characteristic parameters of each training image in the training image set comprises:
calculating the singular value vector of the training image;
performing a cosine transform on the training image to obtain the number of non-zero cosine transform coefficients of the training image;
and taking the singular value vector and the non-zero cosine transform coefficient count of the training image as the characteristic parameters;
and the subject segmentation module comprises a SAM model; the modeling images of the target to be modeled and a subject keyword are input into the subject segmentation module, which performs subject segmentation on the modeling images through the SAM model.
5. The high-precision 3D model generation system according to claim 4, wherein the blur-region classification model is a BP neural network model.
6. The high-precision 3D model generation system according to claim 4, further comprising:
an image processing module for preprocessing the modeling images.
CN202311815618.9A 2023-12-27 2023-12-27 High-precision 3D model generation method and system Active CN117475091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311815618.9A CN117475091B (en) 2023-12-27 2023-12-27 High-precision 3D model generation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311815618.9A CN117475091B (en) 2023-12-27 2023-12-27 High-precision 3D model generation method and system

Publications (2)

Publication Number Publication Date
CN117475091A (en) 2024-01-30
CN117475091B (en) 2024-03-22

Family

ID=89635100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311815618.9A Active CN117475091B (en) 2023-12-27 2023-12-27 High-precision 3D model generation method and system

Country Status (1)

Country Link
CN (1) CN117475091B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050060802A (en) * 2003-12-17 2005-06-22 한국전자통신연구원 Apparatus and method for satellite image classification based on optimization algorithm of fuzzy and evolutionary computation
CN101604376A (en) * 2008-10-11 2009-12-16 大连大学 Face identification method based on the HMM-SVM mixture model
CN105741317A (en) * 2016-01-20 2016-07-06 内蒙古科技大学 Infrared moving target detection method based on time-space domain saliency analysis and sparse representation
CN108268814A (en) * 2016-12-30 2018-07-10 广东精点数据科技股份有限公司 A kind of face identification method and device based on the fusion of global and local feature Fuzzy
CN109035196A (en) * 2018-05-22 2018-12-18 安徽大学 Image local fuzzy detection method based on conspicuousness
CN108846800A (en) * 2018-05-30 2018-11-20 武汉大学 A kind of non-reference picture quality appraisement method of image super-resolution rebuilding
WO2021088640A1 (en) * 2019-11-06 2021-05-14 重庆邮电大学 Facial recognition technology based on heuristic gaussian cloud transformation
CN111161217A (en) * 2019-12-10 2020-05-15 中国民航大学 Conv-LSTM multi-scale feature fusion-based fuzzy detection method
WO2021179471A1 (en) * 2020-03-09 2021-09-16 苏宁易购集团股份有限公司 Face blur detection method and apparatus, computer device and storage medium
WO2023082870A1 (en) * 2021-11-10 2023-05-19 腾讯科技(深圳)有限公司 Training method and apparatus for image segmentation model, image segmentation method and apparatus, and device
CN116740501A (en) * 2023-06-14 2023-09-12 城云科技(中国)有限公司 Training method and application of image blurring region restoration compensation model
CN117291930A (en) * 2023-08-25 2023-12-26 中建三局第三建设工程有限责任公司 Three-dimensional reconstruction method and system based on target object segmentation in picture sequence
CN117132973A (en) * 2023-10-27 2023-11-28 武汉大学 Method and system for reconstructing and enhancing visualization of surface environment of extraterrestrial planet

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A robust color image watermarking algorithm based on 3D-DCT and SVD; Xiong Xiang-guang, Wei Li, Xie Gang; Computer Engineering & Science; Dec. 2015; Vol. 37, No. 6; pp. 1093-1100 *
Image local blur measurement based on BP neural network (基于BP神经网络的图像局部模糊测量); Huang Shanchun, Fang Xianyong, Zhou Jian, Shen Feng; Journal of Image and Graphics (中国图象图形学报); Jan. 2015; Vol. 20, No. 1; Sections 1-2 *

Also Published As

Publication number Publication date
CN117475091A (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN111160297B (en) Pedestrian re-identification method and device based on residual attention mechanism space-time combined model
CN108416266B (en) Method for rapidly identifying video behaviors by extracting moving object through optical flow
CN109711366B (en) Pedestrian re-identification method based on group information loss function
CN110246181B (en) Anchor point-based attitude estimation model training method, attitude estimation method and system
CN109903299B (en) Registration method and device for heterogenous remote sensing image of conditional generation countermeasure network
CN112200057A (en) Face living body detection method and device, electronic equipment and storage medium
CN107767358B (en) Method and device for determining ambiguity of object in image
CN107766864B (en) Method and device for extracting features and method and device for object recognition
CN114693661A (en) Rapid sorting method based on deep learning
CN111833360B (en) Image processing method, device, equipment and computer readable storage medium
US20240161304A1 (en) Systems and methods for processing images
CN114332639A (en) Satellite attitude vision measurement algorithm of nonlinear residual error self-attention mechanism
CN111680573B (en) Face recognition method, device, electronic equipment and storage medium
CN114626476A (en) Bird fine-grained image recognition method and device based on Transformer and component feature fusion
Jia et al. Effective meta-attention dehazing networks for vision-based outdoor industrial systems
CN116958420A (en) High-precision modeling method for three-dimensional face of digital human teacher
CN109978897B (en) Registration method and device for heterogeneous remote sensing images of multi-scale generation countermeasure network
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN112070181B (en) Image stream-based cooperative detection method and device and storage medium
CN114548253A (en) Digital twin model construction system based on image recognition and dynamic matching
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN113762009B (en) Crowd counting method based on multi-scale feature fusion and double-attention mechanism
CN113989612A (en) Remote sensing image target detection method based on attention and generation countermeasure network
CN117133041A (en) Three-dimensional reconstruction network face recognition method, system, equipment and medium based on deep learning
CN117475091B (en) High-precision 3D model generation method and system

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant