CN115578523A - Tongue three-dimensional modeling method and system for multi-angle image fusion

Info

Publication number
CN115578523A
Authority
CN
China
Prior art keywords
preset
tongue
model
point cloud
coefficient
Prior art date
Legal status
Granted
Application number
CN202211451768.1A
Other languages
Chinese (zh)
Other versions
CN115578523B (en)
Inventor
Zhang Shuai (张帅)
Zhao Liang (赵亮)
Liu Chang (刘畅)
Li Xiaobo (李晓波)
Current Assignee
Huiyigu Traditional Chinese Medicine Technology Tianjin Co ltd
Original Assignee
Huiyigu Traditional Chinese Medicine Technology Tianjin Co ltd
Priority date
Filing date
Publication date
Application filed by Huiyigu Traditional Chinese Medicine Technology Tianjin Co ltd filed Critical Huiyigu Traditional Chinese Medicine Technology Tianjin Co ltd
Priority to CN202211451768.1A
Publication of CN115578523A
Application granted
Publication of CN115578523B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Image registration using feature-based methods
    • G06T 7/344: Image registration using feature-based methods involving models
    • G06T 9/00: Image coding
    • G06T 9/001: Model-based coding, e.g. wire frame
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20052: Discrete cosine transform [DCT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a tongue three-dimensional modeling method and system for multi-angle image fusion, relating to the technical field of computer applications. The method comprises the following steps: acquiring a preset tongue through video acquisition equipment to obtain a preset tongue video; establishing a preset key frame set of the preset tongue; constructing an intelligent registration fusion model; acquiring a point cloud data set and analyzing it with the intelligent registration fusion model to obtain output information; extracting a model parameter estimation result and obtaining a preset point cloud model; obtaining basic characteristic parameters of the video acquisition equipment, and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result; and taking the texture mapping result as the three-dimensional model of the preset tongue. This solves the problems in the prior art of low examination efficiency and non-objective judgment when tongue images are observed with the naked eye. The method achieves the goal of constructing a three-dimensional model of the preset tongue and attains the technical effects of objectification and digitization of tongue image information.

Description

Tongue three-dimensional modeling method and system for multi-angle image fusion
Technical Field
The invention relates to the technical field of computer application, in particular to a tongue three-dimensional modeling method and a tongue three-dimensional modeling system for multi-angle image fusion.
Background
Tongue diagnosis refers to observing changes in the color, form, and moistness or dryness of the tongue to aid disease diagnosis and syndrome differentiation. It is an important part of inspection among the four diagnostic methods of traditional Chinese medicine (inspection, auscultation and olfaction, inquiry, and palpation), and one of the most distinctive features of Chinese medicine diagnostics. In the traditional diagnostic process, doctors observe the tongue images of patients only with the naked eye and, after subjective analysis, use the observation to assist in forming disease decisions. Observing patients' tongues one by one is inefficient, which in turn lowers the doctor's diagnostic efficiency; meanwhile, the influence of a doctor's subjective factors makes the resulting judgments less objective and can even affect the reliability and effectiveness of disease assessment. Therefore, research on using computer technology to intelligently acquire tongue information from patients, so as to provide reliable and intuitive tongue image data for doctors' diagnostic decisions, is of great significance for improving diagnostic efficiency.
However, in the prior art, tongue image analysis is performed only through a doctor's naked-eye observation to assist in generating disease decisions for the patient, which results in the technical problems of low observation efficiency and non-objective judgment, and in turn affects the reliability and effectiveness of the disease decisions.
Disclosure of Invention
The invention aims to provide a tongue three-dimensional modeling method and system for multi-angle image fusion, so as to solve the technical problems in the prior art that the tongue image of a patient is analyzed only through a doctor's naked-eye observation to assist in generating disease decisions, resulting in low observation efficiency and non-objective judgment, which in turn affect the reliability and effectiveness of the disease decisions.
In view of the above problems, the present invention provides a tongue three-dimensional modeling method and system for multi-angle image fusion.
In a first aspect, the present invention provides a multi-angle image fused tongue three-dimensional modeling method, implemented by a multi-angle image fused tongue three-dimensional modeling system, wherein the method includes: acquiring video information of a preset tongue through video acquisition equipment to obtain a preset tongue video; establishing a preset key frame set of the preset tongue based on the preset tongue video, wherein the preset key frame set comprises a plurality of key frame images; constructing an intelligent registration fusion model using the random sampling consistency algorithm principle; obtaining point cloud data sets of the plurality of key frame images, using the point cloud data sets as input information of the intelligent registration fusion model, and obtaining output information through analysis by the intelligent registration fusion model; extracting a model parameter estimation result from the output information, and obtaining a preset point cloud model based on the model parameter estimation result; obtaining basic characteristic parameters of the video acquisition equipment, and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result; and taking the texture mapping result as the three-dimensional model of the preset tongue.
In a second aspect, the present invention further provides a multi-angle image fused tongue three-dimensional modeling system for performing the multi-angle image fused tongue three-dimensional modeling method according to the first aspect, wherein the system includes: an acquisition module, configured to acquire video information of a preset tongue through video acquisition equipment to obtain a preset tongue video; an extraction module, configured to establish a preset key frame set of the preset tongue based on the preset tongue video, where the preset key frame set includes a plurality of key frame images; a construction module, configured to construct an intelligent registration fusion model using the random sampling consistency algorithm principle; a processing module, configured to obtain point cloud data sets of the plurality of key frame images, use the point cloud data sets as input information of the intelligent registration fusion model, and obtain output information through analysis by the intelligent registration fusion model; an obtaining module, configured to extract a model parameter estimation result from the output information and obtain a preset point cloud model based on the model parameter estimation result; a mapping module, configured to obtain basic characteristic parameters of the video acquisition equipment and perform texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result; and an obtaining module, configured to take the texture mapping result as the three-dimensional model of the preset tongue.
One or more technical schemes provided by the invention at least have the following technical effects or advantages:
acquiring video information of the preset tongue through the video acquisition equipment to obtain a preset tongue video; establishing a preset key frame set of the preset tongue based on the preset tongue video, wherein the preset key frame set comprises a plurality of key frame images; constructing an intelligent registration fusion model using the random sampling consistency algorithm principle; obtaining point cloud data sets of the key frame images, using the point cloud data sets as input information of the intelligent registration fusion model, and obtaining output information through analysis by the intelligent registration fusion model; extracting a model parameter estimation result from the output information, and obtaining a preset point cloud model based on the model parameter estimation result; obtaining basic characteristic parameters of the video acquisition equipment, and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result; and taking the texture mapping result as the three-dimensional model of the preset tongue. Acquiring the video information of the preset tongue through the video acquisition equipment achieves the technical aim of providing a comprehensive and reliable information basis for the subsequent construction of the three-dimensional model of the preset tongue. By intelligently analyzing the preset tongue video collected by the video acquisition equipment and extracting the key frame images of the tongue in the video, the amount of information the system must process is reduced while the processed data remain comprehensive, accurate and effective, thereby improving the efficiency of data processing. Further, by means of the intelligent registration fusion model, after intelligent registration of the key frame images of the preset tongue, the parameter estimation of the three-dimensional model of the preset tongue is obtained, the preset point cloud model of the preset tongue is constructed, and the preliminary modeling of the preset tongue is realized. Then, combining the basic characteristic parameters of the video acquisition equipment, the textures of the key frame images of the preset tongue are mapped onto the preset point cloud model, realizing the texture mapping of the preset point cloud model and finally obtaining the three-dimensional model of the preset tongue. By processing and analyzing the image information of the preset tongue, the goal of constructing its three-dimensional model is realized, and the technical effects of objectification and digitization of tongue image information are achieved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only exemplary, and for those skilled in the art, other drawings can be obtained according to the provided drawings without inventive effort.
FIG. 1 is a schematic flow chart of a tongue three-dimensional modeling method for multi-angle image fusion according to the present invention;
FIG. 2 is a schematic flow chart of establishing the preset key frame set according to the analysis result in the tongue three-dimensional modeling method for multi-angle image fusion of the present invention;
FIG. 3 is a schematic flow chart of drawing the preset characteristic difference curve according to the preset characteristic difference values in the tongue three-dimensional modeling method for multi-angle image fusion of the present invention;
FIG. 4 is a schematic flow chart of the output information obtained by the analysis of the intelligent registration fusion model in the tongue three-dimensional modeling method of multi-angle image fusion of the present invention;
FIG. 5 is a schematic structural diagram of a tongue three-dimensional modeling system for multi-angle image fusion according to the present invention.
Description of reference numerals:
the acquisition module M100, the extraction module M200, the construction module M300, the processing module M400, the obtaining module M500, the mapping module M600, and the obtaining module M700.
Detailed Description
The invention provides a tongue three-dimensional modeling method and system for multi-angle image fusion, solving the technical problems in the prior art that tongue image analysis relies only on a doctor's naked-eye observation to assist in generating disease decisions, with low observation efficiency, non-objective judgment, and reduced reliability and effectiveness of the disease decisions. By processing and analyzing the image information of the preset tongue, the goal of constructing a three-dimensional model of the preset tongue is realized, and the technical effects of objectification and digitization of tongue image information are achieved.
In the technical scheme of the invention, the data acquisition, storage, use, processing and the like all conform to relevant regulations of national laws and regulations.
In the following, the technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings. It should be understood that the described embodiments are only a part of the embodiments of the present invention, not all of them, and that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention. It should be further noted that, for convenience of description, only some but not all of the elements associated with the present invention are shown in the drawings.
Example one
Referring to fig. 1, the present invention provides a tongue three-dimensional modeling method for multi-angle image fusion, wherein the method is applied to a tongue three-dimensional modeling system for multi-angle image fusion, the tongue three-dimensional modeling system is in communication connection with a video capture device, and the method specifically includes the following steps:
step S100: acquiring a preset tongue, and acquiring video information of the preset tongue through the video acquisition equipment to obtain a preset tongue video;
specifically, the multi-angle image fused tongue three-dimensional modeling method is applied to a multi-angle image fused tongue three-dimensional modeling system, and can be used for building a three-dimensional model of a preset tongue through a computer technology, so that the datamation and the objectification of the information of the tongue image of the preset tongue are realized, a reliable data basis is provided for a doctor to judge and analyze the tongue image, and the judgment and decision efficiency of the doctor is improved. The preset tongue refers to any tongue which needs to acquire tongue picture information through the tongue three-dimensional modeling system so as to construct a three-dimensional model. The video acquisition equipment is depth of field camera equipment and is used for acquiring multi-angle and multi-distance video information of the preset tongue part. In addition, due to the communication connection between the video acquisition equipment and the tongue three-dimensional modeling system, after the preset tongue video of the preset tongue is acquired by the video acquisition equipment, the video is transmitted to the tongue three-dimensional modeling system in real time through a communication channel. The preset tongue video is acquired through collection, so that the technical goal of providing comprehensive, reliable and sufficient digital information basis for the subsequent construction of the three-dimensional model of the preset tongue is achieved, and the technical effect of improving the construction accuracy and the practicability of the three-dimensional model is further achieved.
Step S200: establishing a preset key frame set of the preset tongue part based on the preset tongue part video, wherein the preset key frame set comprises a plurality of key frame images;
further, as shown in fig. 2, step S200 of the present invention further includes:
step S210: encoding the preset tongue video by utilizing a dynamic image expert group, and establishing an encoding unit set according to an encoding result, wherein the encoding unit set comprises a plurality of encoding units;
step S220: sequentially extracting a plurality of I frame images in the plurality of coding units, sequentially decoding the plurality of I frame images, and calculating and drawing a preset characteristic difference curve according to a decoding result;
further, as shown in fig. 3, step S220 of the present invention further includes:
step S221: sequentially extracting a first I frame image and a second I frame image in the I frame images, wherein the first I frame image and the second I frame image are adjacent frame images;
step S222: sequentially performing discrete cosine transform on the first I frame image and the second I frame image to respectively obtain a first discrete cosine transform coefficient and a second discrete cosine transform coefficient;
step S223: calculating to obtain a first characteristic value of the first I frame image based on the first discrete cosine transform coefficient, and calculating to obtain a second characteristic value of the second I frame image based on the second discrete cosine transform coefficient;
further, the invention also comprises the following steps:
step S2231: extracting a first direct current coefficient and a first alternating current coefficient in the first discrete cosine transform coefficient;
step S2232: and calculating to obtain the first characteristic value according to the first direct current coefficient and the first alternating current coefficient, wherein a calculation formula is as follows:
$$F_n = a\sum_{i=1}^{M} DC_n(i) + b\sum_{i=1}^{M} AC_n(i)$$

step S2233: wherein $F_n$ refers to the first characteristic value, $DC_n(i)$ and $AC_n(i)$ refer to the first direct current coefficient and the first alternating current coefficient of the $i$-th sub-block of the first I frame image $n$, $M$ refers to the number of sub-blocks, $a$ is the influence factor of the first direct current coefficient on the first characteristic value, and $b$ is the influence factor of the first alternating current coefficient on the first characteristic value;

step S2234: extracting a second direct current coefficient and a second alternating current coefficient in the second discrete cosine transform coefficient;

step S2235: and calculating to obtain the second characteristic value according to the second direct current coefficient and the second alternating current coefficient, wherein a calculation formula is as follows:

$$F_{n+1} = c\sum_{i=1}^{M} DC_{n+1}(i) + d\sum_{i=1}^{M} AC_{n+1}(i)$$

step S2236: wherein $F_{n+1}$ refers to the second characteristic value, $DC_{n+1}(i)$ and $AC_{n+1}(i)$ refer to the second direct current coefficient and the second alternating current coefficient of the $i$-th sub-block of the second I frame image $n+1$, $c$ is the influence factor of the second direct current coefficient on the second characteristic value, and $d$ is the influence factor of the second alternating current coefficient on the second characteristic value.
Step S224: subtracting the first characteristic value and the second characteristic value to obtain a preset characteristic difference value of the first I frame image and the second I frame image;
further, the invention also comprises the following steps:
step S2241: and calculating to obtain the preset characteristic difference value according to the first characteristic value and the second characteristic value, wherein the calculation formula is as follows:
$$D_{n,n+1} = \left| F_{n+1} - F_n \right|$$

step S2242: wherein $D_{n,n+1}$ refers to the preset characteristic difference value.
Step S225: and drawing to obtain the preset characteristic difference curve according to the preset characteristic difference.
Step S230: and denoising to obtain a denoising processing result of the preset characteristic difference curve, analyzing the denoising processing result, and establishing the preset key frame set according to the analysis result.
Specifically, after the preset tongue video is acquired and transmitted to the tongue three-dimensional modeling system in real time, the tongue three-dimensional modeling system performs intelligent processing on the preset tongue video, and then extracts and obtains a key frame image of the preset tongue in the preset tongue video, namely, the preset key frame set is obtained, so that an image data basis is provided for three-dimensional modeling of the preset tongue.
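Before the frame analysis described next, the system must locate the I frames of the compressed video. As a minimal sketch (not part of the patent text), the I frame positions of a captured file can be listed with ffprobe; the file name is an assumption:

```python
import json
import subprocess

def list_i_frame_indices(video_path: str) -> list[int]:
    """Return the indices of frames whose picture type is 'I'."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "frame=pict_type",
        "-of", "json",
        video_path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    frames = json.loads(out.stdout)["frames"]
    return [i for i, f in enumerate(frames) if f.get("pict_type") == "I"]

# Example: list_i_frame_indices("tongue.mp4") -> [0, 12, 24, ...]
```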
Firstly, the preset tongue video is encoded and compressed using the Moving Picture Experts Group (MPEG) standard, and a plurality of coding units of the preset tongue video are obtained from the compression result. The MPEG standard is the most widely used international video compression standard. Then, the I frame images in the plurality of coding units are extracted in turn to construct the plurality of I frame images. A coding unit of the MPEG standard comprises three types of image frames, namely I frames, B frames and P frames. Next, based on the plurality of I frame images, a first I frame image and a second I frame image are obtained, where the first I frame image refers to any one of the I frame images, and the first I frame image and the second I frame image are adjacent frame images. Discrete cosine transform is then performed on the first I frame image and the second I frame image in turn to obtain a first discrete cosine transform coefficient and a second discrete cosine transform coefficient, respectively. That is, each image is divided into a plurality of image blocks, and each image block is transformed in turn to obtain the DCT (discrete cosine transform) coefficients of each block. The DCT coefficients include direct current (DC) coefficients, related to the base tone of the image, and alternating current (AC) coefficients, related to the image texture. Then, the first characteristic value of the first I frame image is calculated based on the first discrete cosine transform coefficient, and the second characteristic value of the second I frame image is calculated based on the second discrete cosine transform coefficient. That is to say, the first direct current coefficient and the first alternating current coefficient in the first discrete cosine transform coefficient are extracted, and the first characteristic value is calculated from them, where the calculation formula is:
$$F_n = a\sum_{i=1}^{M} DC_n(i) + b\sum_{i=1}^{M} AC_n(i)$$

wherein $F_n$ refers to the first characteristic value, $DC_n(i)$ and $AC_n(i)$ refer to the first direct current coefficient and the first alternating current coefficient of the $i$-th sub-block of the first I frame image $n$, $M$ refers to the number of sub-blocks, $a$ refers to the influence factor of the first direct current coefficient on the first characteristic value, and $b$ refers to the influence factor of the first alternating current coefficient on the first characteristic value. Furthermore, since the main energy of the image is concentrated in the DC coefficients, i.e. the DC coefficients carry most of the information of the compressed video, the influence factor $a$ of the direct current coefficient is larger than the influence factor $b$ of the alternating current coefficient.
Similarly, a second direct current coefficient and a second alternating current coefficient in the second discrete cosine transform coefficient are extracted, and the second characteristic value is obtained through calculation according to the second direct current coefficient and the second alternating current coefficient, wherein the calculation formula is as follows:
$$F_{n+1} = c\sum_{i=1}^{M} DC_{n+1}(i) + d\sum_{i=1}^{M} AC_{n+1}(i)$$

wherein $F_{n+1}$ refers to the second characteristic value, $DC_{n+1}(i)$ and $AC_{n+1}(i)$ refer to the second direct current coefficient and the second alternating current coefficient of the $i$-th sub-block of the second I frame image $n+1$, $c$ refers to the influence factor of the second direct current coefficient on the second characteristic value, and $d$ refers to the influence factor of the second alternating current coefficient on the second characteristic value. Likewise, the influence factor $c$ of the direct current coefficient is greater than the influence factor $d$ of the alternating current coefficient. Since the first I frame image and the second I frame image are adjacent frame images, the two I frame images are denoted by $n$ and $n+1$, respectively.
Further, according to the first feature value and the second feature value obtained by calculation, a preset feature difference value between the first I-frame image and the second I-frame image is obtained by subtraction calculation, wherein a calculation formula is as follows:
$$D_{n,n+1} = \left| F_{n+1} - F_n \right|$$

wherein $D_{n,n+1}$ is the preset characteristic difference value. Finally, the preset characteristic difference curve is drawn from the preset characteristic difference values.
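The computation above can be sketched in a few lines of Python. This is an illustrative reading of the scheme, assuming 8x8 grayscale DCT blocks, an AC term taken as the summed magnitude of the non-DC coefficients, and weights a > b; none of these concrete choices is fixed by the text:

```python
import cv2
import numpy as np

def frame_feature(gray: np.ndarray, a: float = 0.7, b: float = 0.3) -> float:
    """Characteristic value F_n of one I frame from its 8x8 block DCT coefficients."""
    h, w = gray.shape
    h8, w8 = h - h % 8, w - w % 8                  # crop to a multiple of the block size
    img = gray[:h8, :w8].astype(np.float32)
    dc_sum = ac_sum = 0.0
    for y in range(0, h8, 8):
        for x in range(0, w8, 8):
            coeffs = cv2.dct(img[y:y + 8, x:x + 8])
            dc_sum += coeffs[0, 0]                              # DC: image base tone
            ac_sum += np.abs(coeffs).sum() - abs(coeffs[0, 0])  # AC: image texture
    return a * dc_sum + b * ac_sum

def feature_difference_curve(i_frames: list) -> np.ndarray:
    """D(n, n+1) = |F(n+1) - F(n)| over consecutive I frames (grayscale arrays)."""
    values = [frame_feature(f) for f in i_frames]
    return np.abs(np.diff(values))
```

Applying feature_difference_curve to the decoded I frames yields the preset characteristic difference curve that is denoised in the next step.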
Furthermore, the drawn preset characteristic difference curve is subjected to noise reduction, so that interfering data factors in the curve are removed, providing a comprehensive, reliable and effective curve basis for the subsequent extraction of image key frames. Since the frames within a single shot may change little in content, the feature difference between adjacent frames fluctuates only slightly; such small fluctuations merely indicate that differences between frames exist and contribute little to the detection of shot boundaries. The small fluctuations in the preset characteristic difference curve therefore need to be removed, i.e. the curve is preprocessed to remove noise in preparation for the subsequent work. Illustratively, the removal of interference factors in the preset characteristic difference curve is realized by wavelet transform: threshold denoising removes the interference present in the data while retaining only the peak points. Finally, the denoising result is analyzed to establish the preset key frame set.
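A minimal sketch of this denoising step, assuming the PyWavelets library, a db4 wavelet, soft thresholding, and the universal threshold rule (the text fixes none of these choices):

```python
import numpy as np
import pywt

def denoise_curve(curve: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Suppress small fluctuations in the difference curve, keeping only the peaks."""
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    # Noise scale estimated from the finest-level detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(curve)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(curve)]
```

The I frames whose denoised difference values remain as peak points are then taken as the key frame candidates.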
By intelligently analyzing the preset tongue video collected by the video collecting equipment and extracting the key frame image of the tongue in the video, the reduction of the system processing information amount is realized on the basis of ensuring the comprehensive, accurate and effective system processing data, thereby improving the efficiency of the system processing data.
Step S300: constructing an intelligent registration fusion model by using a random sampling consistency algorithm principle;
step S400: acquiring point cloud data sets of the key frame images, using the point cloud data sets as input information of the intelligent registration fusion model, and analyzing the intelligent registration fusion model to obtain output information;
further, as shown in fig. 4, step S400 of the present invention further includes:
step S410: the intelligent registration fusion model randomly samples the point cloud data set to obtain a first sample set, and obtains a first parameter estimation result of the first model based on the first sample set;
step S420: removing the first sample set from the point cloud data set to obtain a first non-sample set, wherein the first non-sample set comprises a plurality of non-sample point cloud data;
step S430: sequentially calculating the distances from the non-sample point cloud data to the first model, and screening the non-sample point cloud data by combining a preset distance threshold to obtain a first consistency point set;
step S440: calculating a first data volume in the first consistency point set, and judging whether the first data volume meets a preset number threshold;
further, the invention also comprises the following steps:
step S441: if the first data volume does not meet the preset number threshold, obtaining a first sampling instruction;
step S442: and according to the first sampling instruction, the intelligent registration fusion model carries out random sampling from the point cloud data set.
Step S450: if the first data volume meets the preset number threshold, obtaining a first re-estimation instruction;
step S460: obtaining a second parameter estimation result of the first model based on the first consistency point set according to the first re-estimation instruction;
step S470: replacing the first parameter estimation result with the second parameter estimation result as the output information.
Specifically, the random sampling consistency (random sample consensus, RANSAC) algorithm analyzes an observed data set containing outliers in order to estimate the parameters of a model that explains the data. The intelligent registration fusion model is constructed based on the random sampling consistency algorithm principle, providing a model foundation for constructing the three-dimensional model of the preset tongue based on the data information of the plurality of key frame images.
Firstly, point cloud data of each key frame image in the plurality of key frame images are obtained through analysis, forming a point cloud data set. The point cloud data set is then taken as the input information of the intelligent registration fusion model, and the output information is obtained through the analysis of the model. Specifically, the intelligent registration fusion model first obtains a first sample set from the point cloud data set through random sampling, wherein the first sample set comprises point cloud data of a plurality of key frame images, and a first parameter estimation result of the first model is calculated based on the first sample set. Then, the first sample set is removed from the point cloud data set to obtain a first non-sample set, i.e. the remaining point cloud data in the point cloud data set serve as the first non-sample set, and the distance from each point to the first model is calculated in turn. The non-sample point cloud data satisfying a preset distance threshold are then screened to form the first consistency point set. The preset distance threshold is a distance range set after comprehensive analysis based on tongue image conditions and the like, and is stored in the tongue three-dimensional modeling system in advance. The total number of non-sample data in the first consistency point set is further counted to obtain the first data volume, and whether the first data volume meets a preset number threshold is judged. When the first data volume meets the preset number threshold, the system automatically obtains a first re-estimation instruction, which is used for re-estimating the parameters of the three-dimensional model of the preset tongue based on the first consistency point set to obtain a second parameter estimation result of the first model; the second parameter estimation result replaces the first parameter estimation result as the output information. Otherwise, when the first data volume does not meet the preset number threshold, the iterative loop continues, and the system automatically obtains a first sampling instruction for performing random sampling from the point cloud data set again.
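The sample / consistency-set / re-estimation loop described above follows the classical RANSAC pattern. The sketch below illustrates it for rigid registration of corresponding 3D point pairs; the minimal sample size, distance threshold, number threshold and iteration cap are all assumptions, and the patent's actual model and parameters may differ:

```python
import numpy as np

def fit_rigid(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def ransac_register(src, dst, dist_thresh=2.0, min_inliers=100, max_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(max_iters):
        idx = rng.choice(len(src), size=3, replace=False)   # random sample set
        r, t = fit_rigid(src[idx], dst[idx])                # first parameter estimate
        mask = np.ones(len(src), dtype=bool)
        mask[idx] = False                                   # first non-sample set
        resid = np.linalg.norm(src[mask] @ r.T + t - dst[mask], axis=1)
        inliers = np.where(mask)[0][resid < dist_thresh]    # consistency point set
        if len(inliers) >= min_inliers:                     # number threshold met
            return fit_rigid(src[inliers], dst[inliers])    # re-estimate and output
    raise RuntimeError("no consensus set reached the preset number threshold")
```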
Through random sampling consistency, the construction parameters of the three-dimensional model of the preset tongue are iteratively optimized, and the final estimation result of the model parameters is obtained, providing an accurate and effective data basis for constructing the tongue three-dimensional model and achieving the technical effect of improving the accuracy and practicability of three-dimensional model construction.
Step S500: extracting a model parameter estimation result in the output information, and obtaining a preset point cloud model based on the model parameter estimation result;
step S600: obtaining basic characteristic parameters of the video acquisition equipment, and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result;
step S700: and taking the texture mapping result as a three-dimensional model of the preset tongue.
Specifically, the model parameter estimation result in the output information of the intelligent registration fusion model is extracted, and the preset point cloud model of the preset tongue is constructed based on the model parameter estimation result. Then, the relevant shooting parameter characteristics of the equipment used to capture the preset tongue, i.e. the video acquisition equipment, are collected to obtain the basic characteristic parameters, and texture mapping is performed on the preset point cloud model based on these basic characteristic parameters to obtain the texture mapping result. That is, the correspondence between the three-dimensional object and the two-dimensional image is solved, a correspondence governed by the imaging model parameters of the acquisition equipment. Finally, the preset point cloud model after texture mapping, i.e. the texture mapping result, is taken as the three-dimensional model of the preset tongue. Exemplarily, the texture mapping of the preset tongue is realized by selecting tongue image points in the video key frame images, determining the corresponding points in the preset point cloud model, and combining the internal and external parameters of the equipment with PCL texture mapping. By determining the correspondence between the tongue image points in the key frame images and the three-dimensional preset point cloud model, the texture information in the two-dimensional key frame images is mapped onto the three-dimensional preset point cloud model, thereby completing the three-dimensional modeling.
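The 3D-to-2D correspondence underlying this texture mapping step can be sketched as a pinhole projection through the camera's internal and external parameters; the intrinsic values and pose below are illustrative assumptions, and a full pipeline would delegate the mapping itself to PCL's texture mapping utilities:

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) of the video acquisition equipment.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_to_uv(points_w: np.ndarray, r_cw: np.ndarray, t_cw: np.ndarray,
                  image_w: int, image_h: int) -> np.ndarray:
    """Map Nx3 model points to normalized UV texture coordinates in one key frame."""
    pc = points_w @ r_cw.T + t_cw              # world frame -> camera frame (extrinsics)
    pix = pc @ K.T                             # pinhole projection (intrinsics)
    pix = pix[:, :2] / pix[:, 2:3]             # perspective divide -> pixel coordinates
    return pix / np.array([image_w, image_h])  # pixels -> [0, 1] UV coordinates
```

Each point of the preset point cloud model thus receives a texture coordinate in a key frame image, which is how the two-dimensional texture information is attached to the three-dimensional model.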
In summary, the tongue three-dimensional modeling method for multi-angle image fusion provided by the invention has the following technical effects:
acquiring video information of the preset tongue through the video acquisition equipment to obtain a preset tongue video; establishing a preset key frame set of the preset tongue based on the preset tongue video, wherein the preset key frame set comprises a plurality of key frame images; constructing an intelligent registration fusion model using the random sampling consistency algorithm principle; obtaining point cloud data sets of the key frame images, using the point cloud data sets as input information of the intelligent registration fusion model, and obtaining output information through analysis by the intelligent registration fusion model; extracting a model parameter estimation result from the output information, and obtaining a preset point cloud model based on the model parameter estimation result; obtaining basic characteristic parameters of the video acquisition equipment, and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result; and taking the texture mapping result as the three-dimensional model of the preset tongue. Acquiring the video information of the preset tongue through the video acquisition equipment achieves the technical aim of providing a comprehensive and reliable information basis for the subsequent construction of the three-dimensional model of the preset tongue. By intelligently analyzing the preset tongue video collected by the video acquisition equipment and extracting the key frame images of the tongue in the video, the amount of information the system must process is reduced while the processed data remain comprehensive, accurate and effective, thereby improving the efficiency of data processing. Further, by means of the intelligent registration fusion model, after intelligent registration of the key frame images of the preset tongue, the parameter estimation of the three-dimensional model of the preset tongue is obtained, the preset point cloud model of the preset tongue is constructed, and the preliminary modeling of the preset tongue is realized. Then, combining the basic characteristic parameters of the video acquisition equipment, the textures of the key frame images of the preset tongue are mapped onto the preset point cloud model, realizing the texture mapping of the preset point cloud model and finally obtaining the three-dimensional model of the preset tongue. By processing and analyzing the image information of the preset tongue, the goal of constructing its three-dimensional model is realized, and the technical effects of objectification and digitization of tongue image information are achieved.
Example two
Based on the same inventive concept as the tongue three-dimensional modeling method for multi-angle image fusion in the foregoing embodiment, the present invention further provides a tongue three-dimensional modeling system for multi-angle image fusion, where the tongue three-dimensional modeling system is in communication connection with a video capture device, and please refer to fig. 5, where the system includes:
the acquisition module M100 is used for acquiring a preset tongue, and acquiring video information of the preset tongue through the video acquisition equipment to obtain a preset tongue video;
an extracting module M200, where the extracting module M200 is configured to establish a preset key frame set of the preset tongue based on the preset tongue video, where the preset key frame set includes multiple key frame images;
a building module M300, wherein the building module M300 is used for building an intelligent registration fusion model by using a random sampling consistency algorithm principle;
the processing module M400 is used for obtaining point cloud data sets of the plurality of key frame images, using the point cloud data sets as input information of the intelligent registration and fusion model, and obtaining output information through analysis of the intelligent registration and fusion model;
an obtaining module M500, where the obtaining module M500 is configured to extract a model parameter estimation result in the output information, and obtain a preset point cloud model based on the model parameter estimation result;
the mapping module M600 is used for obtaining basic characteristic parameters of the video acquisition equipment and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result;
an obtaining module M700, where the obtaining module M700 is configured to use the texture mapping result as a three-dimensional model of the preset tongue.
Further, the extracting module M200 in the system is further configured to:
encoding the preset tongue video by utilizing the Moving Picture Experts Group (MPEG) standard, and establishing an encoding unit set according to an encoding result, wherein the encoding unit set comprises a plurality of encoding units;
sequentially extracting a plurality of I frame images in the plurality of coding units, sequentially decoding the plurality of I frame images, and calculating and drawing a preset characteristic difference curve according to a decoding result;
and denoising to obtain a denoising processing result of the preset characteristic difference curve, analyzing the denoising processing result, and establishing the preset key frame set according to the analysis result.
Further, the extracting module M200 in the system is further configured to:
sequentially extracting a first I frame image and a second I frame image in the I frame images, wherein the first I frame image and the second I frame image are adjacent frame images;
sequentially carrying out discrete cosine transform on the first I frame image and the second I frame image to respectively obtain a first discrete cosine transform coefficient and a second discrete cosine transform coefficient;
calculating to obtain a first characteristic value of the first I frame image based on the first discrete cosine transform coefficient, and calculating to obtain a second characteristic value of the second I frame image based on the second discrete cosine transform coefficient;
subtracting the first characteristic value and the second characteristic value to obtain a preset characteristic difference value of the first I frame image and the second I frame image;
and drawing to obtain the preset characteristic difference curve according to the preset characteristic difference.
Further, the extracting module M200 in the system is further configured to:
extracting a first direct current coefficient and a first alternating current coefficient in the first discrete cosine transform coefficient;
and calculating to obtain the first characteristic value according to the first direct current coefficient and the first alternating current coefficient, wherein a calculation formula is as follows:
$$F_n = a\sum_{i=1}^{M} DC_n(i) + b\sum_{i=1}^{M} AC_n(i)$$

wherein $F_n$ refers to the first characteristic value, $DC_n(i)$ and $AC_n(i)$ refer to the first direct current coefficient and the first alternating current coefficient of the $i$-th sub-block of the first I frame image $n$, $M$ refers to the number of sub-blocks, $a$ is the influence factor of the first direct current coefficient on the first characteristic value, and $b$ is the influence factor of the first alternating current coefficient on the first characteristic value;
extracting a second direct current coefficient and a second alternating current coefficient in the second discrete cosine transform coefficient;
and calculating to obtain the second characteristic value according to the second direct current coefficient and the second alternating current coefficient, wherein a calculation formula is as follows:
$$F_{n+1} = c\sum_{i=1}^{M} DC_{n+1}(i) + d\sum_{i=1}^{M} AC_{n+1}(i)$$

wherein $F_{n+1}$ refers to the second characteristic value, $DC_{n+1}(i)$ and $AC_{n+1}(i)$ refer to the second direct current coefficient and the second alternating current coefficient of the $i$-th sub-block of the second I frame image $n+1$, $c$ is the influence factor of the second direct current coefficient on the second characteristic value, and $d$ is the influence factor of the second alternating current coefficient on the second characteristic value.
Further, the extracting module M200 in the system is further configured to:
calculating to obtain the preset feature difference value according to the first feature value and the second feature value, wherein a calculation formula is as follows:
$$D_{n,n+1} = \left| F_{n+1} - F_n \right|$$

wherein $D_{n,n+1}$ is the preset characteristic difference value.
Further, the processing module M400 in the system is further configured to:
the intelligent registration fusion model randomly samples the point cloud data set to obtain a first sample set, and obtains a first parameter estimation result of the first model based on the first sample set;
removing the first sample set from the point cloud data set to obtain a first non-sample set, wherein the first non-sample set comprises a plurality of non-sample point cloud data;
sequentially calculating the distances from the non-sample point cloud data to the first model, and screening the non-sample point cloud data by combining a preset distance threshold to obtain a first consistency point set;
calculating a first data volume in the first consistency point set, and judging whether the first data volume meets a preset number threshold;
if the first data volume meets the preset number threshold, obtaining a first re-estimation instruction;
obtaining a second parameter estimation result of the first model based on the first consistency point set according to the first re-estimation instruction;
replacing the first parameter estimation result with the second parameter estimation result as the output information.
Further, the processing module M400 in the system is further configured to:
if the first data volume does not meet the preset number threshold, obtaining a first sampling instruction;
and according to the first sampling instruction, the intelligent registration fusion model carries out random sampling from the point cloud data set.
The embodiments in this description are described in a progressive manner, each embodiment focusing on its differences from the others. The foregoing tongue three-dimensional modeling method for multi-angle image fusion in the first embodiment of fig. 1 and its specific examples also apply to the tongue three-dimensional modeling system for multi-angle image fusion of this embodiment; through the foregoing detailed description of the method, those skilled in the art can clearly understand the system, so for brevity of description it is not described in detail here. As the device disclosed in this embodiment corresponds to the method disclosed above, its description is simple, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the present invention and its equivalent technology, it is intended that the present invention also include such modifications and variations.

Claims (8)

1. A tongue three-dimensional modeling method for multi-angle image fusion is characterized in that the tongue three-dimensional modeling method is applied to a tongue three-dimensional modeling system, the tongue three-dimensional modeling system is in communication connection with video acquisition equipment, and the method comprises the following steps:
acquiring a preset tongue, and acquiring video information of the preset tongue through the video acquisition equipment to obtain a preset tongue video;
establishing a preset key frame set of the preset tongue part based on the preset tongue part video, wherein the preset key frame set comprises a plurality of key frame images;
constructing an intelligent registration fusion model by using a random sampling consistency algorithm principle;
acquiring point cloud data sets of the key frame images, using the point cloud data sets as input information of the intelligent registration fusion model, and analyzing the intelligent registration fusion model to obtain output information;
extracting a model parameter estimation result in the output information, and obtaining a preset point cloud model based on the model parameter estimation result;
obtaining basic characteristic parameters of the video acquisition equipment, and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result;
and taking the texture mapping result as a three-dimensional model of the preset tongue.
2. The tongue three-dimensional modeling method according to claim 1, wherein said building a preset keyframe set of the preset tongue based on the preset tongue video comprises:
encoding the preset tongue video by utilizing the Moving Picture Experts Group (MPEG) standard, and establishing an encoding unit set according to an encoding result, wherein the encoding unit set comprises a plurality of encoding units;
sequentially extracting a plurality of I frame images in the plurality of coding units, sequentially decoding the plurality of I frame images, and calculating and drawing a preset characteristic difference curve according to a decoding result;
and denoising to obtain a denoising processing result of the preset characteristic difference curve, analyzing the denoising processing result, and establishing the preset key frame set according to the analysis result.
3. The tongue three-dimensional modeling method according to claim 2, wherein said sequentially decoding the plurality of I-frame images and calculating and drawing a preset feature difference curve according to the decoding result comprises:
sequentially extracting a first I frame image and a second I frame image in the I frame images, wherein the first I frame image and the second I frame image are adjacent frame images;
sequentially carrying out discrete cosine transform on the first I frame image and the second I frame image to respectively obtain a first discrete cosine transform coefficient and a second discrete cosine transform coefficient;
calculating a first characteristic value of the first I frame image based on the first discrete cosine transform coefficient, and calculating a second characteristic value of the second I frame image based on the second discrete cosine transform coefficient;
subtracting the first characteristic value and the second characteristic value to obtain a preset characteristic difference value of the first I frame image and the second I frame image;
and drawing to obtain the preset characteristic difference curve according to the preset characteristic difference.
4. The three-dimensional tongue modeling method according to claim 3, wherein said calculating a first feature value of said first I-frame image based on said first DCT coefficient and a second feature value of said second I-frame image based on said second DCT coefficient comprises:
extracting a first direct current coefficient and a first alternating current coefficient in the first discrete cosine transform coefficient;
and calculating to obtain the first characteristic value according to the first direct current coefficient and the first alternating current coefficient, wherein a calculation formula is as follows:
$E_n = a \cdot DC_n + b \cdot \sum_i AC_n(i)$
wherein $E_n$ refers to the first characteristic value, $DC_n$ refers to the first direct current coefficient, $AC_n(i)$ refers to the first alternating current coefficient of the $i$-th sub-block of the first I frame image, $n$ refers to the first I frame image, $a$ is the influence factor of the first direct current coefficient on the first characteristic value, and $b$ is the influence factor of the first alternating current coefficient on the first characteristic value;
extracting a second direct current coefficient and a second alternating current coefficient in the second discrete cosine transform coefficient;
and calculating to obtain the second characteristic value according to the second direct current coefficient and the second alternating current coefficient, wherein a calculation formula is as follows:
$E_{n+1} = c \cdot DC_{n+1} + d \cdot \sum_i AC_{n+1}(i)$
wherein $E_{n+1}$ refers to the second characteristic value, $DC_{n+1}$ refers to the second direct current coefficient, $AC_{n+1}(i)$ refers to the second alternating current coefficient of the $i$-th sub-block of the second I frame image, $n+1$ refers to the second I frame image, and $c$ and $d$ are the influence factors of the second direct current coefficient and of the second alternating current coefficient on the second characteristic value, respectively.
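The claim-4 formulas aggregate a weighted DC coefficient against AC energy over sub-blocks. A minimal block-wise sketch under assumptions the text does not fix (the original formula images are not reproduced here): 8x8 sub-blocks, an orthonormal DCT, per-block DC summed alongside the AC terms, and absolute AC values.

```python
import numpy as np
from scipy.fft import dctn

def block_feature_value(gray: np.ndarray, a: float, b: float, block: int = 8) -> float:
    """E_n = a * (summed per-block DC) + b * (summed per-block |AC|), an assumed form."""
    h = (gray.shape[0] // block) * block         # crop to a whole number of blocks
    w = (gray.shape[1] // block) * block
    total_dc = total_ac = 0.0
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs = dctn(gray[y:y + block, x:x + block].astype(float), norm="ortho")
            total_dc += coeffs[0, 0]                              # DC of the i-th sub-block
            total_ac += np.abs(coeffs).sum() - abs(coeffs[0, 0])  # AC energy of the block
    return a * total_dc + b * total_ac
```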
5. The tongue three-dimensional modeling method according to claim 4, further comprising:
calculating the preset characteristic difference value according to the first characteristic value and the second characteristic value, wherein the calculation formula is as follows:
$D_n = E_{n+1} - E_n$
wherein $D_n$ is the preset characteristic difference value.
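For instance, with hypothetical values, if the first characteristic value works out to $E_n = 398.1$ and the second to $E_{n+1} = 412.6$, the preset characteristic difference is $D_n = 412.6 - 398.1 = 14.5$; plotting these values over successive I-frame pairs produces the preset characteristic difference curve of claim 2, whose pronounced peaks mark candidate key frames.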
6. The tongue three-dimensional modeling method according to claim 1, wherein the obtaining of the point cloud data sets of the plurality of key frame images, using the point cloud data sets as input information of the intelligent registration fusion model, and obtaining output information through analysis by the intelligent registration fusion model comprises:
the intelligent registration fusion model randomly samples the point cloud data set to obtain a first sample set, and obtains a first parameter estimation result of a first model based on the first sample set;
removing the first sample set from the point cloud data set to obtain a first non-sample set, wherein the first non-sample set comprises a plurality of non-sample point cloud data;
sequentially calculating the distances from the non-sample point cloud data to the first model, and screening the non-sample point cloud data by combining a preset distance threshold to obtain a first consistency point set;
calculating a first data volume in the first consistency point set, and judging whether the first data volume meets a preset number threshold value;
if the first data volume meets the preset number threshold, obtaining a first re-estimation instruction;
obtaining a second parameter estimation result of the first model based on the first consistency point set according to the first re-estimation instruction;
replacing the first parameter estimation result with the second parameter estimation result as the output information.
7. The tongue three-dimensional modeling method according to claim 6, wherein after the calculating of the first data volume in the first consistency point set and the judging of whether the first data volume meets the preset number threshold, the method further comprises:
if the first data volume does not meet the preset number threshold, obtaining a first sampling instruction;
and according to the first sampling instruction, the intelligent registration fusion model carries out random sampling from the point cloud data set.
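Claims 6 and 7 together recite a classic random sample consensus loop: estimate on a random sample, collect a consistency point set against a distance threshold, then either re-estimate on that set (claim 6) or resample (claim 7). A minimal sketch using a least-squares plane as a stand-in parametric model; the model family actually fitted to tongue point clouds is not fixed by the claims, and all threshold defaults are illustrative.

```python
import numpy as np

def fit_model(points: np.ndarray) -> np.ndarray:
    """Least-squares plane z = a*x + b*y + c, a stand-in parametric model."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    params, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return params

def distances(points: np.ndarray, params: np.ndarray) -> np.ndarray:
    """Vertical distance of each point to the fitted plane."""
    a, b, c = params
    return np.abs(points[:, 0] * a + points[:, 1] * b + c - points[:, 2])

def ransac(points: np.ndarray, sample_size: int = 3, dist_thresh: float = 0.01,
           count_thresh: int = 100, max_iters: int = 1000) -> np.ndarray:
    rng = np.random.default_rng(0)
    for _ in range(max_iters):                       # claim 7: resample on failure
        idx = rng.choice(len(points), sample_size, replace=False)
        params = fit_model(points[idx])              # first parameter estimation result
        rest = np.delete(points, idx, axis=0)        # first non-sample set
        inliers = rest[distances(rest, params) < dist_thresh]  # first consistency point set
        if len(inliers) >= count_thresh:             # first data volume meets the threshold
            return fit_model(inliers)                # claim 6: re-estimate on the set
    raise RuntimeError("no consistency point set met the preset number threshold")
```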
8. A multi-angle image-fused tongue three-dimensional modeling system, comprising:
the acquisition module is used for obtaining a preset tongue and acquiring video information of the preset tongue through video acquisition equipment to obtain a preset tongue video;
the extraction module is used for establishing a preset key frame set of the preset tongue based on the preset tongue video, wherein the preset key frame set comprises a plurality of key frame images;
the construction module is used for constructing an intelligent registration fusion model by using the principle of the random sample consensus (RANSAC) algorithm;
the processing module is used for obtaining point cloud data sets of the plurality of key frame images, using the point cloud data sets as input information of the intelligent registration fusion model, and obtaining output information through analysis by the intelligent registration fusion model;
the obtaining module is used for extracting a model parameter estimation result in the output information and obtaining a preset point cloud model based on the model parameter estimation result;
the mapping module is used for obtaining basic characteristic parameters of the video acquisition equipment and performing texture mapping on the preset point cloud model based on the basic characteristic parameters to obtain a texture mapping result;
the determining module is used for taking the texture mapping result as a three-dimensional model of the preset tongue.
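Claim 8 recasts the method as cooperating modules. A minimal sketch of one possible module wiring; all class and method names are illustrative placeholders, not the patentee's API.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TongueModelingSystem:
    """Illustrative wiring of the modules recited in claim 8 (names hypothetical)."""
    acquisition: Any   # captures the preset tongue video
    extraction: Any    # builds the preset key frame set
    construction: Any  # builds the RANSAC-based registration fusion model
    processing: Any    # feeds point cloud sets through the fusion model
    obtaining: Any     # turns the parameter estimate into the point cloud model
    mapping: Any       # texture-maps using the camera's basic parameters
    determining: Any   # adopts the texture mapping result as the 3D model

    def run(self, video_path: str) -> Any:
        video = self.acquisition.capture(video_path)
        keyframes = self.extraction.keyframes(video)
        fusion = self.construction.build()
        params = self.processing.analyze(fusion, keyframes)
        cloud_model = self.obtaining.point_cloud_model(params)
        textured = self.mapping.texture_map(cloud_model)
        return self.determining.as_model(textured)
```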
CN202211451768.1A 2022-11-21 2022-11-21 Tongue three-dimensional modeling method and system for multi-angle image fusion Active CN115578523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211451768.1A CN115578523B (en) 2022-11-21 2022-11-21 Tongue three-dimensional modeling method and system for multi-angle image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211451768.1A CN115578523B (en) 2022-11-21 2022-11-21 Tongue three-dimensional modeling method and system for multi-angle image fusion

Publications (2)

Publication Number Publication Date
CN115578523A true CN115578523A (en) 2023-01-06
CN115578523B CN115578523B (en) 2023-03-10

Family

ID=84589768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211451768.1A Active CN115578523B (en) 2022-11-21 2022-11-21 Tongue three-dimensional modeling method and system for multi-angle image fusion

Country Status (1)

Country Link
CN (1) CN115578523B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729881A (en) * 2013-12-28 2014-04-16 北京工业大学 Tongue body dynamic three-dimensional reconstruction system oriented to tongue inspection in traditional Chinese medicine
CN104992441A (en) * 2015-07-08 2015-10-21 华中科技大学 Real human body three-dimensional modeling method specific to personalized virtual fitting
CN109801291A (en) * 2019-03-12 2019-05-24 西安交通大学 A kind of acquisition methods moving abrasive grain multi-surface three-dimensional appearance
CN111476259A (en) * 2019-11-22 2020-07-31 上海大学 Tooth mark tongue recognition algorithm based on convolutional neural network
US20210360198A1 (en) * 2020-05-12 2021-11-18 True Meeting Inc. Virtual 3d communications using models and texture maps of participants
CN112085849A (en) * 2020-07-28 2020-12-15 航天图景(北京)科技有限公司 Real-time iterative three-dimensional modeling method and system based on aerial video stream and readable medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
向卓龙 (XIANG, Zhuolong) et al.: "Texture mapping of three-dimensional point clouds using freely captured two-dimensional images", Laser & Optoelectronics Progress *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830945A (en) * 2023-01-14 2023-03-21 苏州海易泰克机电设备有限公司 Intelligent monitoring management method and system for flight simulator
CN116485778A (en) * 2023-04-27 2023-07-25 鄂东医疗集团市中心医院 Imaging detection method, imaging detection system, computer equipment and storage medium
CN116449332A (en) * 2023-06-14 2023-07-18 西安晟昕科技股份有限公司 Airspace target detection method based on MIMO radar
CN116449332B (en) * 2023-06-14 2023-08-25 西安晟昕科技股份有限公司 Airspace target detection method based on MIMO radar
CN116727691A (en) * 2023-07-11 2023-09-12 浙江拓博环保科技有限公司 Metal 3D printing method and system based on digital management
CN116727691B (en) * 2023-07-11 2023-11-17 浙江拓博环保科技有限公司 Metal 3D printing method and system based on digital management
CN116952988A (en) * 2023-09-21 2023-10-27 斯德拉马机械(太仓)有限公司 2D line scanning detection method and system for ECU (electronic control Unit) product
CN116952988B (en) * 2023-09-21 2023-12-08 斯德拉马机械(太仓)有限公司 2D line scanning detection method and system for ECU (electronic control Unit) product
CN117649494A (en) * 2024-01-29 2024-03-05 南京信息工程大学 Reconstruction method and system of three-dimensional tongue body based on point cloud pixel matching
CN117649494B (en) * 2024-01-29 2024-04-19 南京信息工程大学 Reconstruction method and system of three-dimensional tongue body based on point cloud pixel matching

Also Published As

Publication number Publication date
CN115578523B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN115578523B (en) Tongue three-dimensional modeling method and system for multi-angle image fusion
CN108921130B (en) Video key frame extraction method based on saliency region
CN108932699B (en) Three-dimensional matching harmonic filtering image denoising method based on transform domain
CN108921062B (en) Gait recognition method combining multiple gait features and cooperative dictionary
CN108510496B (en) Fuzzy detection method for SVD (singular value decomposition) based on image DCT (discrete cosine transformation) domain
Omidyeganeh et al. Video keyframe analysis using a segment-based statistical metric in a visually sensitive parametric space
CN106777159B (en) Video clip retrieval and positioning method based on content
CN111008651A (en) Image reproduction detection method based on multi-feature fusion
CN110121109A (en) Towards the real-time source tracing method of monitoring system digital video, city video monitoring system
CN105678720A (en) Image matching judging method and image matching judging device for panoramic stitching
CN117274759A (en) Infrared and visible light image fusion system based on distillation-fusion-semantic joint driving
Maksimovic et al. New approach of estimating edge detection threshold and application of adaptive detector depending on image complexity
Jin et al. Perceptual Gradient Similarity Deviation for Full Reference Image Quality Assessment.
CN109523508B (en) Dense light field quality evaluation method
CN116091719B (en) River channel data management method and system based on Internet of things
Nguyen et al. A novel automatic concrete surface crack identification using isotropic undecimated wavelet transform
Wang Research on the computer vision cracked eggs detecting method
CN115909401A (en) Cattle face identification method and device integrating deep learning, electronic equipment and medium
KR101311309B1 (en) Detecting method of image operation and detecting device of image operation
CN111783580B (en) Pedestrian identification method based on human leg detection
Li Image contrast enhancement algorithm based on gm (1, 1) and power exponential dynamic decision
Wan et al. A video forensic technique for detecting frame integrity using human visual system-inspired measure
CN109035229B (en) Automatic evaluation method for cow body condition based on Fourier descriptor
Chetouani et al. A new scheme for no reference image quality assessment
Hernandez et al. Color image segmentation using multispectral random field texture model & color content features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant