CN113256488B - Generation method and device of prediction model, side appearance prediction method and electronic equipment - Google Patents

Generation method and device of prediction model, side appearance prediction method and electronic equipment

Info

Publication number
CN113256488B
Authority
CN
China
Prior art keywords
labial
curve
corrected
incisor
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110683147.5A
Other languages
Chinese (zh)
Other versions
CN113256488A (en)
Inventor
He Jianqiao (何建桥)
Song Wanzhong (宋万忠)
Li Yu (李宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202110683147.5A
Publication of CN113256488A
Application granted
Publication of CN113256488B
Legal status: Active


Classifications

    • G06T3/04
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor

Abstract

The application provides a method and device for generating a prediction model, a side appearance prediction method, and an electronic device. The method for generating the prediction model comprises the following steps: acquiring training sample data, the training sample data comprising a pre-correction feature image and a post-correction feature image, wherein the pre-correction feature image comprises the positional offset relationships among the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction, and the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve; and training an initial model on the training sample data to obtain the prediction model. Training the model on the pre-correction and post-correction feature images makes it attend to the influence of the incisor's positional movement on the lip contour, so the prediction is more accurate and reliable than prior-art prediction from average soft- and hard-tissue values.

Description

Generation method and device of prediction model, side appearance prediction method and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating a prediction model, a method for predicting a side appearance, and an electronic device.
Background
As living standards improve, the public pays ever more attention to appearance. Tooth position is closely related to facial features, and the positions of the upper and lower lips strongly influence the aesthetics of the facial profile (as shown in FIG. 1). Since improving facial aesthetics is a main appeal of many orthodontic patients, aligning the dentition through orthodontic treatment improves the patient's soft-tissue features and can markedly improve the facial profile.
In practice, a patient wants to see the expected post-correction improvement of the facial side appearance before tooth correction begins. Various face side appearance prediction methods have therefore been proposed, such as predicting the profile contour line by multivariate-regression statistics based on the ratio of soft- to hard-tissue change, predicting the side appearance from a three-dimensional digital model of the face, or employing machine learning to predict the improvement that orthodontic treatment brings to the side appearance. In all of these methods, however, the correction target is set from the average soft- and hard-tissue values of normal subjects, and the predicted improvement of the side appearance depends mainly on subjective experience and aesthetic judgment; an objective, accurate, quantitative description is lacking.
Disclosure of Invention
The embodiments of the present application aim to provide a method and device for generating a prediction model, a side appearance prediction method, and an electronic device, so as to improve the accuracy of predicting a patient's post-correction facial profile.
The invention is realized by the following steps:
In a first aspect, an embodiment of the present application provides a method for generating a prediction model, where the prediction model is used to predict a post-correction feature image, and the method includes: acquiring training sample data; the training sample data comprises a pre-correction feature image and a post-correction feature image; the pre-correction feature image comprises the positional offset relationships among the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction; the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve; and training an initial model based on the training sample data to obtain the prediction model.
The pre-correction feature image comprises the positional offset relationships among the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction, and the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve. Training the model on these two feature images therefore makes it attend to the influence of the incisor's positional movement on the lip contour, so that the trained prediction model can accurately predict the position of the corrected lip contour curve from the amount by which the incisor is moved. Compared with the prior-art approach of predicting from average soft- and hard-tissue values, the prediction is more accurate and reliable.
With reference to the technical solution provided by the first aspect, in some possible implementation manners, the acquiring training sample data includes: acquiring an image of a patient's pre-correction skull; wherein the pre-correction skull image comprises marked labial curves of the first incisor before correction, labial curves of the first incisor after adjustment, and a first labial contour curve before correction; generating the pre-correction feature image based on the pre-correction skull image; acquiring an image of the corrected skull of the patient; wherein the corrected skull image comprises a marked labial curve of the corrected first incisor and a marked corrected first lip contour curve; and generating the corrected characteristic image based on the corrected skull image.
In the embodiment of the application, a pre-correction skull image marked with the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction is acquired so that the pre-correction feature image can be generated accurately and conveniently; and a corrected skull image marked with the labial curve of the corrected first incisor and the corrected first lip contour curve is acquired so that the post-correction feature image can be generated accurately and conveniently.
With reference to the technical solution provided by the first aspect, in some possible implementations, the generating the pre-correction feature image based on the pre-correction skull image includes: generating six offset matrices based on the pre-correction skull image, the six offset matrices corresponding to the X-axis offset and the Y-axis offset between each pair of three curves, the three curves being the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction; generating six corresponding grayscale images based on the six offset matrices; and superposing the six grayscale images to obtain the pre-correction feature image. Correspondingly, the generating the post-correction feature image based on the corrected skull image includes: generating two offset matrices based on the corrected skull image, the two offset matrices corresponding to the X-axis offset and the Y-axis offset between two curves, the two curves being the labial curve of the corrected first incisor and the corrected first lip contour curve; generating two corresponding grayscale images based on the two offset matrices; and superposing the two grayscale images to obtain the post-correction feature image.
In the embodiment of the application, the X-axis and Y-axis offsets between the curves in the skull image are determined to obtain the corresponding offset matrices, the offset matrices are converted into grayscale images, and the grayscale images are superposed to obtain the feature image. With this processing, the generated feature image contains the accurate positional offset relationships among the curves, while the processing itself is simple and efficient.
With reference to the technical solution provided by the first aspect, in some possible implementations, the generating six offset matrices based on the pre-correction skull image includes: uniformly sampling three curves, namely a labial curve of the first incisor before correction, a labial curve of the first incisor after adjustment and a first labial contour curve before correction; the number of the samples is N, and N is a positive integer; acquiring coordinates of N points of the labial curve of the first incisor before correction, the coordinates of N points of the labial curve of the first incisor after adjustment and the coordinates of N points of the first labial contour curve before correction; generating six offset matrices based on the coordinates of the N points of the labial curve of the first incisor before correction, the coordinates of the N points of the labial curve of the first incisor after adjustment, and the coordinates of the N points of the first labial contour curve before correction; correspondingly, the generating two offset matrixes based on the corrected skull image comprises: uniformly sampling the labial curve of the corrected first incisor and the corrected first labial contour curve; wherein the number of samples is M, and M is a positive integer; acquiring coordinates of M points of the labial curve of the corrected first incisor and coordinates of M points of the corrected first labial contour curve; and generating the two offset matrixes based on the coordinates of the M points in the lip side curve of the first incisor after correction and the coordinates of the M points in the first lip contour curve after correction.
In the embodiment of the present application, each curve is uniformly sampled, and then the coordinates of each sampling point are used to generate the corresponding offset matrix. Because the lines of the curves are different in length and shape, the offset between the curves can be converted into the offset between coordinate points on the curves by the method, and therefore the accurate offset between the curves can be obtained conveniently.
With reference to the technical solution provided by the first aspect, in some possible implementations, the step of generating a corresponding grayscale image based on an offset matrix includes: dividing each value in the offset matrix by a preset maximum offset to obtain a conversion value, where the conversion value lies in the interval [0, 1]; multiplying each conversion value by the maximum gray value to obtain the corresponding gray value; and mapping the gray values of the offset matrix to obtain the grayscale image corresponding to the offset matrix.
In the embodiment of the application, each numerical value in the offset matrix is divided by a preset maximum offset to obtain a conversion numerical value between [0, 1], the conversion numerical value is multiplied by a maximum gray value to obtain a gray value corresponding to each conversion numerical value, and finally, each gray value is mapped to obtain a converted gray image. The gray scale image obtained in this way can have one-to-one correspondence between each pixel element in the image and each array element in the offset matrix.
With reference to the technical solution provided by the first aspect, in some possible implementations, the acquiring an image of the patient's pre-correction skull includes: acquiring an initial image of a scan of the patient's skull before correction; identifying the contour of the first incisor before correction and the first lip contour curve before correction in the pre-correction initial image; outlining the adjusted first incisor based on a marking operation of the user; and extracting the labial curve in the contour of the first incisor before correction and the labial curve in the contour of the adjusted first incisor, and generating the pre-correction skull image by combining them with the first lip contour curve before correction. Accordingly, the acquiring of the patient's corrected skull image includes: acquiring an initial image of a scan of the patient's skull after correction; identifying the contour of the corrected first incisor and the corrected first lip contour curve in the corrected initial image; and extracting the labial curve in the contour of the corrected first incisor, and generating the corrected skull image by combining it with the corrected first lip contour curve.
In the embodiment of the present application, after an initial image of a scan of the patient's skull is acquired, the contour of the first incisor and the first lip contour curve in the initial image can be identified automatically. This improves efficiency, since the user does not need to mark all the contours and curves manually.
In combination with the technical solution provided by the first aspect, in some possible implementations, the first incisor is an upper incisor or a lower incisor of the patient.
In a second aspect, an embodiment of the present application provides a side appearance prediction method, including: acquiring a pre-correction feature image of a target patient; the pre-correction feature image comprises the positional offset relationships among the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction; inputting the pre-correction feature image into a prediction model obtained by the method for generating a prediction model provided by the embodiment of the first aspect, so as to obtain a predicted post-correction feature image; the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve; and obtaining the predicted corrected skull side appearance image based on the predicted post-correction feature image.
In a third aspect, an embodiment of the present application provides an apparatus for generating a prediction model, where the prediction model is used to predict a post-correction feature image, and the apparatus includes: a first acquisition module for acquiring training sample data; the training sample data comprises a pre-correction feature image and a post-correction feature image; the pre-correction feature image comprises the positional offset relationships among the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction; the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve; and a generating module for training an initial model based on the training sample data to obtain the prediction model.
In a fourth aspect, an embodiment of the present application provides a side appearance prediction apparatus, including: a second acquisition module for acquiring the pre-correction feature image of a target patient; the pre-correction feature image comprises the positional offset relationships among the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction; a prediction module for inputting the pre-correction feature image into a prediction model obtained by the method for generating a prediction model provided by the embodiment of the first aspect, to obtain a predicted post-correction feature image; the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve; and a conversion module for obtaining the predicted corrected skull side appearance image based on the predicted post-correction feature image.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, the processor and the memory connected; the memory is used for storing programs; the processor is configured to call a program stored in the memory to perform the method as provided in the first aspect embodiment and/or the second aspect embodiment.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the method as provided in the foregoing first aspect embodiment and/or second aspect embodiment.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and should therefore not be considered limiting of the scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a schematic view of different human face profiles, where (a), (b) and (c) show a first, a second and a third human face profile respectively.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating steps of a method for predicting a profile according to an embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating steps of a method for generating a prediction model according to an embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating steps of a method for generating a feature image according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a measurement apparatus according to an embodiment of the present application.
Fig. 7 is a schematic view of a measuring apparatus after an initial image before correction is introduced according to an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of an image of a skull before correction according to an embodiment of the present application.
Fig. 9 shows plots of one row of data from the different offset matrices corresponding to the upper incisor, according to an embodiment of the present application; panels (a), (b) and (c) each plot row 128 of one of the offset matrices.
Fig. 10 shows plots of one row of data from the different offset matrices corresponding to the lower incisor, according to an embodiment of the present application; panels (a), (b) and (c) each plot row 128 of one of the offset matrices.
Fig. 11 is a schematic diagram of the grayscale images corresponding to three different offset matrices provided in an embodiment of the present application; (a), (b) and (c) show the grayscale images corresponding to the first, second and third offset matrices respectively.
Fig. 12 is a schematic view of a measuring apparatus after introducing the corrected initial image according to an embodiment of the present disclosure.
Fig. 13 is a schematic diagram of a post-correction skull image according to an embodiment of the present disclosure.
Fig. 14 is a schematic diagram of a grayscale image corresponding to two different shift matrices provided in this embodiment. (a) A schematic diagram of a gray scale image corresponding to the fourth offset matrix; (b) a schematic diagram of a grayscale image corresponding to the fifth offset matrix.
Fig. 15 is a block diagram of a prediction model generation apparatus according to an embodiment of the present application.
Fig. 16 is a block diagram of a profile prediction apparatus according to an embodiment of the present application.
Reference numerals: 100-an electronic device; 110-a processor; 120-a memory; 400-apparatus for generating a prediction model; 401-a first acquisition module; 402-a generation module; 500-a side appearance prediction apparatus; 501-a second acquisition module; 502-a prediction module; 503-a conversion module.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
At present, the correction target in side appearance prediction is set from the average soft- and hard-tissue values of normal subjects, so the predicted improvement of the side appearance depends mainly on subjective experience and aesthetic judgment, and an objective, accurate, quantitative description is lacking. Having studied this problem, the inventors propose the following embodiments to solve it.
Referring to fig. 2, a schematic block diagram of an electronic device 100 applying a method for predicting a profile and/or generating a prediction model according to an embodiment of the present disclosure is provided. In the embodiment of the present application, the electronic Device 100 may be, but is not limited to, a Personal Computer (PC), a server, a tablet Computer, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), and the like. Structurally, electronic device 100 may include a processor 110 and a memory 120.
The processor 110 and the memory 120 are electrically connected directly or indirectly to enable data transmission or interaction, for example, the components may be electrically connected to each other via one or more communication buses or signal lines. The side-appearance prediction means and/or the generation means of the prediction model comprise at least one software module that can be stored in the memory 120 in the form of software or Firmware (Firmware) or solidified in an Operating System (OS) of the electronic device 100. The processor 110 is configured to execute executable modules stored in the memory 120, such as software functional modules and computer programs included in the side appearance prediction apparatus, so as to implement the side appearance prediction method. For example, the prediction model generation device includes a software function module, a computer program, and the like, to realize the prediction model generation method. The processor 110 may execute the computer program upon receiving the execution instruction.
The processor 110 may be an integrated circuit chip having signal processing capabilities. The Processor 110 may also be a general-purpose Processor, for example, a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a discrete gate or transistor logic device, or a discrete hardware component, which may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. Further, a general purpose processor may be a microprocessor or any conventional processor or the like.
The Memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), and an electrically Erasable Programmable Read-Only Memory (EEPROM). The memory 120 is used for storing a program, and the processor 110 executes the program after receiving the execution instruction.
It should be noted that the structure shown in fig. 2 is only an illustration, and the electronic device 100 provided in the embodiment of the present application may also have fewer or more components than those shown in fig. 2, or have a different configuration than that shown in fig. 2. Further, the components shown in fig. 2 may be implemented by software, hardware, or a combination thereof.
Referring to fig. 3, fig. 3 is a flowchart illustrating steps of a method for predicting a profile according to an embodiment of the present disclosure, where the method is applied to the electronic device 100 shown in fig. 2. It should be noted that, the method for predicting a side appearance provided in the embodiment of the present application is not limited by the sequence shown in fig. 3 and the following, and the method includes: step S101-step S103.
Step S101: acquiring a pre-correction feature image of a target patient; the pre-correction feature image comprises a position offset relation among a labial curve of the first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction.
Step S102: inputting the pre-correction feature image into a prediction model to obtain a predicted post-correction feature image; the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve.
Step S103: and obtaining a predicted corrected skull side appearance image based on the predicted corrected characteristic image.
The prediction model is generated in advance by training on training sample data comprising pre-correction feature images and post-correction feature images. The pre-correction feature image comprises the positional offset relationships among the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction, and the post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve. Training the model on these two feature images therefore makes it attend to the influence of the incisor's positional movement on the lip contour, so that the prediction model can accurately predict the position of the corrected lip contour curve from the amount by which the incisor is moved. Compared with the prior-art approach of predicting from average soft- and hard-tissue values, the prediction is more accurate and reliable.
To ease understanding of the present solution, the method for generating a prediction model provided in the embodiments of the present application is described first. Referring to fig. 4, fig. 4 is a flowchart illustrating the steps of a method for generating a prediction model according to an embodiment of the present disclosure; the method can also be applied to the electronic device 100 shown in fig. 2. It should be noted that the method for generating a prediction model provided in the embodiment of the present application is not limited to the order shown in fig. 4 and described below. The method includes: step S201-step S202.
Step S201: acquiring training sample data; the training sample data comprises a characteristic image before correction and a characteristic image after correction; the pre-correction feature image comprises a position offset relation among a labial curve of the first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction; the corrected characteristic image comprises a position offset relation between a labial curve of the corrected first incisor and a corrected first lip contour curve.
Step S202: and training the initial model based on the training sample data to obtain a prediction model.
The following is a description of a specific process for acquiring the training sample data.
Referring to fig. 5, in the embodiment of the present application, a specific process of acquiring training sample data includes: step S301 to step S304.
Step S301: acquiring an image of a patient's pre-correction skull; the image of the skull before correction comprises a marked labial curve of the first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction.
The first incisor may be an upper incisor or a lower incisor of the patient.
In the embodiment of the application, the patient's pre-correction skull image is obtained by a measuring device. Referring to fig. 6, the measuring apparatus includes: a toolbar and an operation interface.
Tool bars are arranged above and below the operation interface, wherein marking tools (such as arrow symbols in the tool bars) are arranged on the tool bars above. The user can mark on the operation interface through the marking tool. Optionally, a save key, a lock key, a restore key, and the like may be provided on the upper toolbar. The lower toolbar can be used for adjusting the scale of the view, such as zooming the view of the operation interface, and is also used for adjusting the transparency of the operation interface.
The operation interface is mainly used for displaying the imported image, and the measuring device can process the image imported in the operation interface according to a pre-formulated algorithm. Such as image recognition algorithms, size detection algorithms, etc., and the present application is not limited thereto.
The specific implementation form of the measuring device can be application software, an applet, a webpage and the like.
After an initial image of the skull scan of the patient before correction is acquired, the initial image may be imported into the measurement device described above. The measuring device can identify the contour of a first incisor before correction and a first lip contour curve before correction in an initial image before correction through an image identification algorithm; then, the electronic device on which the measuring apparatus is mounted outlines the adjusted first incisor based on a marking operation by the user. After the measurement device identifies and outlines, an initial image displayed on an operation interface in the measurement device is shown in fig. 7.
The initial image shown in fig. 7 contains the pre-corrective and post-adjustment contours of the upper and lower incisors. Wherein the thicker profile is the profile before incisor correction and the thinner profile is the adjusted profile. It should be noted that the user is usually a doctor or a medical staff. The doctor determines the adjusted incisor positions based on experience and the patient's corrective program and then marks them in the initial image. The rightmost curve of the indicia of the initial image shown in fig. 7 is the lip contour curve, including the upper lip contour curve and the lower lip contour curve. When the first incisor is an upper incisor, the first lip contour curve is an upper lip contour curve; when the first incisor is a lower incisor, the first lip contour curve is a lower lip contour curve.
The measuring device thus improves efficiency, since the user does not have to mark all the contours and curves manually.
It will be appreciated that the measuring device may also omit automatic identification, in which case the contour of the first incisor before correction, the contour of the adjusted first incisor, and the first lip contour curve before correction are all marked by the user. That is, the electronic device on which the measuring apparatus is installed delineates the contour of the first incisor before correction, the contour of the adjusted first incisor, and the first lip contour curve before correction based on the user's marking operations.
After the measurement device has identified and delineated the contours, the labial curve in the contour of the first incisor before correction and the labial curve in the contour of the adjusted first incisor in fig. 7 are extracted and combined with the first lip contour curve before correction to generate the pre-correction skull image, which is shown in fig. 8. Fig. 8 shows the extracted curves of the upper and lower incisors.
It should be noted that the labial curve is a curve on the side close to the lips in the contour.
Here A1 represents the labial curve of the upper incisor before correction; A2 represents the adjusted labial curve of the upper incisor; and A3 represents the upper lip contour curve before correction. B1 represents the labial curve of the lower incisor before correction; B2 represents the adjusted labial curve of the lower incisor; and B3 represents the lower lip contour curve before correction.
When the first incisor is the upper incisor, the three curves A1, A2 and A3 are extracted to generate the pre-correction skull image; when the first incisor is the lower incisor, the three curves B1, B2 and B3 are extracted to generate the pre-correction skull image.
In other embodiments, the three extracted curves may also be combined directly with the initial image to generate the pre-correction skull image. Likewise, the initial image may be marked in any other drawing software to generate the pre-correction skull image. The present application is not limited in this respect.
Step S302: and generating a pre-correction feature image based on the pre-correction skull image.
After the pre-correction skull image is acquired, feature processing is performed on it to generate the pre-correction feature image. In the embodiment of the present application, step S302 specifically includes: generating six offset matrices based on the pre-correction skull image; generating six corresponding grayscale images based on the six offset matrices; and superposing the six grayscale images to obtain the pre-correction feature image.
The six offset matrices correspond to the X-axis offset and the Y-axis offset between each pair of three curves; the three curves are the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction.
That is, in the embodiment of the present application, an X-axis offset and a Y-axis offset between curves are determined based on curves in a skull image, so as to obtain an offset matrix corresponding to the offset, then the offset matrix is converted into a grayscale image, and finally the grayscale images are superimposed, so that a feature image can be obtained. Through the processing mode, the generated characteristic image contains accurate position offset relation among all curves, and the mode has simple processing process and higher processing efficiency.
Because the curves differ in length and shape, it is difficult to compute an offset directly from the curve positions; to obtain an accurate offset between curves, the offset between curves is therefore converted into offsets between coordinate points on the curves. Specifically, the generating six offset matrices based on the pre-correction skull image includes: uniformly sampling the three curves, namely the labial curve of the first incisor before correction, the adjusted labial curve of the first incisor, and the first lip contour curve before correction; acquiring the coordinates of the N points of the labial curve of the first incisor before correction, the coordinates of the N points of the adjusted labial curve of the first incisor, and the coordinates of the N points of the first lip contour curve before correction; and generating the six offset matrices based on these coordinates.
The number of samples is N, and N is a positive integer. The number of N may be 128, 256, etc., and the present application does not limit the number.
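As an illustration of this uniform-sampling step, the sketch below resamples a marked curve, given as an ordered polyline of (x, y) points, into N points spaced evenly by arc length. This is a minimal Python/NumPy sketch under that interpretation of uniform sampling, not the patent's implementation; the helper name `resample_curve` is hypothetical.

```python
import numpy as np

def resample_curve(points: np.ndarray, n: int) -> np.ndarray:
    """Resample an ordered polyline of shape (K, 2) into n points
    spaced uniformly by arc length along the curve."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
    t = np.linspace(0.0, s[-1], n)                         # n evenly spaced targets
    x = np.interp(t, s, points[:, 0])                      # interpolate x vs. arc length
    y = np.interp(t, s, points[:, 1])                      # interpolate y vs. arc length
    return np.stack([x, y], axis=1)                        # shape (n, 2)

# e.g. a1 = resample_curve(marked_a1_polyline, 256) for curve A1 with N = 256.
```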
With continued reference to FIG. 8, take the upper incisor as the first incisor, and let $P^{A1}_k$, $P^{A2}_k$ and $P^{A3}_k$ denote the $k$-th sampled point on curves A1, A2 and A3 respectively, with $0 < k < N+1$. For any point on curve A1, compute its offset on the X axis and its offset on the Y axis relative to each point on curve A2. The X-axis offset is the difference of the X coordinates of the two points, and the Y-axis offset is the difference of their Y coordinates.

An $N \times N$ two-dimensional matrix $D^{x}_{A1A2}$ is used to store the X-axis offset of each point on curve A1 relative to each point on curve A2; each cell of $D^{x}_{A1A2}$ is computed as

$$D^{x}_{A1A2}(i,j) = x^{A1}_{i} - x^{A2}_{j} \tag{1}$$

where $D^{x}_{A1A2}(i,j)$ is the offset in row $i$, column $j$ of the matrix, $x^{A1}_{i}$ is the X-axis coordinate of the $i$-th point on curve A1, and $x^{A2}_{j}$ is the X-axis coordinate of the $j$-th point on curve A2.

Accordingly, an $N \times N$ two-dimensional matrix $D^{y}_{A1A2}$ is used to store the Y-axis offset of each point on curve A1 relative to each point on curve A2; each cell is computed as

$$D^{y}_{A1A2}(i,j) = y^{A1}_{i} - y^{A2}_{j} \tag{2}$$

where $y^{A1}_{i}$ is the Y-axis coordinate of the $i$-th point on curve A1 and $y^{A2}_{j}$ is the Y-axis coordinate of the $j$-th point on curve A2.

By analogy, $N \times N$ two-dimensional matrices $D^{x}_{A1A3}$ and $D^{y}_{A1A3}$ store the X-axis and Y-axis offsets of each point on curve A1 relative to each point on curve A3, and $N \times N$ matrices $D^{x}_{A2A3}$ and $D^{y}_{A2A3}$ store the X-axis and Y-axis offsets of each point on curve A2 relative to each point on curve A3.

In this way, the six offset matrices corresponding to the three upper-incisor curves are obtained. Their data can be visualized as in FIG. 9, whose panels (a), (b) and (c) each plot row 128 of one of the offset matrices; in the three plots, the abscissa indicates the index of the data point and the ordinate indicates the offset.
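Under the notation above, each offset matrix is a matrix of pairwise coordinate differences, so it can be formed in a single broadcast operation. The following non-authoritative sketch, assuming NumPy arrays of sampled points such as those produced by the hypothetical `resample_curve` helper above, builds the six upper-incisor matrices of equations (1) and (2):

```python
import numpy as np

def offset_matrices(p: np.ndarray, q: np.ndarray):
    """For sampled curves p and q of shape (N, 2), return the pair
    (D_x, D_y) with D_x[i, j] = x_p[i] - x_q[j]   (cf. equation (1))
    and  D_y[i, j] = y_p[i] - y_q[j]              (cf. equation (2))."""
    d_x = p[:, 0][:, None] - q[:, 0][None, :]
    d_y = p[:, 1][:, None] - q[:, 1][None, :]
    return d_x, d_y

# a1, a2, a3: curves A1, A2, A3, each sampled to N points.
# dx_a1a2, dy_a1a2 = offset_matrices(a1, a2)
# dx_a1a3, dy_a1a3 = offset_matrices(a1, a3)
# dx_a2a3, dy_a2a3 = offset_matrices(a2, a3)
```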
The six offset matrices corresponding to the three lower-incisor curves are obtained in the same manner. Let $P^{B1}_k$, $P^{B2}_k$ and $P^{B3}_k$ denote the $k$-th sampled point on curves B1, B2 and B3 respectively, with $0 < k < N+1$. For any point on curve B1, compute its X-axis offset and Y-axis offset relative to each point on curve B2; the X-axis offset is the difference of the X coordinates of the two points, and the Y-axis offset is the difference of their Y coordinates.

An $N \times N$ two-dimensional matrix $D^{x}_{B1B2}$ is used to store the X-axis offset of each point on curve B1 relative to each point on curve B2; each cell is computed as

$$D^{x}_{B1B2}(i,j) = x^{B1}_{i} - x^{B2}_{j} \tag{3}$$

where $D^{x}_{B1B2}(i,j)$ is the offset in row $i$, column $j$ of the matrix, $x^{B1}_{i}$ is the X-axis coordinate of the $i$-th point on curve B1, and $x^{B2}_{j}$ is the X-axis coordinate of the $j$-th point on curve B2.

By analogy, an $N \times N$ matrix $D^{y}_{B1B2}$ stores the Y-axis offset of each point on curve B1 relative to each point on curve B2; matrices $D^{x}_{B1B3}$ and $D^{y}_{B1B3}$ store the X-axis and Y-axis offsets of each point on curve B1 relative to each point on curve B3; and matrices $D^{x}_{B2B3}$ and $D^{y}_{B2B3}$ store the X-axis and Y-axis offsets of each point on curve B2 relative to each point on curve B3.

In this way, the six offset matrices corresponding to the three lower-incisor curves are obtained. Their data can be visualized as in FIG. 10, whose panels (a), (b) and (c) each plot row 128 of one of the offset matrices; in the three plots, the abscissa indicates the index of the data point and the ordinate indicates the offset.
In other embodiments, the offset matrix may also be generated with only the coordinates of the two end points of the curve, or with only the coordinates of the two end points and the midpoint of the curve. The present application is not limited thereto.
In the embodiment of the present application, the step of generating a corresponding grayscale image based on an offset matrix includes: dividing each value in the offset matrix by a preset maximum offset to obtain a conversion value, where the conversion value lies in the interval [0, 1]; multiplying each conversion value by the maximum gray value to obtain the corresponding gray value; and mapping the gray values of the offset matrix to obtain the grayscale image corresponding to the offset matrix.
The preset maximum offset may be, for example, 1 or 2 in coordinate units, and the maximum gray value is 255.
That is, after each offset in the offset matrix is converted into a gray value, each gray value in the offset matrix is mapped, and each pixel unit in the image and each array unit in the offset matrix can be in one-to-one correspondence in the obtained gray image. The generated grayscale image can be referred to fig. 11. Fig. 11 shows a grayscale image generated by converting three different offset matrices.
For example, assuming that the value of one of the matrix cells is 0.4, when the preset maximum offset is 1, the conversion value =0.4/1= 0.4. Then, the conversion value 0.4 is multiplied by the maximum grayscale value 255 to obtain a grayscale value of 0.4 × 255=102 corresponding to the conversion value.
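A minimal sketch of this conversion, assuming the offset matrices from the sketch above; how out-of-range and negative offsets are handled is not specified in the text, so the clamping here is an assumption:

```python
import numpy as np

def offsets_to_gray(d: np.ndarray, max_offset: float = 1.0) -> np.ndarray:
    """Divide each offset by the preset maximum offset to obtain a
    conversion value in [0, 1], then multiply by the maximum gray
    value 255 to obtain an 8-bit grayscale image."""
    t = np.clip(d / max_offset, 0.0, 1.0)   # assumption: clamp to [0, 1]
    return np.round(t * 255).astype(np.uint8)

# Worked example from the text: 0.4 / 1 = 0.4 and 0.4 * 255 = 102.
```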
When the first incisor is the upper incisor, the six offset matrices $D^{x}_{A1A2}$, $D^{y}_{A1A2}$, $D^{x}_{A1A3}$, $D^{y}_{A1A3}$, $D^{x}_{A2A3}$ and $D^{y}_{A2A3}$ yield six corresponding grayscale images; when the first incisor is the lower incisor, the six offset matrices $D^{x}_{B1B2}$, $D^{y}_{B1B2}$, $D^{x}_{B1B3}$, $D^{y}_{B1B3}$, $D^{x}_{B2B3}$ and $D^{y}_{B2B3}$ likewise yield six corresponding grayscale images.

Finally, the six grayscale images corresponding to the first incisor are superposed to obtain the pre-correction feature image corresponding to that incisor, whose size is N×N×6. When the first incisor is the upper incisor, superposing its six grayscale images gives the pre-correction feature image IA, of size N×N×6; when the first incisor is the lower incisor, superposing its six grayscale images gives the pre-correction feature image IB, of size N×N×6.
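The superposition itself amounts to stacking the grayscale images along a channel axis; a brief sketch under the same NumPy assumptions as above:

```python
import numpy as np

def stack_feature_image(gray_images: list) -> np.ndarray:
    """Superpose grayscale images as channels: six N x N images yield
    the N x N x 6 pre-correction feature image (IA or IB); two M x M
    images yield the M x M x 2 post-correction feature image."""
    return np.stack(gray_images, axis=-1)
```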
Step S303: acquiring a corrected skull image of a patient; the corrected skull image comprises a marked lip side curve of the corrected first incisor and a corrected first lip contour curve.
It should be noted that the principle of obtaining the patient's corrected skull image is the same as that of obtaining the pre-correction skull image; only the curves marked in the image differ.
For example, images of the cranium after correction can still be obtained by the measuring device shown in fig. 6. That is, after an initial image of the skull scanned after the correction of the patient is acquired, the initial image may be imported into the measurement device described above. Identifying the contour of the corrected first incisor and the corrected first lip contour curve in the corrected initial image through electronic equipment loaded with the measuring device; after the recognition is completed, the initial image displayed by the measuring device is as shown in fig. 12. The contour of the upper and lower incisors and the upper and lower lip contour curves are shown in fig. 12. When the first incisor is an upper incisor, the first lip contour curve is an upper lip contour curve; when the first incisor is a lower incisor, the first lip contour curve is a lower lip contour curve.
Then, after the measurement device completes the recognition, the electronic device extracts the labial curve in the contour of the corrected first incisor in fig. 12 and combines it with the corrected first lip contour curve to generate the corrected skull image, which is shown in fig. 13.
Fig. 13 shows the extracted curves of the upper and lower incisors, where A4 represents the labial curve of the corrected upper incisor; A5 represents the corrected upper lip contour curve; B4 represents the labial curve of the corrected lower incisor; and B5 represents the corrected lower lip contour curve.
When the first incisor is the upper incisor, the two curves A4 and A5 are extracted to generate the corrected skull image; when the first incisor is the lower incisor, the two curves B4 and B5 are extracted to generate the corrected skull image.
Step S304: and generating a corrected characteristic image based on the corrected skull image.
Correspondingly, the principle of generating the corrected characteristic image based on the corrected skull image is the same as the principle of generating the pre-corrected characteristic image based on the pre-corrected skull image.
That is, generating the post-correction feature image based on the post-correction skull image may include: generating two offset matrixes based on the corrected skull image; the two offset matrixes respectively correspond to the X-axis offset and the Y-axis offset between the two curves; the two curves are a labial curve of the first incisor after correction and a first labial contour curve after correction; generating two corresponding gray level images based on the two offset matrixes; and superposing the two gray level images to obtain the corrected characteristic image.
Accordingly, generating two offset matrices based on the corrected cranium image may include: uniformly sampling the labial curve of the corrected first incisor and the corrected first labial contour curve; wherein the number of samples is M, and M is a positive integer; acquiring coordinates of M points of a labial curve of the corrected first incisor and coordinates of M points of a contour curve of the corrected first labial; and generating two offset matrixes based on the coordinates of the M points of the labial curve of the first incisor after correction and the coordinates of the M points of the first labial contour curve after correction.
In the present embodiment, the value of M is the same as that of N in the previous embodiments. Of course, the value of M may be a value smaller than N or a value larger than N, and the present application is not limited thereto.
Accordingly, the step of generating a corresponding grayscale image based on an offset matrix includes: dividing each value in the offset matrix by a preset maximum offset to obtain a conversion value, where the conversion value lies in the interval [0, 1]; multiplying each conversion value by the maximum gray value to obtain the corresponding gray value; and mapping the gray values of the offset matrix to obtain the grayscale image corresponding to the offset matrix.
The generated grayscale images can be seen in fig. 14, which shows the grayscale images generated by converting two different offset matrices.
Finally, the two grayscale images corresponding to the first incisor are superposed to obtain the post-correction feature image corresponding to that incisor, whose size is M×M×2. When the first incisor is the upper incisor, superposing the two grayscale images gives the post-correction feature image OA, of size M×M×2; when the first incisor is the lower incisor, superposing the two grayscale images gives the post-correction feature image OB, of size M×M×2.
It should be noted that the step implementation principle of step S303 is the same as that of step S301, and the same parts may be referred to each other. The step implementation principle of step S304 is the same as that of step S302, and the same parts may be referred to each other, and the present application is not limited thereto.
After the pre-correction feature image and the post-correction feature image are obtained, the pre-correction feature image and the post-correction feature image are used as a group of training sample data to train the initial model.
It should be noted that, in the embodiment of the present application, two prediction models are constructed: one for the upper incisor and one for the lower incisor. For example, when training the prediction model corresponding to the upper incisor, IA and OA are used as a set of training data; when training the prediction model corresponding to the lower incisor, IB and OB are used as a set of training data.
The structure of the prediction model is an encoder-decoder structure, which comprises an encoder and two decoders. The prediction of the upper and lower lip contours may share one encoder, while separate decoders are used. In training, the encoder is trained first, and then the decoder is trained. The loss function of the prediction model adopts an L2 loss function.
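As a rough illustration of that arrangement, the PyTorch sketch below shares one convolutional encoder between two decoder heads and trains with the L2 (mean-squared-error) loss. Every layer size and the overall convolutional design are assumptions; the text fixes only the topology (one encoder, two decoders) and the loss, and the sketch further assumes M = N so that the input N×N×6 and output M×M×2 feature images have matching spatial sizes.

```python
import torch
import torch.nn as nn

class ProfileNet(nn.Module):
    """Shared encoder with two decoders: one head for the upper-lip
    feature image, one for the lower-lip feature image."""
    def __init__(self):
        super().__init__()
        # Encoder: 6-channel feature image -> latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        def decoder():
            # Decoder: latent feature map -> 2-channel feature image.
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
            )
        self.upper_head = decoder()
        self.lower_head = decoder()

    def forward(self, x: torch.Tensor, upper: bool) -> torch.Tensor:
        z = self.encoder(x)
        return self.upper_head(z) if upper else self.lower_head(z)

# Training on (IA, OA) pairs for the upper incisor (channels-first tensors):
# model, l2 = ProfileNet(), nn.MSELoss()
# loss = l2(model(ia_batch, upper=True), oa_batch)
```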
The above model structure is only an example, and is not a limitation on the structure of the prediction model of the present application.
After training is completed and the prediction model has been generated, the prediction model can be used in actual side appearance prediction to perform steps S101 to S103 described above.
A pre-correction feature image of a target patient is first acquired. The manner of obtaining may also refer to the specific description in step S301.
The acquired pre-correction feature image of the target patient is input into the prediction model to obtain the predicted post-correction feature image. The post-correction feature image comprises the positional offset relationship between the labial curve of the corrected first incisor and the corrected first lip contour curve, so the coordinates of the corrected first lip contour curve can be obtained from this positional offset relationship.
As one embodiment, the gray value of each pixel unit in the predicted post-correction feature image may be divided by the maximum gray value and then multiplied by the preset maximum offset, converting the image back into an offset matrix. The coordinates of each point of the first lip contour curve are then obtained from the point-to-point offsets in the matrix, which gives the position of the first lip contour curve.
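A minimal sketch of that inverse conversion, under the same assumptions as the `offsets_to_gray` sketch above. The predicted feature image gives, for every sampled incisor point i and lip point j, one estimate of lip point j; the text does not say how these redundant estimates are combined, so averaging them is an assumption, as is using the adjusted labial curve from the treatment plan as the corrected incisor curve:

```python
import numpy as np

def gray_to_offsets(g: np.ndarray, max_offset: float = 1.0) -> np.ndarray:
    """Invert the gray mapping: gray / 255 * preset maximum offset."""
    return g.astype(np.float64) / 255.0 * max_offset

def recover_lip_curve(feature_post: np.ndarray,
                      incisor_pts: np.ndarray,
                      max_offset: float = 1.0) -> np.ndarray:
    """From an M x M x 2 post-correction feature image, recover the
    predicted lip contour.  Channel 0 holds D_x[i, j] = x_inc[i] - x_lip[j],
    channel 1 the analogous Y offsets, so each column j yields M
    estimates of lip point j, averaged here over i (an assumption)."""
    d_x = gray_to_offsets(feature_post[..., 0], max_offset)
    d_y = gray_to_offsets(feature_post[..., 1], max_offset)
    x_lip = (incisor_pts[:, 0][:, None] - d_x).mean(axis=0)
    y_lip = (incisor_pts[:, 1][:, None] - d_y).mean(axis=0)
    return np.stack([x_lip, y_lip], axis=1)   # shape (M, 2)
```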
In actual application, the prediction models corresponding to the upper incisors and the lower incisors may be applied simultaneously. It is also possible to predict only the upper lip contour corresponding to the upper incisors or only the lower lip contour corresponding to the lower incisors. The present application is not limited thereto.
Referring to fig. 15, based on the same inventive concept, an apparatus 400 for generating a prediction model is further provided in the embodiment of the present application.
The device includes: a first obtaining module 401, configured to obtain training sample data; the training sample data comprises a characteristic image before correction and a characteristic image after correction; the pre-correction feature image comprises a position offset relation among a labial curve of the first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction; the corrected characteristic image comprises a position offset relation between a labial curve of the corrected first incisor and a corrected first labial contour curve.
A generating module 402, configured to train an initial model based on the training sample data to obtain the prediction model.
Referring to fig. 16, based on the same inventive concept, an embodiment of the present application further provides a side appearance prediction apparatus 500.
The device includes: a second obtaining module 501, configured to obtain a pre-correction feature image of a target patient; the pre-correction feature image comprises a position offset relation among a labial curve of the first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction.
A prediction module 502, configured to input the pre-correction feature image into a prediction model obtained by the generation method of the prediction model according to the first embodiment, so as to obtain a predicted corrected feature image; the corrected feature image comprises a position offset relation between a labial curve of the corrected first incisor and a corrected first lip contour curve.
A conversion module 503, configured to obtain the predicted corrected skull side appearance image based on the predicted corrected feature image.
It should be noted that, as those skilled in the art can clearly understand, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Based on the same inventive concept, embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed, the computer program performs the methods provided in the above embodiments.
The storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one logical division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices or units, and may be electrical, mechanical or in other forms.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A method for generating a prediction model, wherein the prediction model is used for performing prediction based on a pre-correction feature image, and the method comprises the following steps:
acquiring training sample data; the training sample data comprises a characteristic image before correction and a characteristic image after correction; the pre-correction feature image comprises a position offset relation among a labial curve of the first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction; the corrected characteristic image comprises a position offset relation between a labial curve of the corrected first incisor and a corrected first labial contour curve;
training an initial model based on the training sample data to obtain the prediction model;
wherein, the acquiring training sample data comprises: acquiring an image of a patient's pre-correction skull; wherein the pre-correction skull image comprises marked labial curves of the first incisor before correction, labial curves of the first incisor after adjustment, and a first labial contour curve before correction; generating the pre-correction feature image based on the pre-correction skull image; acquiring an image of the corrected skull of the patient; wherein the corrected skull image comprises a marked labial curve of the corrected first incisor and a marked corrected first lip contour curve; generating the corrected characteristic image based on the corrected skull image;
generating the pre-correction feature image based on the pre-correction skull image comprises: generating six offset matrices based on the pre-correction skull image; wherein the six offset matrices respectively correspond to the X-axis offsets and the Y-axis offsets between each pair of three curves; the three curves are a labial curve of the first incisor before correction, a labial curve of the first incisor after adjustment and a first labial contour curve before correction; generating six corresponding grayscale images based on the six offset matrices; and superposing the six grayscale images to obtain the pre-correction feature image;
generating the corrected characteristic image based on the corrected skull image comprises: generating two offset matrices based on the corrected skull image; wherein the two offset matrices respectively correspond to the X-axis offsets and the Y-axis offsets between two curves; the two curves are a labial curve of the corrected first incisor and a corrected first lip contour curve; generating two corresponding grayscale images based on the two offset matrices; and superposing the two grayscale images to obtain the corrected characteristic image.
2. The method of claim 1, wherein generating six offset matrices based on the pre-correction cranial image comprises:
uniformly sampling three curves, namely the labial curve of the first incisor before correction, the labial curve of the first incisor after adjustment and the first labial contour curve before correction; wherein the number of samples is N, and N is a positive integer;
acquiring coordinates of N points of the labial curve of the first incisor before correction, the coordinates of N points of the labial curve of the first incisor after adjustment and the coordinates of N points of the first labial contour curve before correction;
generating six offset matrices based on the coordinates of the N points of the labial curve of the first incisor before correction, the coordinates of the N points of the labial curve of the first incisor after adjustment, and the coordinates of the N points of the first labial contour curve before correction;
correspondingly, the generating two offset matrixes based on the corrected skull image comprises:
uniformly sampling the labial curve of the corrected first incisor and the corrected first labial contour curve; wherein the number of samples is M, and M is a positive integer;
acquiring coordinates of M points of the labial curve of the corrected first incisor and coordinates of M points of the corrected first labial contour curve;
and generating the two offset matrixes based on the coordinates of the M points of the labial curve of the first incisor after correction and the coordinates of the M points of the first labial contour curve after correction.
3. The method of claim 1, wherein generating the corresponding grayscale image based on the shift matrix comprises:
dividing each value in the offset matrix by a preset maximum offset to obtain a conversion value; wherein the conversion value lies in the interval [0, 1];
multiplying each conversion value by the maximum gray value to obtain the gray value corresponding to each conversion value;
and mapping each value of the offset matrix to its gray value to obtain the grayscale image corresponding to the offset matrix.
4. The method of claim 1, wherein the obtaining of the pre-correction cranial image of the patient comprises:
acquiring an initial image of the skull scan of the patient before correction;
identifying a contour of a first incisor before correction and a first lip contour curve before correction in the initial image before correction;
marking out the contour of the adjusted first incisor based on a marking operation of the user;
extracting a labial curve in the contour of the first incisor before correction and a labial curve in the contour of the first incisor after adjustment, and generating the image of the skull before correction by combining the first labial contour curve before correction;
accordingly, the acquiring of the corrected skull image of the patient comprises:
acquiring an initial image of the skull scanned by a patient after correction;
identifying a contour of the corrected first incisor and the corrected first lip contour curve in the corrected initial image;
and extracting a labial curve in the contour of the corrected first incisor, and generating the corrected skull image by combining the corrected first lip contour curve.
5. The method of claim 1, wherein the first incisor is an upper or lower incisor of the patient.
6. A method for profile prediction, comprising:
acquiring a pre-correction feature image of a target patient; the pre-correction feature image comprises a position offset relation among a labial curve of a first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction;
inputting the pre-correction feature image into a prediction model obtained by the generation method of the prediction model according to claim 1 to obtain a predicted corrected feature image; the corrected feature image comprises a position offset relation between a labial curve of the corrected first incisor and a corrected first lip contour curve;
and obtaining the predicted corrected skull side appearance image based on the predicted corrected characteristic image.
7. An apparatus for generating a prediction model, wherein the prediction model is used for performing prediction based on a pre-correction feature image, the apparatus comprising:
the first acquisition module is used for acquiring training sample data; the training sample data comprises a characteristic image before correction and a characteristic image after correction; the pre-correction feature image comprises a position offset relation among a labial curve of the first incisor before correction, an adjusted labial curve of the first incisor and a first labial contour curve before correction; the corrected characteristic image comprises a position offset relation between a labial curve of the corrected first incisor and a corrected first labial contour curve;
the generating module is used for training an initial model based on the training sample data to obtain the prediction model;
the first acquisition module is specifically used for acquiring an image of a skull of a patient before correction; wherein the pre-correction skull image comprises marked labial curves of the first incisor before correction, labial curves of the first incisor after adjustment, and a first labial contour curve before correction; generating the pre-correction feature image based on the pre-correction skull image; acquiring an image of the corrected skull of the patient; wherein the corrected skull image comprises a marked labial curve of the corrected first incisor and a marked corrected first lip contour curve; generating the corrected characteristic image based on the corrected skull image;
the first acquisition module is specifically configured to generate six offset matrices based on the pre-correction skull image; the six deviation matrixes respectively correspond to X-axis deviation and Y-axis deviation between every two three curves; the three curves are a labial curve of the first incisor before correction, a labial curve of the first incisor after adjustment and a first labial contour curve before correction; generating six corresponding gray-scale images based on the six offset matrixes; superposing the six gray level images to obtain the characteristic image before correction;
the first acquisition module is specifically used for generating two offset matrixes based on the corrected skull image; the two offset matrixes respectively correspond to the X-axis offset and the Y-axis offset between the two curves; the two curves are a labial curve of the corrected first incisor and a contour curve of the corrected first lip; generating two corresponding gray-scale images based on the two offset matrixes; and superposing the two gray level images to obtain the corrected characteristic image.
8. An electronic device, comprising: a processor and a memory, the processor and the memory connected;
the memory is used for storing programs;
the processor is configured to run a program stored in the memory, to perform the method of any of claims 1-5, and/or to perform the method of claim 6.
CN202110683147.5A 2021-06-21 2021-06-21 Generation method and device of prediction model, side appearance prediction method and electronic equipment Active CN113256488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110683147.5A CN113256488B (en) 2021-06-21 2021-06-21 Generation method and device of prediction model, side appearance prediction method and electronic equipment


Publications (2)

Publication Number Publication Date
CN113256488A CN113256488A (en) 2021-08-13
CN113256488B (en) 2021-09-24

Family

ID=77188810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110683147.5A Active CN113256488B (en) 2021-06-21 2021-06-21 Generation method and device of prediction model, side appearance prediction method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113256488B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049350B (en) * 2021-12-15 2023-04-07 四川大学 Generation method, prediction method and device of alveolar bone contour prediction model


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10342638B2 (en) * 2007-06-08 2019-07-09 Align Technology, Inc. Treatment planning and progress tracking systems and methods
US9675305B2 (en) * 2014-06-03 2017-06-13 Ortho-Tain System and method for determining an orthodontic diagnostic analysis of a patient
CN107679449B (en) * 2017-08-17 2018-08-03 平安科技(深圳)有限公司 Lip motion method for catching, device and storage medium
WO2020005386A1 (en) * 2018-06-29 2020-01-02 Align Technology, Inc. Providing a simulated outcome of dental treatment on a patient
EP3620130A1 (en) * 2018-09-04 2020-03-11 Promaton Holding B.V. Automated orthodontic treatment planning using deep learning
CN110246580B (en) * 2019-06-21 2021-10-15 上海优医基医疗影像设备有限公司 Cranial image analysis method and system based on neural network and random forest
CN111460899B (en) * 2020-03-04 2023-06-09 达理 Soft and hard tissue characteristic topology identification and facial deformation prediction method based on deep learning
CN111557753B (en) * 2020-05-07 2021-04-23 四川大学 Method and device for determining target position of orthodontic incisor

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320325A (en) * 2018-01-04 2018-07-24 华夏天宇(北京)科技发展有限公司 The generation method and device of dental arch model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Accuracy of computer-aided prediction in soft tissue changes after orthodontic treatment";Xu Zhang等;《American Journal of Orthodontics and Dentofacial Orthopedics》;20191126;第156卷(第6期);第823-831页 *
"Orthodontic incisor retraction caused changes in the soft tissue chin area: a retrospective study";Wenxin Lu等;《BMC Oral Health》;20200415;第20卷(第108期);第1-7页 *

Also Published As

Publication number Publication date
CN113256488A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
Guyomarc'h et al. Anthropological facial approximation in three dimensions (AFA 3D): Computer‐assisted estimation of the facial morphology using geometric morphometrics
US7970628B2 (en) Method and system for providing dynamic orthodontic assessment and treatment profiles
US20200286223A1 (en) Method of analyzing dental image for correction diagnosis and apparatus using the same
WO2019141106A1 (en) C/s architecture-based dental beautification ar smart assistance method and apparatus
JP6607364B2 (en) Prediction system
US20230068041A1 (en) Method and apparatus for generating orthodontic teeth arrangement shape
CN113256488B (en) Generation method and device of prediction model, side appearance prediction method and electronic equipment
Yuan et al. Tooth segmentation and gingival tissue deformation framework for 3D orthodontic treatment planning and evaluating
CN113554607A (en) Tooth body detection model, generation method and tooth body segmentation method
JP2020526302A (en) An instrument that uses a 3D scan to track the gingival line and display periodontal measurements
CN111481208B (en) Auxiliary system, method and storage medium applied to joint rehabilitation
CN113345069A (en) Modeling method, device and system of three-dimensional human body model and storage medium
KR102523821B1 (en) Method, server and computer program for providing surgical simulations
CN115268531A (en) Water flow temperature regulation control method, device, equipment and storage medium for intelligent bathtub
CN114283219A (en) Method, device, equipment and medium for generating simulated postoperative CBCT (cone beam computed tomography) image
CN112837812A (en) Intelligent re-diagnosis method for orthodontics and related device
CN111466933A (en) Spine mobility measuring method and system
CN114049350B (en) Generation method, prediction method and device of alveolar bone contour prediction model
CN113270172B (en) Method and system for constructing contour lines in skull lateral position slice
JP7405809B2 (en) Estimation device, estimation method, and estimation program
CN113344993B (en) Side appearance simulation method
KR102377629B1 (en) Artificial Intelligence Deep learning-based orthodontic diagnostic device for analyzing patient image and generating orthodontic diagnostic report and operation method thereof
KR102472416B1 (en) Method, server and computer program for performing face transformation simulation
CN107564072B (en) Pulse Doppler image processing method and device
CN114663345B (en) Fixed point measurement method, fixed point measurement device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant