CN115530855A - Control method and device of three-dimensional data acquisition equipment and three-dimensional data acquisition equipment
- Publication number: CN115530855A
- Application number: CN202211214603.2A
- Authority: CN (China)
- Prior art keywords: data, limb, dimensional, target object, posture
- Prior art date: 2022-09-30
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/54—Control of apparatus or devices for radiation diagnosis
- A61B6/545—Control of apparatus or devices for radiation diagnosis involving automatic set-up of acquisition parameters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0062—Arrangements for scanning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1116—Determining posture transitions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
Abstract
The application discloses a control method and apparatus for three-dimensional data acquisition equipment, and the three-dimensional data acquisition equipment itself. The method comprises: acquiring limb posture data of a first target object, wherein the limb posture data comprises three-dimensional coordinate data and texture image data of the limb posture; inputting the limb posture data into a deep learning model for recognition to obtain the limb posture of the first target object; acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment; and controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction. This solves the technical problem that, when the various scanners are used for data acquisition, the scanner or a computer must be controlled by contact, which greatly inconveniences the data collection work.
Description
Technical Field
The application relates to the field of medical instruments, and in particular to a control method and apparatus for three-dimensional data acquisition equipment, and to the three-dimensional data acquisition equipment itself.
Background
Three-dimensional data acquisition equipment for oral clinics generally comprises an intraoral scanner, a facial scanner, a Cone Beam CT (CBCT) machine, an extraoral scanner and the like, where each scanner is paired with a set of related accessories and equipment for data acquisition.
At present, when any of these scanners is used for data acquisition, the scanner or a computer must be controlled by contact, which brings great inconvenience to the data acquisition work.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the application provide a control method and apparatus for three-dimensional data acquisition equipment, and the three-dimensional data acquisition equipment itself, so as to at least solve the technical problem that, when each scanner is used for data acquisition, the scanner or a computer must be controlled by contact, which brings great inconvenience to the data acquisition work.
According to an aspect of an embodiment of the present application, there is provided a control method for three-dimensional data acquisition equipment, including: acquiring limb posture data of a first target object, wherein the limb posture data comprises three-dimensional coordinate data and texture image data of the limb posture; inputting the limb posture data into a deep learning model for recognition to obtain the limb posture of the first target object; acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment; and controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction.
Optionally, the deep learning model is generated by: obtaining a training data set, wherein the training data set comprises: three-dimensional coordinate data of a limb posture of the second target object, texture image data of the limb posture of the second target object, and the limb posture of the second target object; constructing a neural network model; training the neural network model based on the training data set to generate a deep learning model; and evaluating the generated deep learning model.
Optionally, obtaining a training data set comprises: acquiring three-dimensional coordinate data and texture image data of the limb postures of a plurality of types of second target objects, wherein the types comprise at least one of the following: skin tone, age group, gender, and occupation; and respectively acquiring three-dimensional coordinate data and texture image data of a plurality of limb postures of the plurality of types of second target objects.
Optionally, after the training data set is acquired, the method further includes: labeling the three-dimensional coordinate data and texture image data of the plurality of limb postures of the second target object respectively, to obtain the mapping relation between the second target object's limb-posture three-dimensional coordinate data and texture image data on the one hand, and the corresponding limb postures of the second target object on the other.
Optionally, inputting the limb posture data into the deep learning model for recognition to obtain the limb posture of the first target object includes: searching the mapping relation for the limb posture corresponding to the limb posture data of the first target object; and determining the found limb posture as the limb posture of the first target object.
Optionally, before generating the deep learning model based on the training data set, the method further includes: and selecting the three-dimensional coordinate data and the texture image data of the target limb posture from the three-dimensional coordinate data and the texture image data of the plurality of limb postures, wherein the accuracy of the limb posture information identified from the three-dimensional coordinate data and the texture image data of the target limb posture is higher than a preset threshold value.
Optionally, acquiring the limb posture data of the first target object further includes: establishing a three-dimensional data model by using the three-dimensional coordinate data of the limb posture and the texture image data; the three-dimensional data model is determined as limb pose data of the first target object.
Optionally, determining the three-dimensional data model as limb pose data of the first target object comprises: under the condition that the number of the three-dimensional data models is multiple, identifying the multiple three-dimensional data models; and determining the recognized target three-dimensional data model as the limb posture data of the first target object.
Optionally, the acquisition device for acquiring the limb posture data of the first target object is a face scanner; the three-dimensional data acquisition device includes one or more of an intraoral scanner, a facial scanner, an intra-ear scanner, a dental cast scanner, a foot scanner, and a cone-beam CT machine.
Optionally, acquiring the control instruction corresponding to the limb posture includes acquiring at least one of the following control instructions: a start operation instruction for controlling the three-dimensional data acquisition equipment to start scanning data; a stop operation instruction for controlling the equipment to stop scanning data; a rotation instruction for controlling the equipment to rotate; a confirmation instruction for confirming the action currently instructed and controlling the equipment to execute the next action; and a switching instruction for switching the control object when the three-dimensional data acquisition equipment comprises a plurality of intraoral scanners, facial scanners, in-ear scanners, dental cast scanners, foot scanners and cone beam CT machines.
According to another aspect of the embodiments of the present application, there is also provided a control device for a three-dimensional data acquisition apparatus, including: a first obtaining module, configured to obtain limb posture data of a first target object, where the limb posture data includes: three-dimensional coordinate data and texture image data; the recognition module is used for inputting the limb posture data into the deep learning model for recognition to obtain the limb posture of the first target object; the second acquisition module is used for acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment; and the control module is used for controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction.
According to another aspect of the embodiments of the present application, there is also provided three-dimensional data acquisition equipment, including: an acquisition device and a processor, wherein the acquisition device is connected to the processor and is configured to acquire the limb posture data of a target object and transmit it to the processor, the limb posture data comprising three-dimensional coordinate data and texture image data of the limb posture; and the processor is configured to execute the above control method of the three-dimensional data acquisition equipment.
According to another aspect of the embodiments of the present application, a nonvolatile storage medium is further provided, where the nonvolatile storage medium includes a stored program, and when the program runs, a device in which the nonvolatile storage medium is located is controlled to execute the above control method for a three-dimensional data acquisition device.
According to still another aspect of the embodiments of the present application, there is also provided a processor for executing a program stored in a memory, wherein, when running, the program executes the above control method of the three-dimensional data acquisition equipment.
In the embodiments of the present application, the following scheme is adopted: acquiring limb posture data of a first target object, wherein the limb posture data comprises three-dimensional coordinate data and texture image data of the limb posture; inputting the limb posture data into a deep learning model for recognition to obtain the limb posture of the first target object; acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment; and controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction. By recognizing the user's limb posture with a deep learning model and then controlling the three-dimensional data acquisition equipment with the control instruction corresponding to that posture, the method realizes non-contact control of the three-dimensional data acquisition equipment, improves the efficiency of data acquisition, and achieves the technical effect required by sterile operation, thereby solving the technical problem that the need to control scanners or computers by contact greatly inconveniences data acquisition.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a control method of a three-dimensional data acquisition device according to an embodiment of the present application;
fig. 2 is a block diagram of a control device of a three-dimensional data acquisition apparatus according to an embodiment of the present application;
FIG. 3 is a block diagram of a three-dimensional data acquisition device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a deep neural network according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
Intraoral scanner: a device for acquiring three-dimensional data of the teeth and gingiva inside the oral cavity, also called an oral digital impression apparatus. An intraoral scanner directly acquires the three-dimensional shape data of the teeth or gums, which is used directly in the processing and restoration of teeth to improve treatment efficiency and to reduce the accumulated error introduced by data conversion in the traditional workflow.
Face scanner: the appearance of the face is often a significant aid to oral diagnosis and treatment; a face scanner directly obtains three-dimensional shape data and texture information of the facial features on the principle of optical imaging.
Currently, the means of obtaining dental model data in the field of dental diagnosis and treatment has gradually shifted from three-dimensional scanning of impressions to intraoral three-dimensional scanning. The advent of this technology can be considered another revolution in the digital processing of teeth: it abandons the workflow of taking an impression, casting a model and then scanning it, and instead obtains three-dimensional tooth data directly by scanning the mouth. In process time this saves the impression-taking and model-casting steps; in cost it saves the materials, labor and model shipping fees those steps require; and in patient experience it avoids the discomfort of impression taking. Given these advantages, the technology is certain to develop considerably.
An oral digital impression instrument, also called an intraoral three-dimensional scanner, is a device that scans the patient's oral cavity directly with a probing optical scanning head to obtain the three-dimensional morphology and color texture information of the surfaces of soft and hard tissues such as teeth, gums and mucosa inside the mouth. One implementation adopts the active structured-light triangulation imaging principle: a digital projection system projects active light patterns, a camera system captures them, and algorithmic processing then performs three-dimensional reconstruction and stitching.
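For illustration only (this formula is not part of the patent text), the textbook triangulation relation behind such structured-light systems, in its rectified projector-camera form, is

$$ z = \frac{f \cdot b}{d} $$

where \(z\) is the depth of a surface point, \(f\) the camera focal length, \(b\) the baseline between projector and camera, and \(d\) the disparity at which the projected pattern is observed.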
The face scanner is the mainstream technology for acquiring facial data in oral and maxillofacial diagnosis: it performs three-dimensional reconstruction on optical principles to obtain three-dimensional shape data and texture information of the facial features, and the 3D face scan is integrated into the Digital Smile Design (DSD) workflow to replace the original 2D photographs.
According to an embodiment of the present application, there is provided an embodiment of a method for controlling three-dimensional data acquisition equipment. It should be noted that the steps illustrated in the flowchart of the drawings may be executed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be executed in a different order.
Fig. 1 is a flowchart of a control method of a three-dimensional data acquisition device according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S102, obtaining limb posture data of a first target object, wherein the limb posture data comprises: three-dimensional coordinate data of the limb pose and texture image data.
The first target object is the user currently operating the three-dimensional data acquisition equipment. The limb posture may be a hand gesture, a head posture, a facial posture (facial expression), a body posture, or the like. In the embodiments provided herein, the limb posture is a hand gesture or a facial posture.
A texture image generally refers to image texture, a visual feature reflecting homogeneity in an image; it embodies the arrangement properties of surface structures on an object's surface that change slowly or periodically.
Step S104, inputting the limb posture data into the deep learning model for recognition to obtain the limb posture of the first target object.
Step S106, acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment.
According to an alternative embodiment of the application, different limb gestures correspond to different control commands. For example, a fist gesture corresponds to an instruction to start scanning, and an open-palm gesture corresponds to an instruction to pause scanning.
Step S108, controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction.
In this step, if the collected hand gesture of the user is a fist, the three-dimensional data acquisition equipment is controlled to start scanning.
Through the steps, the limb posture of the user is recognized through the deep learning model, and then the three-dimensional data acquisition equipment is controlled through the control instruction corresponding to the limb posture of the user, so that the non-contact control of the three-dimensional data acquisition equipment is realized, the data acquisition efficiency of the three-dimensional data acquisition equipment is improved, and the technical effect of meeting the requirement of sterile operation is realized.
According to an alternative embodiment of the present application, the deep learning model is generated by: obtaining a training data set, wherein the training data set comprises: three-dimensional coordinate data of a limb posture of the second target object, texture image data of the limb posture of the second target object, and the limb posture of the second target object; constructing a neural network model; training the neural network model based on a training data set to generate a deep learning model; and evaluating the generated deep learning model.
It should be understood that the second target object refers to a plurality of target objects: a large amount of training data is required in the training stage of the deep learning model, so three-dimensional coordinate data and texture image data of the limb postures of many target objects are needed.
In the embodiment provided by the present application, the training process of the deep learning model includes the following steps:
1) A gesture data set is collected. Different gesture postures can be collected, where different gesture postures correspond to different control instructions.
2) A neural network model is constructed. Deep Neural Networks (DNN) are the basis of deep learning. As shown in the DNN diagram of Fig. 4, a deep neural network generally comprises an input layer, hidden layers, and an output layer.
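The following is a minimal sketch of such a network in Python with PyTorch, for illustration only; the layer sizes, the flattened input dimension and the six gesture classes are assumptions, not values taken from the patent.

```python
# A minimal sketch of the DNN described above (input layer, hidden layers,
# output layer). Layer sizes and class count are illustrative assumptions.
import torch
import torch.nn as nn

class GestureDNN(nn.Module):
    def __init__(self, input_dim: int = 2048, num_classes: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 512),    # input layer -> first hidden layer
            nn.ReLU(),
            nn.Linear(512, 128),          # second hidden layer
            nn.ReLU(),
            nn.Linear(128, num_classes),  # output layer: one logit per gesture
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: flattened features of one limb posture (3D coordinates + texture)
        return self.net(x)
```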
To make the most of the limited training data, the data are augmented through a series of random transformations, so that the model never sees two identical pictures; this helps suppress overfitting and gives the model better generalization.
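A sketch of such random-transformation augmentation, assuming torchvision; the concrete transforms and their parameters are illustrative choices, not specified by the patent.

```python
# Illustrative random transformations applied to the texture images so that
# no two training pictures are identical.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(15),           # small random rotations
    transforms.RandomHorizontalFlip(),       # mirror left/right hands
    transforms.ColorJitter(brightness=0.2),  # vary texture-image lighting
])
```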
3) The neural network model is trained. Because of the large data volume, model training takes a relatively long time, so a Graphics Processing Unit (GPU) is used for acceleration. With GPU acceleration, processing completes in only a few seconds, whereas on the CPU alone it could take tens of minutes or even up to half an hour.
4) The model is evaluated and verified. Over-fitting or under-fitting may occur during training, so the best result is obtained by repeatedly debugging and retraining while adjusting parameters such as the batch size, the choice of activation function, the optimizer and the learning rate. In addition, the DNN may be swapped for a more suitable convolutional neural network (CNN) model for test validation.
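A hedged sketch of the GPU-accelerated training loop discussed in steps 3) and 4); the dataset object, batch size, optimizer and learning rate are placeholders to be tuned as the text describes, none of them prescribed by the patent.

```python
# Illustrative GPU-accelerated training loop for the gesture classifier.
import torch
from torch.utils.data import DataLoader

def train(model, train_set, epochs: int = 20, lr: float = 1e-3):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)                  # use the GPU when available
    loader = DataLoader(train_set, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for features, labels in loader:
            features, labels = features.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()                   # backpropagate the loss
            optimizer.step()
    return model
```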
According to another alternative embodiment of the present application, obtaining the training data set comprises the following steps: acquiring three-dimensional coordinate data and texture image data of the limb postures of a plurality of types of second target objects, wherein the types comprise at least one of the following: skin color, age group, gender, and occupation; and respectively acquiring three-dimensional coordinate data and texture image data of a plurality of limb postures of the plurality of types of second target objects.
In this step, three-dimensional data and texture images of the gesture postures of users of various skin colors, age groups, sexes and professions are collected.
As an alternative embodiment, three-dimensional data and texture images of a plurality of gesture postures also need to be collected, for example, the user extending an open palm, clenching a fist, or extending 1, 2, 3 or 4 fingers.
In this way, collecting limb posture data of different types of users, and of different posture types, as the training data set improves the recognition accuracy of the deep learning model.
In some optional embodiments of the present application, after the training data set is obtained, the three-dimensional coordinate data and texture image data of the multiple limb postures of the second target object need to be labeled respectively, so as to obtain the mapping relation between the second target object's limb-posture three-dimensional coordinate data and texture image data on the one hand, and the corresponding limb postures of the second target object on the other.
In this step, the collected large samples of limb-posture three-dimensional data and texture images are labeled, the several gesture postures obtained are clustered and classified, and the mapping relation between data samples and gesture postures is determined. The more samples are collected, the more accurately the mapping converges, the better the accuracy and timeliness of the feedback given to the user, and the better the user experience.
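One illustrative way to represent the labeled samples and the resulting data-to-gesture mapping is sketched below; the field names are assumptions, not the patent's actual data format.

```python
# Illustrative containers for labeled limb-posture samples and the mapping
# from gesture labels to their data samples.
from dataclasses import dataclass
import numpy as np

@dataclass
class LabeledPoseSample:
    coords: np.ndarray   # N x 3 three-dimensional coordinates of the pose
    texture: np.ndarray  # H x W x 3 texture image of the same pose
    gesture: str         # annotated gesture label, e.g. "fist"

def build_mapping(samples: list) -> dict:
    """Group labeled samples by gesture to form the data-to-gesture mapping."""
    mapping: dict = {}
    for sample in samples:
        mapping.setdefault(sample.gesture, []).append(
            (sample.coords, sample.texture)
        )
    return mapping
```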
In other optional embodiments of the present application, step S104, inputting the limb posture data into the deep learning model for recognition to obtain the limb posture of the first target object, is implemented as follows: searching the mapping relation for the limb posture corresponding to the limb posture data of the first target object; and determining the found limb posture as the limb posture of the first target object.
As described above, the mapping relation between data samples and gesture postures is determined by labeling the acquired large samples of limb-posture three-dimensional coordinate data and texture images.
When the limb posture of the user operating the three-dimensional data acquisition equipment is recognized, the limb posture corresponding to the user's limb posture data is looked up in the mapping relation. For example, if the collected limb posture data are the three-dimensional coordinate data and texture image of a fist gesture, the corresponding fist posture can be found in the mapping relation, and the control instruction corresponding to the fist posture is then determined.
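For illustration, the recognize-then-look-up step might be sketched as below; the class-name list and the model interface are assumptions.

```python
# The trained model predicts a gesture class for the incoming pose data;
# the gesture's control instruction is then retrieved.
import torch

GESTURE_CLASSES = ["palm", "fist", "one", "two", "three", "four"]

def recognize(model, features: torch.Tensor) -> str:
    """Return the gesture label predicted for one flattened pose sample."""
    model.eval()
    with torch.no_grad():
        logits = model(features.unsqueeze(0))  # add a batch dimension
    return GESTURE_CLASSES[int(logits.argmax(dim=1))]
```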
As an alternative embodiment of the present application, before the deep learning model is generated based on the training data set, the three-dimensional coordinate data and texture image data of target limb postures also need to be selected from the three-dimensional coordinate data and texture image data of the multiple limb postures, where the accuracy of the limb posture information recognized from the selected data is higher than a preset threshold.
In an alternative embodiment, the posture data of gestures that are easily misrecognized are deleted from the training data set, leaving the three-dimensional coordinate data and texture image data of the target limb postures. In this way, on the premise of preserving the training precision of the deep learning model, removing redundant data from the training data set speeds up training.
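A minimal sketch of this selection step, assuming per-gesture accuracies measured on a held-out split; the threshold value is an assumption.

```python
# Keep only gesture classes whose held-out recognition accuracy exceeds
# the preset threshold; the rest are dropped from the training data set.
def select_reliable_gestures(accuracy_per_gesture: dict,
                             threshold: float = 0.95) -> set:
    """Return the gesture classes whose accuracy passes the bar."""
    return {g for g, acc in accuracy_per_gesture.items() if acc >= threshold}
```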
According to an alternative embodiment of the present application, the step S102 of obtaining the body posture data of the first target object is executed, which may be further implemented by: establishing a three-dimensional data model by using the three-dimensional coordinate data of the limb posture and the texture image data; the three-dimensional data model is determined as limb pose data of the first target object.
As an alternative embodiment, the limb posture data may be a three-dimensional data model reconstructed from the three-dimensional coordinate data and texture image data of the limb posture.
In this step, model reconstruction is performed using the three-dimensional coordinate data of the limb posture of the target object and the texture image data to obtain a three-dimensional data model, and the three-dimensional data model is used as the limb posture data of the target object.
Reconstructing a three-dimensional data model from the three-dimensional coordinate data and texture image data of the target object's limb posture improves the accuracy with which the target object's limb posture is recognized.
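A minimal sketch of such a fusion, assuming the Open3D library; the patent does not specify a concrete reconstruction pipeline, so this merely combines the coordinates and per-point colors sampled from the texture into one textured point cloud.

```python
# Fuse the pose's 3D coordinates with per-point colors into one model.
import numpy as np
import open3d as o3d

def build_pose_model(xyz: np.ndarray, rgb: np.ndarray) -> o3d.geometry.PointCloud:
    """xyz: N x 3 coordinates; rgb: N x 3 colors in [0, 1] from the texture."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)  # geometry from coordinates
    pcd.colors = o3d.utility.Vector3dVector(rgb)  # appearance from texture
    return pcd
```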
According to another alternative embodiment of the application, determining the three-dimensional data model as the limb posture data of the first target object comprises the following steps: when there are multiple three-dimensional data models, identifying the multiple three-dimensional data models; and determining the identified target three-dimensional data model as the limb posture data of the first target object.
In an optional embodiment, the three-dimensional data acquired by the three-dimensional data acquisition equipment include both face data and hand data, so the three-dimensional data models reconstructed from them include a facial three-dimensional data model and a hand three-dimensional data model; the models are identified, and the hand three-dimensional data model is extracted as the limb posture data.
In other alternative embodiments of the present application, executing step S106 to obtain a control instruction corresponding to the limb posture includes obtaining at least one of the following control instructions: a start operation instruction for controlling the three-dimensional data acquisition equipment to start scanning data; a stop operation instruction for controlling the equipment to stop scanning data; a rotation instruction for controlling the equipment to rotate; a confirmation instruction for confirming the action currently instructed and controlling the equipment to execute the next action; and a switching instruction for switching the control object when the three-dimensional data acquisition equipment comprises a plurality of intraoral scanners, facial scanners, in-ear scanners, dental cast scanners, foot scanners and cone beam CT machines.
In an embodiment provided by the present application, the control instructions corresponding to limb gestures include: a start-scanning instruction, for controlling the three-dimensional data acquisition equipment to start acquiring data; a stop-scanning instruction, for controlling the equipment to stop acquiring data; an instruction for controlling rotation of the equipment; and a confirm-and-proceed instruction, which confirms the operation currently being executed and moves the equipment into the next operation of the workflow.
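For illustration only, dispatching these instruction types to a scanner could look like the following; the gesture-to-instruction table and the scanner method names are assumptions, not bindings given in the patent.

```python
# Hypothetical gesture-to-instruction table and dispatcher covering the
# instruction types listed above, including scanner switching.
GESTURE_TO_INSTRUCTION = {
    "fist": "start",    # start scanning
    "palm": "stop",     # stop scanning
    "one": "rotate",    # rotate the equipment
    "two": "confirm",   # confirm and move to the next step
    "three": "switch",  # switch control to another scanner
}

def execute(gesture: str, scanners: list, active: int) -> int:
    """Run the instruction mapped to `gesture`; return the active scanner index."""
    instruction = GESTURE_TO_INSTRUCTION.get(gesture)
    scanner = scanners[active]
    if instruction == "start":
        scanner.start_scan()
    elif instruction == "stop":
        scanner.stop_scan()
    elif instruction == "rotate":
        scanner.rotate()
    elif instruction == "confirm":
        scanner.next_step()
    elif instruction == "switch":
        active = (active + 1) % len(scanners)  # cycle to the next device
    return active
```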
According to an alternative embodiment of the present application, the acquisition device that acquires the limb pose data of the first target object is a facial scanner; the three-dimensional data acquisition device includes one or more of an intraoral scanner, a facial scanner, an intra-ear scanner, a dental cast scanner, a foot scanner, and a cone-beam CT machine.
It should be noted that the three-dimensional data acquisition equipment controlled in step S108, and the acquisition device that collects the limb posture data of the first target object in step S102, may each be a handheld scanner or a fixed scanner, and the two may be the same device or different devices.
For example, the acquisition device that collects the limb posture data of the first target object in step S102 may be a face scanner or another three-dimensional scanner, and the three-dimensional data acquisition equipment controlled in step S108 may include one or more of an intraoral scanner, a face scanner, an in-ear scanner, a dental model scanner, a foot scanner, and a Cone Beam CT (CBCT) machine.
When the acquisition device that collects the limb posture data of the first target object in step S102 is a facial scanner and the equipment controlled in step S108 is a medical scanner such as an intraoral scanner, the inconvenience that contact control of a scanner or computer brings to data acquisition is avoided, the safety risks of contact control are reduced, and contamination and disease transmission caused by blood and saliva generated during treatment are avoided.
When the three-dimensional data collection apparatus controlled in step S108 includes a plurality of scanners, the control instruction corresponding to the limb posture further includes a switching instruction to switch the control object among the plurality of scanners included in the three-dimensional data collection apparatus.
Fig. 2 is a block diagram of a control device of a three-dimensional data acquisition apparatus according to an embodiment of the present application, and as shown in fig. 2, the control device includes:
a first obtaining module 20, configured to obtain limb posture data of the first target object, where the limb posture data includes: three-dimensional coordinate data and texture image data;
the first target object is a user who is currently operating the three-dimensional data acquisition equipment. The limb gesture may be a hand gesture, a head gesture, or a body gesture, etc. In the embodiments provided herein, the limb gesture is a hand gesture.
The recognition module 22 is configured to input the limb posture data into the deep learning model for recognition, so as to obtain a limb posture of the first target object;
the second obtaining module 24 is configured to obtain a control instruction corresponding to the limb posture, where the control instruction is used to control the three-dimensional data acquisition device;
according to an alternative embodiment of the application, different limb gestures correspond to different control commands. For example, the fist position corresponds to an instruction to start scanning, and the palm position corresponds to an instruction to pause scanning.
And the control module 26 is used for controlling the three-dimensional data acquisition equipment to execute actions corresponding to the control instructions.
It should be noted that, reference may be made to the description related to the embodiment shown in fig. 1 for a preferred implementation of the embodiment shown in fig. 2, and details are not repeated here.
Fig. 3 is a block diagram of a three-dimensional data acquisition apparatus according to an embodiment of the present application, and as shown in fig. 3, the three-dimensional data acquisition apparatus includes: an acquisition device 30 and a processor 32, wherein,
the acquisition device 30 is connected with the processor 32, and is used for acquiring the limb posture data of the target object and sending the limb posture data to the processor 32, wherein the limb posture data includes: three-dimensional coordinate data and texture image data of the limb posture;
in one embodiment provided by the present application, the acquisition device 30 is a camera installed on the three-dimensional data acquisition equipment. The target object is a user who is currently operating the three-dimensional data acquisition equipment. The limb gesture may be a hand gesture, a head gesture, or a body gesture, etc. In the embodiments provided herein, the limb gesture is a hand gesture.
And a processor 32 for executing the above control method of the three-dimensional data acquisition device.
It should be noted that, reference may be made to the description related to the embodiment shown in fig. 1 for a preferred implementation of the embodiment shown in fig. 3, and details are not described here again.
The embodiment of the application also provides a nonvolatile storage medium, which comprises a stored program, wherein, when the program runs, the device in which the nonvolatile storage medium is located is controlled to execute the above control method of the three-dimensional data acquisition equipment.
The nonvolatile storage medium stores a program for executing the following functions: acquiring limb posture data of a first target object, wherein the limb posture data comprises: three-dimensional coordinate data and texture image data of the limb posture; inputting the limb posture data into a deep learning model for recognition to obtain a limb posture of a first target object; acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment; and controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction.
The embodiment of the present application further provides a processor, which is configured to run a program stored in a memory; when the program runs, it executes the above control method of the three-dimensional data acquisition equipment.
The processor is used for running a program for executing the following functions: acquiring limb posture data of a first target object, wherein the limb posture data comprises: three-dimensional coordinate data and texture image data of the limb posture; inputting the limb posture data into a deep learning model for recognition to obtain a limb posture of a first target object; acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment; and controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the related art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered within the protection scope of the present application.
Claims (13)
1. A control method of a three-dimensional data acquisition device, characterized by comprising:
acquiring limb posture data of a first target object, wherein the limb posture data comprises: three-dimensional coordinate data and texture image data of limb postures;
inputting the limb posture data into a deep learning model for recognition to obtain a limb posture of the first target object;
acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling a three-dimensional data acquisition device;
and controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction.
2. The method of claim 1, wherein the deep learning model is generated by:
obtaining a training data set, wherein the training data set comprises: three-dimensional coordinate data of a limb pose of a second target object, texture image data of a limb pose of the second target object, and a limb pose of the second target object;
constructing a neural network model;
training the neural network model based on the training data set to generate the deep learning model;
evaluating the generated deep learning model.
3. The method of claim 2, wherein obtaining a training data set comprises:
acquiring three-dimensional coordinate data and texture image data of the limb postures of a plurality of types of second target objects, wherein the types include at least one of: skin tone, age group, gender, and occupation;
and respectively acquiring three-dimensional coordinate data and texture image data of a plurality of limb postures of the plurality of types of second target objects.
4. The method of claim 3, wherein after acquiring the training data set, the method further comprises:
and respectively labeling the three-dimensional coordinate data and the texture image data of the plurality of limb postures of the second target object to obtain the mapping relation between the three-dimensional coordinate data and the texture image data of the limb postures of the second target object and the corresponding limb postures of the second target object.
5. The method of claim 4, wherein inputting the limb posture data into a deep learning model for recognition to obtain the limb posture of the first target object comprises:
searching the mapping relation for the limb posture corresponding to the limb posture data of the first target object;
and determining the found limb posture as the limb posture of the first target object.
6. The method of claim 3, wherein prior to generating the deep learning model based on the training dataset, the method further comprises:
and selecting the three-dimensional coordinate data and the texture image data of the target limb posture from the three-dimensional coordinate data and the texture image data of the multiple limb postures, wherein the accuracy of the limb posture information identified from the three-dimensional coordinate data and the texture image data of the target limb posture is higher than a preset threshold value.
7. The method of claim 1, wherein obtaining limb pose data for the first target object further comprises:
establishing a three-dimensional data model by using the three-dimensional coordinate data of the limb posture and the texture image data;
determining the three-dimensional data model as limb pose data of the first target object.
8. The method of claim 7, wherein determining the three-dimensional data model as limb pose data of the first target object comprises:
under the condition that the number of the three-dimensional data models is multiple, identifying the multiple three-dimensional data models;
and determining the identified target three-dimensional data model as the limb posture data of the first target object.
9. The method of claim 1,
the acquisition equipment for acquiring the limb posture data of the first target object is a face scanner;
the three-dimensional data acquisition device includes one or more of an intraoral scanner, a facial scanner, an in-ear scanner, a dental cast scanner, a foot scanner, and a cone-beam CT machine.
10. The method of claim 1, wherein obtaining the control command corresponding to the limb gesture comprises obtaining at least one of:
a start operation instruction for controlling the three-dimensional data acquisition equipment to start scanning data;
a stop operation instruction for controlling the three-dimensional data acquisition equipment to stop scanning data;
a rotation instruction for controlling rotation of the three-dimensional data acquisition device;
a confirmation instruction, which is used for determining the action of the current instruction of the three-dimensional data acquisition equipment and controlling the three-dimensional data acquisition equipment to execute the next action;
a switching instruction for switching the control object when the three-dimensional data acquisition device includes a plurality of intraoral scanners, facial scanners, in-ear scanners, dental cast scanners, foot scanners, and cone beam CT machines.
11. A control device of a three-dimensional data acquisition device, comprising:
a first obtaining module, configured to obtain limb posture data of a first target object, where the limb posture data includes: three-dimensional coordinate data and texture image data;
the recognition module is used for inputting the limb posture data into a deep learning model for recognition to obtain a limb posture of the first target object;
the second acquisition module is used for acquiring a control instruction corresponding to the limb posture, wherein the control instruction is used for controlling the three-dimensional data acquisition equipment;
and the control module is used for controlling the three-dimensional data acquisition equipment to execute the action corresponding to the control instruction.
12. A three-dimensional data acquisition device, comprising: a collection device and a processor, wherein,
the acquisition device is connected with the processor and used for acquiring the limb posture data of the target object and sending the limb posture data to the processor, wherein the limb posture data comprises: three-dimensional coordinate data and texture image data of the limb posture;
the processor is configured to execute the control method of the three-dimensional data acquisition apparatus according to any one of claims 1 to 10.
13. A non-volatile storage medium, characterized in that the non-volatile storage medium comprises a stored program, wherein, when the program runs, a device in which the non-volatile storage medium is located is controlled to execute the control method of the three-dimensional data acquisition device according to any one of claims 1 to 10.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202211214603.2A CN115530855A (en) | 2022-09-30 | 2022-09-30 | Control method and device of three-dimensional data acquisition equipment and three-dimensional data acquisition equipment |
| PCT/CN2023/117804 WO2024067027A1 (en) | 2022-09-30 | 2023-09-08 | Control method and apparatus for three-dimensional data acquisition device, and three-dimensional data acquisition device |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202211214603.2A CN115530855A (en) | 2022-09-30 | 2022-09-30 | Control method and device of three-dimensional data acquisition equipment and three-dimensional data acquisition equipment |
Publications (1)
Publication Number | Publication Date
---|---
CN115530855A | 2022-12-30
Family
ID=84731829
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202211214603.2A (CN115530855A, pending) | 2022-09-30 | 2022-09-30 | Control method and device of three-dimensional data acquisition equipment and three-dimensional data acquisition equipment
Country Status (2)
Country | Link
---|---
CN (1) | CN115530855A (en)
WO (1) | WO2024067027A1 (en)
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115530855A (en) | 2022-09-30 | 2022-12-30 | 先临三维科技股份有限公司 | Control method and device of three-dimensional data acquisition equipment and three-dimensional data acquisition equipment
- 2022-09-30: Chinese application CN202211214603.2A filed (published as CN115530855A; status: active, pending)
- 2023-09-08: PCT application PCT/CN2023/117804 filed (published as WO2024067027A1; status: unknown)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103425239A (en) * | 2012-05-21 | 2013-12-04 | 刘鸿达 | Control system with facial expressions as input |
CN104714638A (en) * | 2013-12-17 | 2015-06-17 | 西门子公司 | Medical technology controller |
CN205672016U (en) * | 2016-05-04 | 2016-11-09 | 长安大学 | A kind of face texture biometric scanner under three-dimensional laser based on three-dimensional CCD |
CN112306220A (en) * | 2019-07-31 | 2021-02-02 | 北京字节跳动网络技术有限公司 | Control method and device based on limb identification, electronic equipment and storage medium |
CN112967796A (en) * | 2019-12-13 | 2021-06-15 | 深圳迈瑞生物医疗电子股份有限公司 | Non-contact control method and device for in-vitro diagnostic equipment and storage medium |
CN113514008A (en) * | 2020-04-10 | 2021-10-19 | 杭州思看科技有限公司 | Three-dimensional scanning method, three-dimensional scanning system, and computer-readable storage medium |
CN112149606A (en) * | 2020-10-02 | 2020-12-29 | 深圳市中安视达科技有限公司 | Intelligent control method and system for medical operation microscope and readable storage medium |
CN112418080A (en) * | 2020-11-20 | 2021-02-26 | 江苏奥格视特信息科技有限公司 | Finger action recognition method of laser scanning imager |
CN112241731A (en) * | 2020-12-03 | 2021-01-19 | 北京沃东天骏信息技术有限公司 | Attitude determination method, device, equipment and storage medium |
CN113362452A (en) * | 2021-06-07 | 2021-09-07 | 中南大学 | Hand gesture three-dimensional reconstruction method and device and storage medium |
CN114758076A (en) * | 2022-04-22 | 2022-07-15 | 北京百度网讯科技有限公司 | Training method and device for deep learning model for building three-dimensional model |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024067027A1 (en) * | 2022-09-30 | 2024-04-04 | 先临三维科技股份有限公司 | Control method and apparatus for three-dimensional data acquisition device, and three-dimensional data acquisition device |
Also Published As
Publication number | Publication date |
---|---
WO2024067027A1 (en) | 2024-04-04
Similar Documents
Publication | Title
---|---
US20220218449A1 (en) | Dental CAD automation using deep learning
US11735306B2 (en) | Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches
US12048600B2 (en) | Dental CAD automation using deep learning
CN113168910B (en) | Apparatus and method for operating a personal grooming or household cleaning appliance
WO2019141106A1 (en) | C/S architecture-based dental beautification AR smart assistance method and apparatus
Tian et al. | DCPR-GAN: dental crown prosthesis restoration using two-stage generative adversarial networks
KR20210136021A (en) | External object identification and image augmentation and/or filtering for intraoral scanning
JP6777917B1 (en) | Estimator, estimation system, estimation method, and estimation program
WO2024067027A1 (en) | Control method and apparatus for three-dimensional data acquisition device, and three-dimensional data acquisition device
Singi et al. | Extended arm of precision in prosthodontics: artificial intelligence
CN116246779B (en) | Dental diagnosis and treatment scheme generation method and system based on user image data
CN113257372A (en) | Oral health management related system, method, device and equipment
KR20200058316A (en) | Automatic tracking method of cephalometric point of dental head using dental artificial intelligence technology and service system
JP6771687B1 (en) | Estimator, estimation system, estimation method, and estimation program
JP7509371B2 (en) | Estimation device, estimation method, and estimation program
JP7496995B2 (en) | Estimation device, estimation method, and estimation program
JP6777916B1 (en) | Estimator, estimation system, estimation method, and estimation program
JP7428634B2 (en) | Control device, control method, and control program
US11992380B2 (en) | System and method for providing a pediatric crown
EP4202829A1 (en) | System and method for user anatomy and physiology imaging
CN115810099B (en) | Image fusion device for virtual immersion type depression treatment system
Takács et al. | Facial modeling for plastic surgery using magnetic resonance imagery and 3D surface data
EP4453866A1 (en) | System and method for user anatomy and physiology imaging
KR20240134031A (en) | Method for determining the color of an object, device for doing so and recording medium recording the command
CN111656382A (en) | Advertisement prompting method and advertisement prompting system
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination