WO2020177348A1 - Method and apparatus for generating a three-dimensional model - Google Patents

Method and apparatus for generating a three-dimensional model

Info

Publication number
WO2020177348A1
Authority
WO
WIPO (PCT)
Prior art keywords
modeled
skeleton
medical image
model
dimensional
Prior art date
Application number
PCT/CN2019/113902
Other languages
English (en)
French (fr)
Inventor
田飞
Original Assignee
百度在线网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司 filed Critical 百度在线网络技术(北京)有限公司
Publication of WO2020177348A1 publication Critical patent/WO2020177348A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the embodiments of the application relate to the field of computer technology, and in particular to a method and device for generating a three-dimensional model.
  • The Gray filter turns a picture into a grayscale image; the Invert filter inverts all of the visual properties of an object, including color, saturation, and brightness; the Xray filter makes an object show its outline and highlights that outline, producing the so-called X-ray film effect.
  • X-ray examination is the most commonly used medical imaging examination method.
  • the patient's X-ray film can reflect the patient's condition.
  • X-rays require people with professional medical knowledge (such as doctors) to understand them, and most patients have poor three-dimensional perception ability and cannot determine their own condition by directly observing X-rays. Therefore, the doctor needs to observe the patient's X-ray film and explain the patient's condition.
  • the embodiment of the present application proposes a method and device for generating a three-dimensional model.
  • In a first aspect, an embodiment of the present application provides a method for generating a three-dimensional model, including: acquiring a medical image to be modeled obtained by shooting an object to be modeled; inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate the skeleton parameters of an object in a medical image; and adjusting a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • the skeleton parameter generation model includes a feature extraction network and a fitting network.
  • In some embodiments, inputting the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled includes: inputting the medical image to be modeled into the feature extraction network to obtain the skeleton features of the object to be modeled; and inputting the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
  • In some embodiments, the method further includes: analyzing the medical image to be modeled to determine the position of an abnormal part of the object to be modeled in the medical image to be modeled; performing projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and marking the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model.
  • In some embodiments, analyzing the medical image to be modeled to determine the position of the abnormal part of the object to be modeled in the medical image to be modeled includes: inputting the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part.
  • In some embodiments, the method further includes: querying, from a pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part, where the abnormality-related information in the set includes introduction information and improvement information for the abnormality category; and associating the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  • In some embodiments, the skeleton parameter generation model is trained through the following steps: obtaining training samples, where the training samples include sample medical images and corresponding sample skeleton parameters; and training the skeleton parameter generation model with the sample medical images as input and the sample skeleton parameters as output.
  • In a second aspect, an embodiment of the present application provides an apparatus for generating a three-dimensional model, including: an acquiring unit configured to acquire a medical image to be modeled obtained by shooting an object to be modeled; a generating unit configured to input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate the skeleton parameters of an object in a medical image; and an adjusting unit configured to adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • the skeleton parameter generation model includes a feature extraction network and a fitting network.
  • In some embodiments, the generating unit includes: an extraction subunit configured to input the medical image to be modeled into the feature extraction network to obtain the skeleton features of the object to be modeled; and a fitting subunit configured to input the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
  • In some embodiments, the apparatus further includes: a determining unit configured to analyze the medical image to be modeled and determine the position of the abnormal part of the object to be modeled in the medical image to be modeled; a transforming unit configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and a labeling unit configured to mark the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model.
  • the determining unit is further configured to: input the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormal category of the abnormal part .
  • In some embodiments, the apparatus further includes: a query unit configured to query, from a pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part, where the abnormality-related information in the set includes introduction information and improvement information for the abnormality category; and an association unit configured to associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  • In some embodiments, the skeleton parameter generation model is trained through the following steps: obtaining training samples, where the training samples include sample medical images and corresponding sample skeleton parameters; and training the skeleton parameter generation model with the sample medical images as input and the sample skeleton parameters as output.
  • In a third aspect, an embodiment of the present application provides a server, which includes: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
  • an embodiment of the present application provides a computer-readable medium on which a computer program is stored, and when the computer program is executed by a processor, the method as described in any implementation manner in the first aspect is implemented.
  • According to the method and apparatus for generating a three-dimensional model provided by the embodiments of the present application, after the medical image to be modeled obtained by shooting the object to be modeled is acquired, the medical image to be modeled is input into the skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled; then the standard three-dimensional skeleton model is adjusted based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • the skeleton parameter is generated based on the skeleton parameter generation model, and the standard 3D skeleton parameter model is adjusted based on the skeleton parameter, so that the 3D skeleton model can be obtained quickly.
  • generating a three-dimensional skeleton model based on medical images for three-dimensional display is more intuitive and easy to understand.
  • Figure 1 is an exemplary system architecture in which the present application can be applied
  • Fig. 2 is a flowchart of an embodiment of a method for generating a three-dimensional model according to the present application
  • Fig. 3 is a flowchart of another embodiment of a method for generating a three-dimensional model according to the present application
  • FIG. 4 is a schematic diagram of an application scenario of the method for generating a three-dimensional model shown in FIG. 3;
  • Fig. 5 is a schematic structural diagram of an embodiment of an apparatus for generating a three-dimensional model according to the present application
  • Fig. 6 is a schematic structural diagram of a computer system suitable for implementing a server according to an embodiment of the present application.
  • FIG. 1 shows an exemplary system architecture 100 to which the method for generating a three-dimensional model of the present application or an embodiment of an apparatus for generating a three-dimensional model can be applied.
  • the system architecture 100 may include a terminal device 101, a network 102, and a server 103.
  • the network 102 is used to provide a medium of a communication link between the terminal device 101 and the server 103.
  • the network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables.
  • the user can use the terminal device 101 to interact with the server 103 through the network 102 to receive or send messages and so on.
  • Various client software such as image processing software, can be installed on the terminal device 101.
  • the terminal device 101 may be hardware or software.
  • the terminal device 101 may be various electronic devices that have a display screen and support three-dimensional model display. Including but not limited to smart phones, tablet computers, laptop portable computers and desktop computers, etc.
  • the terminal device 101 is software, it can be installed in the aforementioned electronic device. It can be implemented as multiple software or software modules, or as a single software or software module. There is no specific limitation here.
  • the server 103 may be a server that provides various services.
  • image processing server may analyze and process the acquired medical images and other data to be modeled, generate processing results (for example, a three-dimensional skeleton model of the object to be modeled), and push the processing results to the terminal device 101.
  • the server 103 may be hardware or software.
  • the server 103 can be implemented as a distributed server cluster composed of multiple servers, or as a single server.
  • the server 103 is software, it can be implemented as multiple software or software modules (for example, to provide distributed services), or as a single software or software module. There is no specific limitation here.
  • the method for generating a three-dimensional model provided by the embodiments of the present application is generally executed by the server 103, and accordingly, the device for generating a three-dimensional model is generally set in the server 103.
  • terminal devices, networks, and servers in FIG. 1 are merely illustrative. According to implementation needs, there can be any number of terminal devices, networks and servers.
  • FIG. 2 shows a flow 200 of an embodiment of the method for generating a three-dimensional model according to the present application.
  • the method for generating a 3D model includes the following steps:
  • Step 201 Acquire medical images to be modeled obtained by shooting an object to be modeled.
  • the execution subject of the method for generating a three-dimensional model can obtain the medical image to be modeled obtained by shooting the object to be modeled.
  • the medical image shooting device can shoot the object to be modeled to obtain the medical image to be modeled.
  • the above-mentioned execution subject may obtain the medical image to be modeled from a medical image shooting device or a terminal device (such as the terminal device 101 shown in FIG. 1) that stores the medical image to be modeled captured by the medical image shooting device.
  • the objects to be modeled may include, but are not limited to, parts of humans, parts of animals, and so on.
  • the parts of a person can include parts of the whole body of a person, and can also include parts of a person.
  • the local parts of a person may include, but are not limited to, the person's head, chest, legs, feet, shoulders, hands, elbows, and so on.
  • the part of an animal may include the whole body part of the animal, and may also include the local part of the animal.
  • the local parts of the animal may include, but are not limited to, the head, legs, claws, hooves, etc. of the animal.
  • the medical images to be modeled can be, for example, various two-dimensional images such as X-ray films, magnetic resonance images, and ultrasound images.
  • medical images to be modeled may include, but are not limited to, orthographic medical images, lateral medical images, and oblique medical images of the object to be modeled. Under normal circumstances, the medical image to be modeled here is an orthographic medical image of the object to be modeled.
  • Step 202 Input the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameter of the object to be modeled.
  • the above-mentioned execution subject may input the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameter of the object to be modeled.
  • the skeleton parameter generation model can be used to generate the skeleton parameters of the object in the medical image.
  • the skeletal parameter may be the three-dimensional description data of the part of the object to be modeled, including but not limited to the length, width and height data of the part of the object to be modeled.
  • For example, when the object to be modeled is the whole body of a person, the skeleton parameters may include, but are not limited to, the length, width and height data of the head, chest, legs, feet, shoulders, hands, elbows, and so on.
  • In some optional implementations of this embodiment, the above-mentioned execution body may pre-collect a large number of medical images and skeleton parameters of objects of the same category as the object to be modeled, store them correspondingly, and generate a correspondence table as the skeleton parameter generation model.
  • After the medical image to be modeled is acquired, the above-mentioned execution body may first calculate the similarity between the medical image to be modeled and each medical image in the correspondence table, and then look up the skeleton parameters of the object to be modeled from the correspondence table based on the calculated similarities. For example, the execution body may take the skeleton parameters of the object whose medical image has the highest similarity to the medical image to be modeled as the skeleton parameters of the object to be modeled.
  • In some optional implementations of this embodiment, the skeleton parameter generation model may be obtained by performing supervised training on an existing machine learning model (such as various artificial neural networks) using various machine learning methods and training samples. Specifically, the above-mentioned execution body can train the skeleton parameter generation model through the following steps:
  • First, training samples are obtained. Each training sample may include a sample medical image and corresponding sample skeleton parameters.
  • the sample medical image is a medical image obtained by shooting a sample object.
  • the sample skeleton parameter is the three-dimensional description data of the part of the sample object.
  • the sample medical image is taken as input, and the sample skeleton parameter is taken as output, and the skeleton parameter generation model is obtained by training.
  • Here, the above-mentioned execution body may input the sample medical image from the input side of an initial skeleton parameter generation model and, after processing by the initial skeleton parameter generation model, output the skeleton parameters of the sample object in the sample medical image from the output side. Subsequently, the above-mentioned execution body may calculate the generation accuracy of the initial skeleton parameter generation model based on the generated skeleton parameters of the sample object and the sample skeleton parameters. If the generation accuracy does not meet the preset constraint conditions, the parameters of the initial skeleton parameter generation model are adjusted and sample medical images continue to be input for model training; if the generation accuracy meets the preset constraint conditions, the model training is completed, and the initial skeleton parameter generation model at that point is taken as the skeleton parameter generation model.
  • The initial skeleton parameter generation model can be any of various parameter generation models with initialized parameters, such as a model composed of a feature extraction network and a fitting network.
  • The initialization parameters can be small, distinct random numbers.
  • the skeleton parameter generation model may include a feature extraction network and a fitting network.
  • the above-mentioned execution subject can first input the medical image to be modeled into the feature extraction network to obtain the skeleton feature of the object to be modeled; then input the skeleton feature of the object to be modeled into the fitting network to obtain the Skeleton parameters.
  • the feature extraction network may be, for example, a VGG16 model, which is used to extract the skeleton features of the object to be modeled.
  • the skeleton feature may be information describing the skeleton of the object to be modeled, including but not limited to various basic elements related to the skeleton (for example, skeleton action, skeleton outline, skeleton position, skeleton texture, etc.).
  • skeleton features can be represented by multi-dimensional vectors.
  • the fitting network can be composed of multiple convolutional layers and multiple fully connected layers for fitting skeleton parameters.
  • Step 203 Adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • the above-mentioned execution subject may adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • the standard three-dimensional skeleton model may be a three-dimensional skeleton model obtained by fusing a large number of three-dimensional skeleton models of objects of the same category as the object to be modeled.
  • the above-mentioned execution body may use, for example, a Morph machine learning algorithm to adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • In some optional implementations of this embodiment, in the case where the object to be modeled is a part of a person, the above-mentioned execution body may also send the three-dimensional skeleton model of the object to be modeled to the terminal device of the object to be modeled, and the terminal device can display the three-dimensional skeleton model stereoscopically for the object to be modeled to view.
  • In the case where the object to be modeled is a part of an animal, the above-mentioned execution body may also send the three-dimensional skeleton model of the object to be modeled to the terminal device of the owner of the object to be modeled, and the terminal device can display the three-dimensional skeleton model stereoscopically for the owner of the object to be modeled to view.
  • In the method for generating a three-dimensional model provided by this embodiment, after the medical image to be modeled is acquired, it is input into the skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled; then the standard three-dimensional skeleton model is adjusted based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • the skeleton parameter is generated based on the skeleton parameter generation model, and the standard 3D skeleton parameter model is adjusted based on the skeleton parameter, so that the 3D skeleton model can be obtained quickly.
  • generating a three-dimensional skeleton model based on medical images for three-dimensional display is more intuitive and easy to understand.
  • FIG. 3 shows a process 300 of another embodiment of the method for generating a three-dimensional model according to the present application.
  • the method for generating a 3D model includes the following steps:
  • Step 301 Acquire medical images to be modeled obtained by shooting the object to be modeled.
  • Step 302 Input the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameter of the object to be modeled.
  • Step 303 Adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • steps 301-303 have been described in detail in steps 201-203 in the embodiment shown in FIG. 2, and will not be repeated here.
  • Step 304 Analyze the medical image to be modeled, and determine the position of the abnormal part of the object to be modeled in the medical image to be modeled.
  • In this embodiment, the execution body of the method for generating a three-dimensional model (for example, the server 103 shown in FIG. 1) can analyze the medical image to be modeled to determine the position of the abnormal part of the object to be modeled in the medical image to be modeled.
  • the above-mentioned execution subject can analyze the skeleton parameters of each part of the object to be modeled in the medical image to be modeled. If the skeleton parameter of a certain part is abnormal, it means that the part is an abnormal part. Among them, the abnormal part may be the part where the disease occurs.
  • In some optional implementations of this embodiment, the above-mentioned execution body may input the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part.
  • the classification network can be used to identify abnormal parts.
  • classification networks can be obtained by supervised training of existing machine learning models (for example, various artificial neural networks, etc.) using various machine learning methods and training samples.
  • the classification network can be composed of three convolutional layers and two fully connected layers. The number of feature channels of the three convolutional layers is 32, 64, 128 from the front to the back. The resolution of the feature map is 64, 32, and 16 from the front to the back.
  • the first fully connected layer can output a 256-dimensional vector
  • The second fully connected layer can output a vector whose dimension is the number of abnormality categories of abnormal parts plus one (the extra dimension is a confidence node for the absence of abnormality).
  • the abnormal category may be the category of the site where the lesion occurs.
  • the abnormal categories may include, but are not limited to, frozen shoulder, shoulder dislocation, and so on.
  • Generally, for each part of the object to be modeled, a sub-image of that part needs to be cropped from the medical image to be modeled and input into the classification network for abnormality recognition. For example, to determine whether the shoulder is abnormal, a square picture of the shoulder is cropped from the medical image to be modeled, scaled to a preset size (for example, a resolution of 128×128), and input into the classification network, which outputs the probabilities of various lesions of the shoulder.
  • Step 305 Perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  • the above-mentioned execution subject may perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  • the projection transformation can transform the coordinates of the abnormal part in the medical image to be modeled of the object to be modeled into the coordinates in the three-dimensional skeleton model of the object to be modeled.
  • Step 306 Based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled, mark the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  • In this embodiment, the above-mentioned execution body can first find the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in that model, and then mark the found abnormal part so as to distinguish it from the normal parts.
  • the above-mentioned execution subject may mark the abnormal part with a different color from the normal part.
  • Step 307 Query the abnormality-related information corresponding to the abnormality category of the abnormal part from the pre-stored abnormality-related information set.
  • the above-mentioned execution subject may search for the abnormality-related information corresponding to the abnormality category of the abnormal part from the pre-stored abnormality-related information set.
  • the abnormality related information in the abnormality related information set may include introduction information and improvement information of the abnormality category.
  • Step 308: Associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  • the above-mentioned execution subject may associate the abnormality related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled. For example, when the abnormal part in the three-dimensional skeleton model of the object to be modeled is not clicked, the abnormal information corresponding to the abnormal category of the abnormal part is hidden. When a click operation is performed on an abnormal part in the three-dimensional skeleton model of the object to be modeled, the abnormal information corresponding to the abnormal category of the abnormal part is displayed.
  • Continuing to refer to FIG. 4, FIG. 4 is a schematic diagram of an application scenario of the method for generating a three-dimensional model shown in FIG. 3.
  • the X-ray film shooting device 410 captures the patient's X-ray film 401 and sends it to the patient's mobile phone 420.
  • the patient opens the image processing software in the mobile phone 420, selects the X-ray film 401, and clicks the upload button.
  • the patient's mobile phone 420 can send the X-ray film 401 to the server 430.
  • After receiving the X-ray film 401, the server 430 can first input the X-ray film 401 into the skeleton parameter generation model 402 to obtain the patient's skeleton parameters 403; then adjust the standard three-dimensional skeleton model 404 based on the patient's skeleton parameters 403 to obtain the patient's three-dimensional skeleton model 405; then analyze the three-dimensional skeleton model 405 and determine that the patient suffers from frozen shoulder; then perform projection transformation based on the position of the shoulder in the X-ray film 401 to obtain the position of the shoulder in the three-dimensional skeleton model 405, and annotate the shoulder in the three-dimensional skeleton model 405; then look up the disease information and treatment information 407 for frozen shoulder from the shoulder-lesion-related information set 406 and associate it with the shoulder in the three-dimensional skeleton model 405; and finally send the three-dimensional skeleton model 405 to the patient's mobile phone 420.
  • the patient can view his own three-dimensional skeleton model 405 on the mobile phone 420.
  • the patient performs a click operation on the shoulder in the three-dimensional skeleton model 405, the disease information and treatment information 407 of frozen shoulder can be displayed.
  • Compared with the embodiment corresponding to FIG. 2, the process 300 of the method for generating a three-dimensional model in this embodiment adds steps 304-308. Therefore, the solution described in this embodiment can mark the abnormal part in the three-dimensional skeleton model of the object to be modeled, which makes it easy to locate the abnormal part.
  • At the same time, the corresponding abnormality-related information is associated with the abnormal part, which makes it easy to understand the abnormality category of the abnormal part and to improve it.
  • this application provides an embodiment of a device for generating a three-dimensional model.
  • the device embodiment corresponds to the method embodiment shown in Fig. 2.
  • the device can be applied to various electronic devices.
  • the apparatus 500 for generating a three-dimensional model of this embodiment may include: an acquiring unit 501, a generating unit 502, and an adjusting unit 503.
  • the acquiring unit 501 is configured to acquire the medical image to be modeled obtained by shooting the object to be modeled
  • The generating unit 502 is configured to input the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate the skeleton parameters of an object in a medical image.
  • The adjusting unit 503 is configured to adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.
  • For the specific processing of the acquiring unit 501, the generating unit 502, and the adjusting unit 503 and the technical effects brought thereby, reference may be made to the relevant descriptions of step 201, step 202, and step 203 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • the skeleton parameter generation model includes a feature extraction network and a fitting network.
  • In some optional implementations of this embodiment, the generating unit 502 includes: an extraction subunit (not shown in the figure) configured to input the medical image to be modeled into the feature extraction network to obtain the skeleton features of the object to be modeled; and a fitting subunit (not shown in the figure) configured to input the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
  • In some optional implementations of this embodiment, the apparatus 500 for generating a three-dimensional model further includes: a determining unit (not shown in the figure) configured to analyze the medical image to be modeled and determine the position of the abnormal part of the object to be modeled in the medical image to be modeled; a transforming unit (not shown in the figure) configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and a labeling unit (not shown in the figure) configured to mark the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model.
  • the determining unit is further configured to: input the medical image to be modeled into a pre-trained classification network to obtain the abnormal part of the object to be modeled in the medical image to be modeled Location and abnormality category of abnormal parts.
  • In some optional implementations of this embodiment, the apparatus 500 for generating a three-dimensional model further includes: a query unit (not shown in the figure) configured to query, from a pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part, where the abnormality-related information in the set includes introduction information and improvement information for the abnormality category; and an association unit (not shown in the figure) configured to associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  • In some optional implementations of this embodiment, the skeleton parameter generation model is obtained by training through the following steps: obtaining training samples, where the training samples include sample medical images and corresponding sample skeleton parameters; and training the skeleton parameter generation model with the sample medical images as input and the sample skeleton parameters as output.
  • FIG. 6 shows a schematic structural diagram of a computer system 600 suitable for implementing a server (for example, the server 103 shown in FIG. 1) of an embodiment of the present application.
  • the server shown in FIG. 6 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present application.
  • As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage part 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the system 600.
  • the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • The following components are connected to the I/O interface 605: an input part 606 including a keyboard, a mouse, and the like; an output part 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage part 608 including a hard disk and the like; and a communication part 609 including a network interface card such as a LAN card or a modem. The communication part 609 performs communication processing via a network such as the Internet.
  • the driver 610 is also connected to the I/O interface 605 as needed.
  • a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 610 as needed, so that the computer program read from it is installed into the storage part 608 as needed.
  • the process described above with reference to the flowchart can be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 609, and/or installed from the removable medium 611.
  • When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed.
  • the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable medium or any combination of the two.
  • the computer-readable medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable Read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • the computer-readable medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable medium, and the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof.
  • The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present application can be implemented in software or hardware.
  • the described unit may also be provided in the processor.
  • a processor includes an acquiring unit, a generating unit, and an adjusting unit.
  • the names of these units do not constitute a limitation on the unit itself under certain circumstances.
  • the acquisition unit can also be described as "a unit for acquiring medical images to be modeled obtained by shooting an object to be modeled".
  • the present application also provides a computer-readable medium.
  • the computer-readable medium may be included in the server described in the foregoing embodiment; or may exist alone without being assembled into the server.
  • The above-mentioned computer-readable medium carries one or more programs; when the one or more programs are executed by the server, the server: acquires the medical image to be modeled obtained by shooting the object to be modeled; inputs the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate the skeleton parameters of an object in a medical image; and adjusts the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a method and apparatus for generating a three-dimensional model. A specific implementation of the method includes: acquiring a medical image to be modeled obtained by shooting an object to be modeled; inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate skeleton parameters of an object in a medical image; and adjusting a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled. This implementation generates skeleton parameters with the skeleton parameter generation model and adjusts the standard three-dimensional skeleton model based on those parameters, so that the three-dimensional skeleton model can be obtained quickly. Moreover, generating a three-dimensional skeleton model from a medical image for stereoscopic display is more intuitive and easier to understand.

Description

Method and apparatus for generating a three-dimensional model
This patent application claims priority to Chinese Patent Application No. 201910171928.9, filed on March 7, 2019, with the applicant being 百度在线网络技术(北京)有限公司 and the invention title being "Method and apparatus for generating a three-dimensional model"; the entire content of that application is incorporated into the present application by reference.
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating a three-dimensional model.
Background
The Gray filter turns a picture into a grayscale image; the Invert filter inverts all of the visual properties of an object, including color, saturation, and brightness; the Xray filter makes an object show its outline and highlights that outline, producing the so-called X-ray film effect.
At present, X-ray examination is the most commonly used medical imaging examination method. A patient's X-ray film can reflect the patient's condition. However, X-ray films can only be understood by people with professional medical knowledge (such as doctors), and most patients have poor three-dimensional perception and cannot determine their own condition by directly observing an X-ray film. Therefore, a doctor needs to observe the patient's X-ray film and explain the condition to the patient.
Summary
The embodiments of the present application propose a method and apparatus for generating a three-dimensional model.
In a first aspect, an embodiment of the present application provides a method for generating a three-dimensional model, including: acquiring a medical image to be modeled obtained by shooting an object to be modeled; inputting the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate skeleton parameters of an object in a medical image; and adjusting a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeleton parameter generation model includes a feature extraction network and a fitting network.
In some embodiments, inputting the medical image to be modeled into the pre-trained skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled includes: inputting the medical image to be modeled into the feature extraction network to obtain skeleton features of the object to be modeled; and inputting the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
In some embodiments, the method further includes: analyzing the medical image to be modeled to determine the position of an abnormal part of the object to be modeled in the medical image to be modeled; performing projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and marking the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model.
In some embodiments, analyzing the medical image to be modeled to determine the position of the abnormal part of the object to be modeled in the medical image to be modeled includes: inputting the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part.
In some embodiments, after marking the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model, the method further includes: querying, from a pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part, where the abnormality-related information in the set includes introduction information and improvement information for the abnormality category; and associating the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeleton parameter generation model is trained through the following steps: obtaining training samples, where the training samples include sample medical images and corresponding sample skeleton parameters; and training the skeleton parameter generation model with the sample medical images as input and the sample skeleton parameters as output.
In a second aspect, an embodiment of the present application provides an apparatus for generating a three-dimensional model, including: an acquiring unit configured to acquire a medical image to be modeled obtained by shooting an object to be modeled; a generating unit configured to input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate skeleton parameters of an object in a medical image; and an adjusting unit configured to adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeleton parameter generation model includes a feature extraction network and a fitting network.
In some embodiments, the generating unit includes: an extraction subunit configured to input the medical image to be modeled into the feature extraction network to obtain skeleton features of the object to be modeled; and a fitting subunit configured to input the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
In some embodiments, the apparatus further includes: a determining unit configured to analyze the medical image to be modeled and determine the position of an abnormal part of the object to be modeled in the medical image to be modeled; a transforming unit configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and a labeling unit configured to mark the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model.
In some embodiments, the determining unit is further configured to: input the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part.
In some embodiments, the apparatus further includes: a query unit configured to query, from a pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part, where the abnormality-related information in the set includes introduction information and improvement information for the abnormality category; and an association unit configured to associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some embodiments, the skeleton parameter generation model is trained through the following steps: obtaining training samples, where the training samples include sample medical images and corresponding sample skeleton parameters; and training the skeleton parameter generation model with the sample medical images as input and the sample skeleton parameters as output.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored; when the computer program is executed by a processor, the method described in any implementation of the first aspect is implemented.
According to the method and apparatus for generating a three-dimensional model provided by the embodiments of the present application, after the medical image to be modeled obtained by shooting the object to be modeled is acquired, the medical image to be modeled is input into the skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled; the standard three-dimensional skeleton model is then adjusted based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled. Generating skeleton parameters with the skeleton parameter generation model and adjusting the standard three-dimensional skeleton model based on those parameters makes it possible to obtain the three-dimensional skeleton model quickly. Moreover, generating a three-dimensional skeleton model from a medical image for stereoscopic display is more intuitive and easier to understand.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture to which the present application can be applied;
FIG. 2 is a flowchart of an embodiment of a method for generating a three-dimensional model according to the present application;
FIG. 3 is a flowchart of another embodiment of a method for generating a three-dimensional model according to the present application;
FIG. 4 is a schematic diagram of an application scenario of the method for generating a three-dimensional model shown in FIG. 3;
FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for generating a three-dimensional model according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing a server according to an embodiment of the present application.
Detailed Description of Embodiments
The present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the relevant invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, as long as there is no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other. The present application will be described in detail below with reference to the drawings and in combination with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating a three-dimensional model or the apparatus for generating a three-dimensional model of the present application can be applied.
As shown in FIG. 1, the system architecture 100 may include a terminal device 101, a network 102, and a server 103. The network 102 is used to provide a medium for a communication link between the terminal device 101 and the server 103. The network 102 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user can use the terminal device 101 to interact with the server 103 through the network 102 to receive or send messages and so on. Various client software, such as image processing software, can be installed on the terminal device 101.
The terminal device 101 may be hardware or software. When the terminal device 101 is hardware, it may be any of various electronic devices that have a display screen and support three-dimensional model display, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and so on. When the terminal device 101 is software, it can be installed in the above-mentioned electronic devices, and it can be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. No specific limitation is imposed here.
The server 103 may be a server that provides various services, for example an image processing server. The image processing server may analyze and process acquired data such as the medical image to be modeled, generate a processing result (for example, a three-dimensional skeleton model of the object to be modeled), and push the processing result to the terminal device 101.
It should be noted that the server 103 may be hardware or software. When the server 103 is hardware, it can be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server 103 is software, it can be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the method for generating a three-dimensional model provided by the embodiments of the present application is generally executed by the server 103, and accordingly, the apparatus for generating a three-dimensional model is generally provided in the server 103.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to FIG. 2, a flow 200 of an embodiment of the method for generating a three-dimensional model according to the present application is shown. The method for generating a three-dimensional model includes the following steps:
Step 201: acquire a medical image to be modeled obtained by shooting an object to be modeled.
In this embodiment, the execution body of the method for generating a three-dimensional model (for example, the server 103 shown in FIG. 1) can acquire the medical image to be modeled obtained by shooting the object to be modeled. Generally, a medical imaging device can shoot the object to be modeled to obtain the medical image to be modeled. The execution body can acquire the medical image to be modeled from the medical imaging device or from a terminal device (for example, the terminal device 101 shown in FIG. 1) that stores the medical image captured by the medical imaging device. The object to be modeled may include, but is not limited to, parts of a person, parts of an animal, and so on. The parts of a person may include the whole body of a person or local parts of a person; local parts of a person may include, but are not limited to, the head, chest, legs, feet, shoulders, hands, elbows, and so on. Similarly, the parts of an animal may include the whole body of an animal or local parts of an animal; local parts of an animal may include, but are not limited to, the head, legs, claws, hooves, and so on. The medical image to be modeled may be any of various two-dimensional images such as an X-ray film, a magnetic resonance image, or an ultrasound image. In addition, the medical image to be modeled may include, but is not limited to, a frontal medical image, a lateral medical image, or an oblique medical image of the object to be modeled. Under normal circumstances, the medical image to be modeled here is a frontal medical image of the object to be modeled.
步骤202,将待建模医学影像输入至预先训练的骨架参数生成模型,得到待建模对象的骨架参数。
在本实施例中,上述执行主体可以将待建模医学影像输入至预先训练的骨架参数生成模型,以得到待建模对象的骨架参数。
这里,骨架参数生成模型可以用于生成医学影像中的对象的骨架参数。骨架参数可以是待建模对象的部位的立体描述数据,包括但不 限于待建模对象的部位的长宽高数据。例如,在待建模对象是人的全身部位的情况下,骨架参数可以包括但不限于头的长宽高数据、胸的长宽高数据、腿的长宽高数据、脚的长宽高数据、肩的长宽高数据、手的长宽高数据、肘的长宽高数据等等。
在本实施例的一些可选的实现方式中,上述执行主体可以预先收集大量与待建模对象同类别的对象的医学影像和骨架参数,并对应存储,生成对应关系表,作为骨架参数生成模型。当获取到待建模医学影像之后,上述执行主体可以首先计算待建模医学影像与对应关系表中的各个医学影像之间的相似度;然后基于所计算出的相似度,从对应关系表中查找出待建模对象的骨架参数。例如,上述执行主体可以从对应关系表中查找出与待建模医学影像相似度最高的对象的骨架参数,作为待建模对象的骨架参数。
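The correspondence-table variant above is essentially a nearest-neighbour lookup. The patent does not fix a similarity measure, so the minimal sketch below assumes cosine similarity over flattened grayscale images and a simple in-memory table; the function names and the toy data are illustrative assumptions only.

```python
import numpy as np

def image_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two equally sized grayscale images."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def lookup_skeleton_params(query_image, correspondence_table):
    """correspondence_table: list of (stored_image, skeleton_params) pairs.
    Returns the skeleton parameters of the most similar stored image."""
    best = max(correspondence_table,
               key=lambda entry: image_similarity(query_image, entry[0]))
    return best[1]

# Toy usage with random stand-in images (128x128 grayscale).
table = [(np.random.rand(128, 128), {"shoulder_width_cm": 40 + i}) for i in range(5)]
query = np.random.rand(128, 128)
print(lookup_skeleton_params(query, table))
```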
In some optional implementations of this embodiment, the skeleton parameter generation model may be obtained by performing supervised training on an existing machine learning model (for example, various artificial neural networks) using various machine learning methods and training samples. Specifically, the execution body may train the skeleton parameter generation model through the following steps:
First, training samples are obtained.
Here, each training sample may include a sample medical image and corresponding sample skeleton parameters. The sample medical image is a medical image obtained by shooting a sample object, and the sample skeleton parameters are the three-dimensional description data of the parts of the sample object.
Then, the sample medical images are taken as input and the sample skeleton parameters as output, and the skeleton parameter generation model is obtained through training.
Here, the execution body may feed a sample medical image into the input side of an initial skeleton parameter generation model and, after processing by the initial model, output the skeleton parameters of the sample object in the sample medical image from the output side. The execution body may then calculate the generation accuracy of the initial skeleton parameter generation model based on the generated skeleton parameters and the sample skeleton parameters. If the generation accuracy does not meet a preset constraint condition, the parameters of the initial model are adjusted and sample medical images continue to be input for training; if the generation accuracy meets the preset constraint condition, training is complete, and the initial skeleton parameter generation model at that point is taken as the skeleton parameter generation model. The initial skeleton parameter generation model may be any of various parameter generation models with initialized parameters, for example a model composed of a feature extraction network and a fitting network; the initialization parameters may be small, distinct random numbers.
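A minimal supervised training loop matching the steps above might look as follows. The loss function (mean squared error on the skeleton parameters), the optimizer, and the `error_threshold` used as the "preset constraint condition" are assumptions for illustration; `model` can be any module that maps images to parameter vectors (for example, the architecture sketched after the next paragraph), and `loader` is assumed to yield (image, skeleton-parameter) batches.

```python
import torch
import torch.nn as nn

def train_skeleton_model(model: nn.Module,
                         loader,                  # yields (image, skeleton_params) batches
                         max_epochs: int = 50,
                         error_threshold: float = 1e-2):
    """Supervised training: sample images in, sample skeleton parameters out.
    Stops once the mean error satisfies the (assumed) accuracy constraint."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.MSELoss()                       # regression loss; an assumption
    for epoch in range(max_epochs):
        total, batches = 0.0, 0
        for images, params in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), params)
            loss.backward()
            optimizer.step()
            total, batches = total + loss.item(), batches + 1
        if total / max(batches, 1) < error_threshold:  # preset constraint condition
            break
    return model

# Toy usage: a tiny stand-in regressor trained on random 16x16 "images".
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 4))
toy_data = [(torch.randn(8, 1, 16, 16), torch.randn(8, 4)) for _ in range(3)]
train_skeleton_model(toy_model, toy_data, max_epochs=2)
```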
In some optional implementations of this embodiment, the skeleton parameter generation model may include a feature extraction network and a fitting network. In this case, the execution body may first input the medical image to be modeled into the feature extraction network to obtain the skeleton features of the object to be modeled, and then input the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled. The feature extraction network may, for example, be a VGG16 model used to extract the skeleton features of the object to be modeled. The skeleton features may be information describing the skeleton of the object to be modeled, including but not limited to various basic elements related to the skeleton (for example, skeleton pose, skeleton outline, skeleton position, skeleton texture, and so on). Generally, the skeleton features can be represented by a multi-dimensional vector. The fitting network may be composed of multiple convolutional layers and multiple fully connected layers, and is used to fit the skeleton parameters.
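A sketch of such a skeleton parameter generation model, using the VGG16 feature extractor mentioned above and a small fitting head of convolutional and fully connected layers, is shown below. The number of output parameters, the layer sizes, and the single-channel-to-RGB handling are assumptions for illustration, not values taken from the patent.

```python
import torch
import torch.nn as nn
import torchvision

class SkeletonParameterModel(nn.Module):
    """Feature extraction network (VGG16 backbone) plus fitting network
    (a few convolutional layers followed by fully connected layers)."""
    def __init__(self, num_params: int = 21):       # e.g. 7 parts x (L, W, H); an assumption
        super().__init__()
        self.feature_extractor = torchvision.models.vgg16().features  # 512 x 7 x 7 for 224x224 input
        self.fitting = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_params),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # X-ray films are single-channel; repeat to 3 channels for the VGG16 backbone.
        if x.shape[1] == 1:
            x = x.repeat(1, 3, 1, 1)
        return self.fitting(self.feature_extractor(x))

model = SkeletonParameterModel()
print(model(torch.randn(2, 1, 224, 224)).shape)      # torch.Size([2, 21])
```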
Step 203: adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
In this embodiment, the execution body can adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled. The standard three-dimensional skeleton model may be a three-dimensional skeleton model obtained by fusing a large number of three-dimensional skeleton models of objects of the same category as the object to be modeled. Here, the execution body may use, for example, a Morph (morphing) machine learning algorithm to adjust the standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled, so as to obtain the three-dimensional skeleton model of the object to be modeled.
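The patent only names a "Morph" style algorithm, so the following is a deliberately simple stand-in rather than the algorithm itself: it rescales each labelled part of a template (standard) skeleton mesh so that its length, width, and height match the predicted skeleton parameters. The data layout (per-vertex part labels, per-part dimension dictionaries) is assumed for illustration.

```python
import numpy as np

def adjust_standard_skeleton(vertices: np.ndarray,
                             part_of_vertex: np.ndarray,
                             template_dims: dict,
                             predicted_dims: dict) -> np.ndarray:
    """Scale each body part of a template skeleton mesh so that its
    length/width/height match the predicted skeleton parameters.

    vertices:        (N, 3) template vertex positions
    part_of_vertex:  (N,) part label for every vertex, e.g. "leg"
    template_dims / predicted_dims: part -> (length, width, height)
    """
    adjusted = vertices.copy()
    for part, target in predicted_dims.items():
        mask = part_of_vertex == part
        if not mask.any():
            continue
        center = adjusted[mask].mean(axis=0)
        scale = np.asarray(target) / np.asarray(template_dims[part])  # per-axis ratio
        adjusted[mask] = (adjusted[mask] - center) * scale + center   # scale about the part center
    return adjusted

# Toy usage: stretch the "leg" part of a 4-vertex template along one axis.
verts = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=float)
parts = np.array(["leg", "leg", "chest", "chest"])
print(adjust_standard_skeleton(verts, parts, {"leg": (1, 1, 1)}, {"leg": (1, 1, 2)}))
```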
In some optional implementations of this embodiment, in the case where the object to be modeled is a part of a person, the execution body may also send the three-dimensional skeleton model of the object to be modeled to the terminal device of the object to be modeled, and the terminal device can display the three-dimensional skeleton model stereoscopically for the object to be modeled to view.
In some optional implementations of this embodiment, in the case where the object to be modeled is a part of an animal, the execution body may also send the three-dimensional skeleton model of the object to be modeled to the terminal device of the owner of the object to be modeled, and the terminal device can display the three-dimensional skeleton model stereoscopically for the owner to view.
According to the method for generating a three-dimensional model provided by the embodiments of the present application, after the medical image to be modeled obtained by shooting the object to be modeled is acquired, the medical image to be modeled is input into the skeleton parameter generation model to obtain the skeleton parameters of the object to be modeled; the standard three-dimensional skeleton model is then adjusted based on the skeleton parameters of the object to be modeled to obtain the three-dimensional skeleton model of the object to be modeled. Generating skeleton parameters with the skeleton parameter generation model and adjusting the standard three-dimensional skeleton model based on those parameters makes it possible to obtain the three-dimensional skeleton model quickly. Moreover, generating a three-dimensional skeleton model from a medical image for stereoscopic display is more intuitive and easier to understand.
With further reference to FIG. 3, a flow 300 of another embodiment of the method for generating a three-dimensional model according to the present application is shown. The method for generating a three-dimensional model includes the following steps:
Step 301: acquire a medical image to be modeled obtained by shooting an object to be modeled.
Step 302: input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled.
Step 303: adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
In this embodiment, the specific operations of steps 301-303 have been described in detail in steps 201-203 of the embodiment shown in FIG. 2, and are not repeated here.
Step 304: analyze the medical image to be modeled and determine the position of an abnormal part of the object to be modeled in the medical image to be modeled.
In this embodiment, the execution body of the method for generating a three-dimensional model (for example, the server 103 shown in FIG. 1) can analyze the medical image to be modeled to determine the position of the abnormal part of the object to be modeled in the medical image to be modeled. Generally, the execution body can analyze the skeleton parameters of each part of the object to be modeled in the medical image to be modeled; if the skeleton parameters of a certain part are abnormal, that part is an abnormal part. An abnormal part may be a part where a lesion has occurred.
In some optional implementations of this embodiment, the execution body may input the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part. The classification network can be used to identify abnormal parts. Generally, the classification network can be obtained by performing supervised training on an existing machine learning model (for example, various artificial neural networks) using various machine learning methods and training samples. For example, the classification network may be composed of three convolutional layers and two fully connected layers. The numbers of feature channels of the three convolutional layers are 32, 64, and 128 from front to back, and the feature map resolutions are 64, 32, and 16 from front to back. The first fully connected layer can output a 256-dimensional vector, and the second fully connected layer can output a vector whose dimension is the number of abnormality categories plus one (the extra dimension is a confidence node for the absence of abnormality). An abnormality category is the category of lesion occurring at a part; for the shoulder, for example, the abnormality categories may include, but are not limited to, frozen shoulder, shoulder dislocation, and so on.
Generally, for each part of the object to be modeled, a sub-image of that part needs to be cropped from the medical image to be modeled and input into the classification network for abnormality recognition. For example, to determine whether the shoulder is abnormal, a square picture of the shoulder is cropped from the medical image to be modeled, scaled to a preset size (for example, a resolution of 128×128), and input into the classification network, which outputs the probabilities of various shoulder lesions.
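A direct reading of the classification network described above (three convolutional layers with 32, 64, and 128 channels, feature maps of 64, 32, and 16 for a 128×128 crop, then fully connected layers of 256 and K+1 units) could be sketched as follows; the use of 3×3 convolutions with 2×2 max pooling to halve the resolution at each stage is an assumption consistent with the stated sizes.

```python
import torch
import torch.nn as nn

class PartAbnormalityClassifier(nn.Module):
    """Three conv layers (32, 64, 128 channels; feature maps 64, 32, 16 for a
    128x128 crop) followed by two fully connected layers (256, then K+1 outputs,
    where the extra output is the 'no abnormality' confidence node)."""
    def __init__(self, num_abnormality_categories: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32 x 64 x 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64 x 32 x 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # -> 128 x 16 x 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, num_abnormality_categories + 1),
        )

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        return self.backbone(crop)              # raw scores; apply softmax for probabilities

# Toy usage: a shoulder crop scaled to 128x128, two lesion categories plus "no abnormality".
net = PartAbnormalityClassifier(num_abnormality_categories=2)
probs = torch.softmax(net(torch.randn(1, 1, 128, 128)), dim=1)
print(probs)   # e.g. [[p_frozen_shoulder, p_dislocation, p_no_abnormality]]
```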
Step 305: perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In this embodiment, the execution body can perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled. The projection transformation transforms the coordinates of the abnormal part in the medical image to be modeled of the object to be modeled into coordinates in the three-dimensional skeleton model of the object to be modeled.
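The patent does not specify the projection transformation itself, so the sketch below assumes the simplest case: a frontal orthographic projection, where normalized image coordinates are mapped into the model's bounding box and the nearest model vertex in the frontal plane is returned. All names and the toy data are illustrative assumptions.

```python
import numpy as np

def image_to_model_position(point_2d, image_size, vertices):
    """Map a 2D position in a frontal medical image to a position on the 3D
    skeleton model, assuming a frontal orthographic projection onto the x-y plane.

    point_2d:   (col, row) pixel coordinates of the abnormal part
    image_size: (width, height) of the medical image
    vertices:   (N, 3) vertex positions of the 3D skeleton model
    """
    x_min, y_min = vertices[:, 0].min(), vertices[:, 1].min()
    x_max, y_max = vertices[:, 0].max(), vertices[:, 1].max()
    # Normalize pixel coordinates to [0, 1]; image rows grow downwards, model y grows upwards.
    u = point_2d[0] / image_size[0]
    v = 1.0 - point_2d[1] / image_size[1]
    target_xy = np.array([x_min + u * (x_max - x_min), y_min + v * (y_max - y_min)])
    # Take the model vertex whose frontal (x, y) projection is closest to the target.
    nearest = np.argmin(np.linalg.norm(vertices[:, :2] - target_xy, axis=1))
    return vertices[nearest]

# Toy usage: a point near the top-left of a 512x512 image on a random model.
model_vertices = np.random.rand(1000, 3)
print(image_to_model_position((100, 80), (512, 512), model_vertices))
```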
Step 306: mark the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model.
In this embodiment, the execution body can first find the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model, and then mark the found abnormal part to distinguish it from normal parts. For example, the execution body can mark the abnormal part in a color different from that of the normal parts.
Step 307: query, from a pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part.
In this embodiment, the execution body can look up, from the pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part. The abnormality-related information in the set may include introduction information and improvement information for the abnormality category.
Step 308: associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In this embodiment, the execution body can associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled. For example, when no click operation is performed on the abnormal part in the three-dimensional skeleton model of the object to be modeled, the abnormality-related information corresponding to the abnormality category of the abnormal part is hidden; when a click operation is performed on the abnormal part, the abnormality-related information corresponding to the abnormality category is displayed.
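As a rough illustration of steps 307-308, the association can be kept as a mapping from marked parts to their abnormality-related information, revealed only on a click event. The dictionary-based information set and the class below are assumptions for illustration, not part of the patent.

```python
# A minimal sketch of the query-and-associate step, assuming the abnormality-related
# information set is a simple in-memory dictionary keyed by abnormality category.
abnormality_info_set = {
    "frozen_shoulder": {
        "introduction": "Inflammation and stiffness of the shoulder joint capsule.",
        "improvement": "Physical therapy and gentle range-of-motion exercises.",
    },
}

class AnnotatedModel:
    """Associates abnormality-related information with marked parts of the 3D model
    and reveals it only when the part is clicked."""
    def __init__(self):
        self.associations = {}           # part name -> abnormality-related information

    def associate(self, part: str, category: str):
        self.associations[part] = abnormality_info_set[category]

    def on_click(self, part: str):
        # Hidden unless the clicked part has associated information.
        return self.associations.get(part)

model_view = AnnotatedModel()
model_view.associate("shoulder", "frozen_shoulder")
print(model_view.on_click("shoulder"))   # shows introduction and improvement info
print(model_view.on_click("elbow"))      # None: nothing associated, nothing shown
```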
With continued reference to FIG. 4, FIG. 4 is a schematic diagram of an application scenario of the method for generating a three-dimensional model shown in FIG. 3. In the application scenario shown in FIG. 4, the X-ray imaging device 410 captures the patient's X-ray film 401 and sends it to the patient's mobile phone 420. The patient opens the image processing software on the mobile phone 420, selects the X-ray film 401, and clicks the upload button. The patient's mobile phone 420 then sends the X-ray film 401 to the server 430. After receiving the X-ray film 401, the server 430 can first input the X-ray film 401 into the skeleton parameter generation model 402 to obtain the patient's skeleton parameters 403; then adjust the standard three-dimensional skeleton model 404 based on the patient's skeleton parameters 403 to obtain the patient's three-dimensional skeleton model 405; then analyze the three-dimensional skeleton model 405 and determine that the patient suffers from frozen shoulder; then perform projection transformation based on the position of the shoulder in the X-ray film 401 to obtain the position of the shoulder in the three-dimensional skeleton model 405, and annotate the shoulder in the three-dimensional skeleton model 405; then look up the disease information and treatment information 407 for frozen shoulder from the shoulder-lesion-related information set 406 and associate it with the shoulder in the three-dimensional skeleton model 405; and finally send the three-dimensional skeleton model 405 to the patient's mobile phone 420. In this way, the patient can view his or her own three-dimensional skeleton model 405 on the mobile phone 420. When the patient clicks the shoulder in the three-dimensional skeleton model 405, the disease information and treatment information 407 for frozen shoulder can be displayed.
As can be seen from FIG. 3, compared with the embodiment corresponding to FIG. 2, the flow 300 of the method for generating a three-dimensional model in this embodiment adds steps 304-308. The solution described in this embodiment can therefore mark the abnormal part in the three-dimensional skeleton model of the object to be modeled, which makes it easy to locate the abnormal part. At the same time, the corresponding abnormality-related information is associated with the abnormal part, which makes it easy to understand the abnormality category of the abnormal part and to improve it.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating a three-dimensional model. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
As shown in FIG. 5, the apparatus 500 for generating a three-dimensional model of this embodiment may include an acquiring unit 501, a generating unit 502, and an adjusting unit 503. The acquiring unit 501 is configured to acquire a medical image to be modeled obtained by shooting an object to be modeled; the generating unit 502 is configured to input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate skeleton parameters of an object in a medical image; and the adjusting unit 503 is configured to adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
In this embodiment, for the specific processing of the acquiring unit 501, the generating unit 502, and the adjusting unit 503 of the apparatus 500 for generating a three-dimensional model and the technical effects brought thereby, reference may be made to the relevant descriptions of step 201, step 202, and step 203 in the embodiment corresponding to FIG. 2, which are not repeated here.
In some optional implementations of this embodiment, the skeleton parameter generation model includes a feature extraction network and a fitting network.
In some optional implementations of this embodiment, the generating unit 502 includes: an extraction subunit (not shown in the figure) configured to input the medical image to be modeled into the feature extraction network to obtain skeleton features of the object to be modeled; and a fitting subunit (not shown in the figure) configured to input the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
In some optional implementations of this embodiment, the apparatus 500 for generating a three-dimensional model further includes: a determining unit (not shown in the figure) configured to analyze the medical image to be modeled and determine the position of an abnormal part of the object to be modeled in the medical image to be modeled; a transforming unit (not shown in the figure) configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and a labeling unit (not shown in the figure) configured to mark the abnormal part in the three-dimensional skeleton model of the object to be modeled based on its position in that model.
In some optional implementations of this embodiment, the determining unit is further configured to: input the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and the abnormality category of the abnormal part.
In some optional implementations of this embodiment, the apparatus 500 for generating a three-dimensional model further includes: a query unit (not shown in the figure) configured to query, from a pre-stored abnormality-related information set, the abnormality-related information corresponding to the abnormality category of the abnormal part, where the abnormality-related information in the set includes introduction information and improvement information for the abnormality category; and an association unit (not shown in the figure) configured to associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
In some optional implementations of this embodiment, the skeleton parameter generation model is trained through the following steps: obtaining training samples, where the training samples include sample medical images and corresponding sample skeleton parameters; and training the skeleton parameter generation model with the sample medical images as input and the sample skeleton parameters as output.
Referring now to FIG. 6, a schematic structural diagram of a computer system 600 suitable for implementing a server (for example, the server 103 shown in FIG. 1) of an embodiment of the present application is shown. The server shown in FIG. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage part 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input part 606 including a keyboard, a mouse, and the like; an output part 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage part 608 including a hard disk and the like; and a communication part 609 including a network interface card such as a LAN card or a modem. The communication part 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive 610 as needed, so that a computer program read from it can be installed into the storage part 608 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium or a computer-readable medium, or any combination of the two. The computer-readable medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable medium; that computer-readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless connections, wires, optical cables, RF, and so on, or any suitable combination of the above.
The computer program code for performing the operations of the present application can be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions, and operations of possible implementations of the systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and the combination of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present application can be implemented in software or hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit, a generating unit, and an adjusting unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for acquiring a medical image to be modeled obtained by shooting an object to be modeled".
As another aspect, the present application also provides a computer-readable medium. The computer-readable medium may be included in the server described in the above embodiments, or it may exist alone without being assembled into the server. The above-mentioned computer-readable medium carries one or more programs; when the one or more programs are executed by the server, the server: acquires a medical image to be modeled obtained by shooting an object to be modeled; inputs the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, where the skeleton parameter generation model is used to generate skeleton parameters of an object in a medical image; and adjusts a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example technical solutions formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present application.

Claims (16)

  1. 一种用于生成三维模型的方法,包括:
    获取对待建模对象进行拍摄所得到的待建模医学影像;
    将所述待建模医学影像输入至预先训练的骨架参数生成模型,得到所述待建模对象的骨架参数,其中,所述骨架参数生成模型用于生成医学影像中的对象的骨架参数;以及
    基于所述待建模对象的骨架参数对标准三维骨架模型进行调整,得到所述待建模对象的三维骨架模型。
  2. 根据权利要求1所述的方法,其中,所述骨架参数生成模型包括特征提取网络和拟合网络。
  3. 根据权利要求2所述的方法,其中,所述将所述待建模医学影像输入至预先训练的骨架参数生成模型,得到所述待建模对象的骨架参数,包括:
    将所述待建模医学影像输入至所述特征提取网络,得到所述待建模对象的骨架特征;以及
    将所述待建模对象的骨架特征输入至所述拟合网络,得到所述待建模对象的骨架参数。
  4. 根据权利要求1-3任一所述的方法,其中,所述方法还包括:
    对所述待建模医学影像进行分析,确定所述待建模对象的异常部位在所述待建模医学影像中的位置;
    对所述异常部位在所述待建模医学影像中的位置进行投影变换,得到所述异常部位在所述待建模对象的三维骨架模型中的位置;以及
    基于所述异常部位在所述待建模对象的三维骨架模型中的位置,对所述待建模对象的三维骨架模型中的所述异常部位进行标注。
  5. 根据权利要求4所述的方法,其中,所述对所述待建模医学影 像进行分析,确定所述待建模对象的异常部位在所述待建模医学影像中的位置,包括:
    将所述待建模医学影像输入至预先训练的分类网络,得到所述待建模对象的异常部位在所述待建模医学影像中的位置和所述异常部位的异常类别。
  6. 根据权利要求5所述的方法,其中,在所述基于所述异常部位在所述待建模对象的三维骨架模型中的位置,对所述待建模对象的三维骨架模型中的所述异常部位进行标注之后,还包括:
    从预先存储的异常相关信息集合中查询出所述异常部位的异常类别对应的异常相关信息,其中,所述异常相关信息集合中的异常相关信息包括异常类别的介绍信息和改善信息;以及
    将所述异常部位的异常类别对应的异常相关信息与所述待建模对象的三维骨架模型中的所述异常部位相关联。
  7. 根据权利要求1-3任一所述的方法,其中,所述骨架参数生成模型通过如下步骤训练得到:
    获取训练样本,其中,所述训练样本包括样本医学影像和对应的样本骨架参数;以及
    将所述样本医学影像作为输入,将所述样本骨架参数作为输出,训练得到所述骨架参数生成模型。
  8. An apparatus for generating a three-dimensional model, comprising:
    an acquisition unit, configured to acquire a medical image to be modeled obtained by imaging an object to be modeled;
    a generation unit, configured to input the medical image to be modeled into a pre-trained skeleton parameter generation model to obtain skeleton parameters of the object to be modeled, wherein the skeleton parameter generation model is used to generate skeleton parameters of an object in a medical image; and
    an adjustment unit, configured to adjust a standard three-dimensional skeleton model based on the skeleton parameters of the object to be modeled to obtain a three-dimensional skeleton model of the object to be modeled.
  9. The apparatus according to claim 8, wherein the skeleton parameter generation model comprises a feature extraction network and a fitting network.
  10. The apparatus according to claim 9, wherein the generation unit comprises:
    an extraction subunit, configured to input the medical image to be modeled into the feature extraction network to obtain skeleton features of the object to be modeled; and
    a fitting subunit, configured to input the skeleton features of the object to be modeled into the fitting network to obtain the skeleton parameters of the object to be modeled.
  11. The apparatus according to any one of claims 8-10, wherein the apparatus further comprises:
    a determination unit, configured to analyze the medical image to be modeled to determine a position of an abnormal part of the object to be modeled in the medical image to be modeled;
    a transformation unit, configured to perform projection transformation on the position of the abnormal part in the medical image to be modeled to obtain a position of the abnormal part in the three-dimensional skeleton model of the object to be modeled; and
    an annotation unit, configured to annotate the abnormal part in the three-dimensional skeleton model of the object to be modeled based on the position of the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  12. The apparatus according to claim 11, wherein the determination unit is further configured to:
    input the medical image to be modeled into a pre-trained classification network to obtain the position of the abnormal part of the object to be modeled in the medical image to be modeled and an abnormality category of the abnormal part.
  13. The apparatus according to claim 12, wherein the apparatus further comprises:
    a query unit, configured to query, from a pre-stored set of abnormality-related information, abnormality-related information corresponding to the abnormality category of the abnormal part, wherein the abnormality-related information in the set of abnormality-related information includes introduction information and improvement information of abnormality categories; and
    an association unit, configured to associate the abnormality-related information corresponding to the abnormality category of the abnormal part with the abnormal part in the three-dimensional skeleton model of the object to be modeled.
  14. The apparatus according to any one of claims 8-10, wherein the skeleton parameter generation model is obtained through training by the following steps:
    acquiring training samples, wherein the training samples include sample medical images and corresponding sample skeleton parameters; and
    training to obtain the skeleton parameter generation model by using the sample medical images as input and the sample skeleton parameters as output.
  15. A server, comprising:
    one or more processors; and
    a storage device storing one or more programs thereon,
    wherein when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-7.
  16. A computer-readable medium storing a computer program thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
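For illustration only, the abnormal-part handling recited in claims 4 to 6 above (locating an abnormal part in the image, projecting its position into the three-dimensional skeleton model, annotating it, and associating pre-stored abnormality-related information) might be sketched as follows. The pinhole-style back-projection, the fixed depth, and the example abnormality category are assumptions made for this sketch, not details fixed by the claims.

```python
# Illustrative sketch only, loosely following claims 4-6: project a 2D
# abnormal-part position into the 3D skeleton model's coordinate frame,
# annotate it, and attach pre-stored abnormality-related information.
import numpy as np

ABNORMALITY_INFO = {  # pre-stored set of abnormality-related information (example data)
    "fracture": {"introduction": "A break in the bone.",
                 "improvement": "Immobilization and follow-up imaging."},
}


def project_to_model(pos_2d, depth, intrinsics):
    """Back-project a 2D image position (u, v) at an assumed depth into
    3D coordinates using a simple pinhole camera model."""
    u, v = pos_2d
    fx, fy, cx, cy = intrinsics
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])


def annotate_model(model_annotations, pos_3d, category):
    """Record the abnormal part in the 3D skeleton model and associate the
    corresponding abnormality-related information with it."""
    model_annotations.append({
        "position": pos_3d,
        "category": category,
        "related_info": ABNORMALITY_INFO.get(category),
    })
    return model_annotations


if __name__ == "__main__":
    # Pretend a classification network returned this position and category.
    pos_2d, category = (120.0, 88.0), "fracture"
    pos_3d = project_to_model(pos_2d, depth=500.0,
                              intrinsics=(1000.0, 1000.0, 128.0, 128.0))
    annotations = annotate_model([], pos_3d, category)
    print(annotations[0]["category"], annotations[0]["position"])
```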
PCT/CN2019/113902 2019-03-07 2019-10-29 Method and apparatus for generating a three-dimensional model WO2020177348A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910171928.9 2019-03-07
CN201910171928.9A CN109887077B (zh) 2019-03-07 2019-03-07 Method and apparatus for generating a three-dimensional model

Publications (1)

Publication Number Publication Date
WO2020177348A1 true WO2020177348A1 (zh) 2020-09-10

Family

ID=66931191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113902 WO2020177348A1 (zh) 2019-03-07 2019-10-29 Method and apparatus for generating a three-dimensional model

Country Status (2)

Country Link
CN (1) CN109887077B (zh)
WO (1) WO2020177348A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887077B (zh) * 2019-03-07 2022-06-03 百度在线网络技术(北京)有限公司 Method and apparatus for generating a three-dimensional model
US11576794B2 (en) 2019-07-02 2023-02-14 Wuhan United Imaging Healthcare Co., Ltd. Systems and methods for orthosis design
CN110327146A (zh) * 2019-07-02 2019-10-15 武汉联影医疗科技有限公司 Orthosis design method, apparatus, and server
CN111882544B (zh) * 2020-07-30 2024-05-14 深圳平安智慧医健科技有限公司 Artificial-intelligence-based medical image display method and related apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4474546B2 (ja) * 2004-10-05 2010-06-09 国立大学法人東京農工大学 Face shape modeling system and face shape modeling method
CN105608737A (zh) * 2016-02-01 2016-05-25 成都通甲优博科技有限责任公司 Machine-learning-based three-dimensional reconstruction method for the human foot
CN107808377A (zh) * 2017-10-31 2018-03-16 北京青燕祥云科技有限公司 Method and apparatus for locating a lesion in a lung lobe
CN108876893A (zh) * 2017-12-14 2018-11-23 北京旷视科技有限公司 Method, apparatus, system, and computer storage medium for three-dimensional face reconstruction
CN109308488A (zh) * 2018-08-30 2019-02-05 深圳大学 Breast ultrasound image processing apparatus and method, computer device, and storage medium
CN109887077A (zh) * 2019-03-07 2019-06-14 百度在线网络技术(北京)有限公司 Method and apparatus for generating a three-dimensional model

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247073B (zh) * 2013-04-18 2016-08-10 北京师范大学 Tree-structure-based method for constructing a three-dimensional cerebrovascular model
US9984311B2 (en) * 2015-04-11 2018-05-29 Peter Yim Method and system for image segmentation using a directed graph
US10169871B2 (en) * 2016-01-21 2019-01-01 Elekta, Inc. Systems and methods for segmentation of intra-patient medical images
CN105963005A (zh) * 2016-04-25 2016-09-28 华南理工大学 Method for producing a pectus excavatum orthopedic plate
CN107993277B (zh) * 2017-11-28 2019-12-17 河海大学常州校区 Prior-knowledge-based reconstruction method for an artificial bone repair model of a damaged part
CN108053283B (zh) * 2017-12-15 2022-01-04 北京中睿华信信息技术有限公司 Garment customization method based on 3D modeling
CN108460364B (zh) * 2018-03-27 2022-03-11 百度在线网络技术(北京)有限公司 Method and apparatus for generating information

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734901A (zh) * 2020-12-01 2021-04-30 深圳市人工智能与机器人研究院 3D instruction manual generation method and related device
CN112734901B (zh) * 2020-12-01 2023-08-18 深圳市人工智能与机器人研究院 3D instruction manual generation method and related device
CN113012282A (zh) * 2021-03-31 2021-06-22 深圳市慧鲤科技有限公司 Three-dimensional human body reconstruction method, apparatus, device, and storage medium
CN117437365A (zh) * 2023-12-20 2024-01-23 中国科学院深圳先进技术研究院 Method and apparatus for generating a medical three-dimensional model, electronic device, and storage medium
CN117437365B (zh) * 2023-12-20 2024-04-12 中国科学院深圳先进技术研究院 Method and apparatus for generating a medical three-dimensional model, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109887077A (zh) 2019-06-14
CN109887077B (zh) 2022-06-03

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19917956

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.02.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19917956

Country of ref document: EP

Kind code of ref document: A1