CN113470003A - Tool determination method and device, computer readable storage medium and processor - Google Patents
Tool determination method and device, computer readable storage medium and processor
- Publication number
- CN113470003A (application CN202110832062.9A)
- Authority
- CN
- China
- Prior art keywords
- tool
- image data
- simplified form
- target area
- dimensional image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06N 20/00: Machine learning
- G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
- G06T 2207/20081: Indexing scheme for image analysis or image enhancement; special algorithmic details; training, learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a tool determination method and device, a computer-readable storage medium, and a processor. The method comprises: acquiring three-dimensional image data of a target area; determining, based on a recognition model obtained by machine learning, a simplified form of the target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form; and determining the tool to be used according to the simplified form and its corresponding tool parameters. The invention solves the technical problem in the related art that a target area cannot be effectively located and a suitable tool cannot be matched to it.
Description
Technical Field
The invention relates to the technical field of intelligent tools, and in particular to a tool determination method and device, a computer-readable storage medium, and a processor.
Background
Tool selection arises in many fields. In the medical field, for example, when femoral head necrosis is treated, identification of the necrotic area is a key step. It mostly relies on the operator's preoperative image analysis and on repeated intraoperative X-ray fluoroscopy for repositioning, a process accompanied by repeated needle penetration and radiation exposure; moreover, because the morphology of the necrotic area of the femoral head is not expressed accurately and effectively, an appropriate treatment tool may not even be selectable. In industrial production, defects such as holes in large workpieces are generally found by manual quality inspection; when such defects are numerous or difficult to see with the naked eye, it is impractical to inspect them one by one and choose a suitable repair tool for each. Similar problems exist in other fields such as die repair, so more precise positioning and more intelligent operating tools are needed.
In view of the above problem in the related art that a target area cannot be effectively located and a suitable tool cannot be matched to it, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a tool determination method and device, a computer-readable storage medium, and a processor, so as to at least solve the technical problem in the related art that a target area cannot be effectively located and a suitable tool cannot be matched to it.
According to an aspect of an embodiment of the present invention, there is provided a tool determination method including: acquiring three-dimensional image data of a target area; determining a simplified form of the target region corresponding to the three-dimensional image data and a tool parameter corresponding to the simplified form based on an identification model obtained by machine learning; and determining a tool to be used according to the simplified form and the tool parameter corresponding to the simplified form.
Optionally, acquiring three-dimensional image data of the target region includes: scanning the target area by using scanning equipment to obtain two-dimensional multilayer image data of the target area; performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area; and obtaining the three-dimensional image data of the target area according to the three-dimensional digital model.
Optionally, determining a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on a recognition model obtained by machine learning includes: inputting the three-dimensional image data into the recognition model, and generating the simplified form of the target region corresponding to the three-dimensional image data and the tool parameters corresponding to the simplified form, wherein the recognition model is obtained by machine learning training on multiple groups of data, and each group of data comprises: three-dimensional image data of a different type, the simplified form of the target area corresponding to that three-dimensional image data, and the tool parameters corresponding to that simplified form.
Optionally, after determining the tool to be used according to the simplified form and the tool parameter corresponding to the simplified form, the method further includes: and adjusting the tool parameters, and re-determining the tool to be used.
According to another aspect of the embodiments of the present invention, there is also provided a tool determination apparatus, including: the acquisition module is used for acquiring three-dimensional image data of a target area; a first determination module, configured to determine, based on an identification model obtained by machine learning, a simplified form of the target region corresponding to the three-dimensional image data and a tool parameter corresponding to the simplified form; and the second determining module is used for determining the tool to be used according to the simplified form and the tool parameter corresponding to the simplified form.
Optionally, the obtaining module includes: the scanning unit is used for scanning the target area by utilizing scanning equipment to obtain two-dimensional multilayer image data of the target area; the reconstruction unit is used for performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area; and the obtaining unit is used for obtaining the three-dimensional image data of the target area according to the three-dimensional digital model.
Optionally, the first determining module includes: a generating unit, configured to input the three-dimensional image data into the recognition model and generate the simplified form of the target region corresponding to the three-dimensional image data and the tool parameters corresponding to the simplified form, wherein the recognition model is obtained by machine learning training on multiple groups of data, and each group of data comprises: three-dimensional image data of a different type, the simplified form of the target area corresponding to that three-dimensional image data, and the tool parameters corresponding to that simplified form.
Optionally, the apparatus further comprises: and the adjusting module is used for adjusting the tool parameters and re-determining the tool to be used after determining the tool to be used according to the simplified form and the tool parameters corresponding to the simplified form.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the tool determination method described in any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes a method for determining a tool according to any one of the above.
In embodiments of the invention, three-dimensional image data of a target area is acquired; a simplified form of the target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form are determined based on a recognition model obtained by machine learning; and the tool to be used is determined according to the simplified form and its corresponding tool parameters. Because the recognition model identifies the three-dimensional image data of the target area and yields both the simplified form of the target area and the corresponding tool parameters, a tool can be matched as soon as the target area is located. This achieves the technical effects of locating the target area more accurately and matching the corresponding tool quickly, and thereby solves the technical problem in the related art that the target area cannot be effectively located and a suitable tool cannot be matched to it.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of tool determination according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a tool determination device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, an embodiment of a tool determination method is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from that presented here.
Fig. 1 is a flowchart of a tool determination method according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, acquiring three-dimensional image data of a target area;
step S104, determining a simplified form of a target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on the recognition model obtained by machine learning;
and step S106, determining a tool to be used according to the simplified form and the tool parameter corresponding to the simplified form.
The simplified form is also referred to as a basic voxel; simplified forms include, but are not limited to, a square, a rectangle, a regular triangle, an inverted triangle, a rhombus, a sphere, a convex arc, a straight line, a concave crescent, an irregular curve, a combination of shapes, and the like. The tool to be used may differ with the application scenario: for example, it may be a medical instrument in the medical field or a repair tool in industrial production; details are not repeated here.
Through the above steps, three-dimensional image data of the target area is acquired; a simplified form of the target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form are determined based on a recognition model obtained by machine learning; and the tool to be used is determined according to the simplified form and its corresponding tool parameters. Because the recognition model identifies the three-dimensional image data of the target area and yields both the simplified form and the corresponding tool parameters, a tool can be matched as soon as the target area is located. This locates the target area more accurately, matches the corresponding tool quickly, and thereby solves the technical problem in the related art that the target area cannot be effectively located and a suitable tool cannot be matched to it.
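To make the flow of steps S102 to S106 concrete, the following is a minimal Python sketch. The `recognition_model` object, its `predict` method, the `TOOL_CATALOG` structure, and the field names are illustrative assumptions only and are not defined by the patent itself; the catalogue lookup stands in for whatever tool database a concrete system would use.

```python
import numpy as np

# Hypothetical tool catalogue: each simplified form maps to candidate tools.
TOOL_CATALOG = {
    "sphere":   [{"name": "spherical cutter 5 mm", "size_mm": 5.0},
                 {"name": "spherical cutter 8 mm", "size_mm": 8.0}],
    "cylinder": [{"name": "cylindrical mill 6 mm", "size_mm": 6.0}],
}

def determine_tool(volume: np.ndarray, recognition_model, catalog=TOOL_CATALOG):
    """Steps S102-S106: 3-D image data -> simplified forms + parameters -> tools."""
    # Step S104: the learned model returns one entry per detected region,
    # each with a simplified-form label and a characteristic size.
    predictions = recognition_model.predict(volume)  # e.g. [{"form": "sphere", "size_mm": 7.2}]

    # Step S106: for each region, pick the catalogue tool of the same form
    # whose parameter is closest to the predicted one.
    tools = []
    for p in predictions:
        candidates = catalog.get(p["form"], [])
        if candidates:
            tools.append(min(candidates, key=lambda t: abs(t["size_mm"] - p["size_mm"])))
    return tools
```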
Optionally, acquiring three-dimensional image data of the target region includes: scanning the target area by using scanning equipment to obtain two-dimensional multilayer image data of the target area; performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area; and obtaining three-dimensional image data of the target area according to the three-dimensional digital model.
The scanning device includes, but is not limited to, a CT scanning device, an MRI (magnetic resonance imaging) device, and the like.
In an alternative embodiment, the scanning device is used to scan the geometric structure and appearance data of the target region to obtain two-dimensional multilayer image data, and the two-dimensional multilayer image data is three-dimensionally reconstructed to generate a three-dimensional digital model, where the three-dimensional digital model contains the three-dimensional image data of the target area. The geometric structure includes, but is not limited to, the shape of the object or environment, and the appearance data includes, but is not limited to, color, surface albedo, and the like. With this embodiment, the three-dimensional image data of the target area can be acquired from multiple angles and in all directions.
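As a concrete illustration of this reconstruction step, the sketch below stacks the two-dimensional slices into a voxel volume and extracts a surface mesh with marching cubes. The use of scikit-image, the isosurface threshold, and the voxel spacing are assumptions for illustration and are not specified by the patent.

```python
import numpy as np
from skimage import measure  # scikit-image, assumed dependency

def reconstruct_target_area(slices, spacing=(1.0, 1.0, 1.0), threshold=0.5):
    """Build a 3-D digital model from two-dimensional multilayer image data.

    slices    : list of equally sized 2-D arrays (one per scanned layer)
    spacing   : physical voxel size along (slice, row, column), e.g. in mm
    threshold : isosurface level separating the target area from background
    """
    volume = np.stack(slices, axis=0).astype(np.float32)  # (n_slices, H, W) voxel grid
    # Marching cubes converts the voxel volume into a triangle mesh,
    # i.e. the "three-dimensional digital model" of the target area.
    verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold, spacing=spacing)
    return volume, verts, faces
```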
Optionally, the scanning device in the above embodiment uses a composite three-dimensional non-contact measurement technique that combines structured light, phase measurement, and computer vision, and it can capture an entire surface in a single measurement. During measurement, a grating projection unit projects several specifically coded structured-light patterns onto the object to be measured, and two cameras set at an angle to each other synchronously acquire the corresponding images. The images are then decoded and phase-calculated, and the three-dimensional coordinates of the pixels in the common field of view of the two cameras are solved using a matching technique and the triangulation principle.
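To make the triangulation principle mentioned above concrete, here is a minimal linear (DLT) two-camera triangulation in NumPy. The projection matrices and the matched pixel pair stand in for the decoded, phase-matched correspondences described in the paragraph; this is a generic textbook formulation, not the patent's specific algorithm.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover one 3-D point from a matched pixel pair seen by two cameras.

    P1, P2 : 3x4 projection matrices of the two calibrated cameras
    x1, x2 : (u, v) pixel coordinates of the same scene point in each image
    """
    # Each camera contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to Euclidean coordinates
```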
The scanning device comprises at least a laser emitter, a receiver, a time counter, a motor-controlled rotatable filter, a control circuit board, a microcomputer, a CCD camera, supporting software, and the like.
Furthermore, the scanning device may also be a contact three-dimensional scanner, including but not limited to a coordinate measuring machine or a milling-type measuring machine, or a non-contact three-dimensional scanner, including but not limited to a laser scanner, a photographic scanner, or a CT scanner.
It should be noted that the above embodiments of the invention can be applied to, for example, dimension measurement of hot forgings, evaluation of explosive damage effects, detection of holes in workpieces, and selection of automotive interior parts and rotary blades.
Optionally, determining a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on the recognition model obtained by machine learning includes: inputting the three-dimensional image data into the recognition model, and generating the simplified form of the target area corresponding to the three-dimensional image data and the tool parameters corresponding to the simplified form, wherein the recognition model is obtained by machine learning training on multiple groups of data, and each group of data comprises: three-dimensional image data of a different type, the simplified form of the target area corresponding to that three-dimensional image data, and the tool parameters corresponding to that simplified form.
In an alternative embodiment, before determining the simplified form of the target region corresponding to the three-dimensional image data and the tool parameters corresponding to the simplified form based on the recognition model obtained by machine learning, the method further includes: constructing a recognition model for the target area, and obtaining the final recognition model by machine learning training on multiple groups of data, where each group of data comprises three-dimensional image data of a different type, the simplified form of the target area corresponding to that data, and the tool parameters corresponding to that simplified form. This yields a recognition model that is more accurate and better suited to the task.
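A hedged sketch of this training step follows. It assumes that each training sample has already been reduced to a fixed-length feature vector extracted from its three-dimensional image data and labelled with a simplified-form class and one numeric tool parameter; the choice of scikit-learn estimators, the feature representation, and the single `size_mm` parameter are illustrative assumptions, not the patent's prescribed model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class RecognitionModel:
    """Predicts a simplified-form label and a tool parameter per sample."""

    def __init__(self):
        self.form_clf = RandomForestClassifier(n_estimators=100, random_state=0)
        self.param_reg = RandomForestRegressor(n_estimators=100, random_state=0)

    def fit(self, features, form_labels, tool_params):
        # features:    (n_samples, n_features) descriptors of the 3-D image data
        # form_labels: simplified form of each sample, e.g. "sphere"
        # tool_params: numeric tool parameter associated with each sample
        self.form_clf.fit(features, form_labels)
        self.param_reg.fit(features, tool_params)
        return self

    def predict(self, features):
        forms = self.form_clf.predict(features)
        params = self.param_reg.predict(features)
        return [{"form": f, "size_mm": float(p)} for f, p in zip(forms, params)]
```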
In an alternative embodiment, after acquiring the three-dimensional image data of the target region, the three-dimensional image data may be input into a recognition model, and the recognition model processes the three-dimensional image data to generate a simplified form of the target region and tool parameters corresponding to the simplified form. The simplified form of the target region may be one or more forms, and the tool parameters corresponding to different simplified forms may vary depending on the form.
According to this embodiment, by processing the three-dimensional image data with the recognition model obtained through machine learning, the target area can be divided into one or more simplified forms, and the tool parameters corresponding to each simplified form can be obtained accurately.
Optionally, after determining the tool to be used according to the simplified form and the tool parameter corresponding to the simplified form, the method further includes: and adjusting the tool parameters and re-determining the tool to be used.
To meet practical application requirements, after the tool to be used has been determined according to the simplified form and its corresponding tool parameters, the tool parameters can be adjusted further and the tool to be used re-determined with the adjusted parameters. This makes tool selection more flexible and better able to meet the demands of complex and changeable application scenarios.
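One way to read this adjustment step is as a re-query of the tool catalogue after an operator overrides a parameter. The sketch below assumes the same illustrative catalogue structure used earlier (a mapping from simplified form to candidate tools, each with a `size_mm` field); both the structure and the nearest-size rule are assumptions.

```python
def redetermine_tool(form, adjusted_size_mm, catalog):
    """Re-select the tool after the operator adjusts the tool parameter.

    catalog: dict mapping a simplified form to a list of candidate tools,
             each a dict with at least a "size_mm" entry.
    """
    candidates = catalog.get(form, [])
    if not candidates:
        return None
    return min(candidates, key=lambda t: abs(t["size_mm"] - adjusted_size_mm))

# Usage: if the model suggested a 7.2 mm sphere and the operator adjusts the
# parameter to 8 mm, the 8 mm spherical cutter entry is returned instead.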
In order to better understand the tool determination method, its flow is described below with reference to alternative embodiments, although the invention is not limited to the technical solutions of these embodiments.
In the following, alternative embodiments of the present invention will be described in detail with reference to different application scenarios.
Application scenario 1:
in an alternative embodiment, the following implementation steps may be specifically adopted:
step 1-1: the navigation system is connected with the 3D scanning system, wherein the navigation equipment can accurately correspond image data before or during operation of a patient to the anatomical structure of the patient on an operating bed, track the surgical instrument and update and display the position of the surgical instrument on the image of the patient in real time in the form of a virtual probe, so that a doctor can clearly know the position of the surgical instrument relative to the anatomical structure of the patient, and the surgical operation is quicker, more accurate and safer.
Step 1-2: acquire an image of the patient's proximal femur with the 3D scanning system. The three-dimensional scanner used by the system performs non-contact automatic measurement of the surface contour of the human body using optical measurement, computer, image processing, and digital signal processing technologies. A whole-body (or half-body) scanning system exploits the speed of optical three-dimensional scanning and the harmlessness of white light to the human body: it scans the whole or half body instantaneously from multiple angles and directions within 3 to 5 seconds, and the scans are then stitched automatically by computer software into accurate and complete point-cloud data of the body, from which the three-dimensional bone structure is presented.
Step 1-3: perform three-dimensional reconstruction using an AI algorithm to obtain a 3D reconstruction of the femoral head necrosis area and its center, and import the reconstruction result into the navigation system. The image data obtained by 3D scanning is analyzed by machine learning (ML) or artificial intelligence to obtain the morphology of the proximal femur and of the femoral head necrosis region.
Step 1-4: the morphology of the femoral head necrosis region is divided into one or more simplified morphologies in the navigation system.
Step 1-5: the AI analysis outputs the following results: the simplified morphology of the femoral head necrosis region and the rotating blade parameters corresponding to that morphology, where the rotating blade parameters include, but are not limited to, blade edge length, angle, and arc radius.
Step 1-6: select a suitable rotary blade according to the result output by the AI.
It should be noted that the rotating blade has at least two blades and an inner core that slides inside a sleeve; the inner core drives the blades to extend and retract.
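As an illustration of steps 1-4 and 1-5, the sketch below approximates the necrosis region by a bounding sphere and derives notional rotating blade parameters from it. The fit and the mapping from sphere radius to edge length, opening angle, and arc radius are invented for the example; the patent does not specify this rule.

```python
import numpy as np

def blade_parameters_from_region(points: np.ndarray) -> dict:
    """Approximate a necrosis region (N x 3 point cloud, in mm) by a sphere
    and derive illustrative rotating blade parameters from it."""
    center = points.mean(axis=0)
    radius = float(np.linalg.norm(points - center, axis=1).max())
    return {
        "simplified_form": "sphere",
        "center_mm": center.tolist(),
        "arc_radius_mm": radius,         # blade arc matched to the sphere radius
        "edge_length_mm": 2.0 * radius,  # assumed: edge spans the sphere diameter
        "opening_angle_deg": 180.0,      # assumed hemispherical sweep
    }
```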
Application scenario 2:
in another alternative embodiment, the following implementation steps may be specifically adopted:
step 2-1: the target workpiece is placed at the detection position, and then a three-dimensional scanner is used for scanning a preset area (equivalent to the target area) of the workpiece to obtain three-dimensional image data of the workpiece.
Step 2-2: perform three-dimensional reconstruction of the workpiece from its three-dimensional image data and determine defect areas of the workpiece, such as holes.
Step 2-3: using an AI algorithm built on big data, divide the defect areas such as holes into a number of different simplified forms and obtain the repair parameters corresponding to each simplified form. Repair parameters here include, but are not limited to, workpiece material and hole size.
Step 2-4: select a suitable repair tool for the defect areas such as holes according to the processing result of the big-data AI algorithm. Repair tools include, but are not limited to, filling materials, grinding tools, and measuring rulers.
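For steps 2-2 to 2-4, here is a minimal sketch that labels hole-like defects in a binary defect mask derived from the scanned volume and reports a size for each, which could then be matched against a repair-tool table. The SciPy connected-component labelling and the `defect_mask` input are assumptions for illustration, not the patent's stated procedure.

```python
import numpy as np
from scipy import ndimage  # assumed dependency, not named in the patent

def hole_repair_parameters(defect_mask: np.ndarray, voxel_mm: float = 1.0):
    """defect_mask: boolean 3-D array that is True where the scanned volume
    deviates from the nominal workpiece. Returns one parameter set per hole."""
    labels, n_holes = ndimage.label(defect_mask)
    params = []
    for i in range(1, n_holes + 1):
        voxels = np.argwhere(labels == i)
        extent = (voxels.max(axis=0) - voxels.min(axis=0) + 1) * voxel_mm
        params.append({
            "simplified_form": "hole",
            "bounding_box_mm": extent.tolist(),
            "volume_mm3": float(len(voxels)) * voxel_mm ** 3,
        })
    return params
```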
According to the implementations for these different application scenarios, the recognition model identifies the three-dimensional image data of the target area and yields the simplified form of the target area together with the tool parameters corresponding to that form, so a tool can be matched once the target area has been located. This achieves the technical effects of locating the target area more accurately and matching the corresponding tool quickly, and thereby solves the technical problem in the related art that the target area cannot be effectively located and a suitable tool cannot be matched to it.
It should be noted that the embodiments described in this application can be applied to many other scenarios and are not limited to the examples above; these are not described in detail here.
Example 2
According to another aspect of the embodiments of the present invention, a tool determination apparatus is also provided. FIG. 2 is a schematic diagram of the tool determination apparatus according to an embodiment of the present invention; as shown in FIG. 2, the apparatus includes an obtaining module 22, a first determining module 24, and a second determining module 26. The apparatus is described in detail below.
An obtaining module 22, configured to obtain three-dimensional image data of a target area; a first determining module 24, connected to the obtaining module 22, for determining a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on the recognition model obtained by machine learning; and a second determining module 26, connected to the first determining module 24, for determining a tool to be used according to the simplified form and the tool parameter corresponding to the simplified form.
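A hedged sketch of how the three modules could be composed in software follows; the class, method, and field names mirror the description but are otherwise invented, and the catalogue-based matching is only one possible realization of the second determining module.

```python
class ToolDeterminationDevice:
    """Composes the acquisition step with the two determination modules."""

    def __init__(self, acquire, recognition_model, catalog):
        self.acquire = acquire                       # callable: target area -> 3-D image data
        self.recognition_model = recognition_model   # trained model with a predict() method
        self.catalog = catalog                       # simplified form -> candidate tools

    def run(self, target_area):
        volume = self.acquire(target_area)                     # obtaining module
        predictions = self.recognition_model.predict(volume)   # first determining module
        tools = []                                             # second determining module
        for p in predictions:
            candidates = self.catalog.get(p["form"], [])
            if candidates:
                tools.append(min(candidates,
                                 key=lambda t: abs(t["size_mm"] - p["size_mm"])))
        return tools
```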
It should be noted that the above modules may be implemented by software or by hardware. In the latter case, for example, the modules may all be located in the same processor, or they may be distributed across different processors in any combination.
In the above embodiment, the tool determination device identifies the three-dimensional image data of the target area through the recognition model, obtains the simplified form of the target area and the tool parameters corresponding to that form, and matches a tool for the target area once it has been located, thereby locating the target area more accurately and matching the corresponding tool quickly, and solving the technical problem in the related art that the target area cannot be effectively located and a suitable tool cannot be matched to it.
It should be noted here that the obtaining module 22, the first determining module 24, and the second determining module 26 correspond to steps S102 to S106 in Embodiment 1; these modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in Embodiment 1 above.
Optionally, the obtaining module 22 includes: the scanning unit is used for scanning the target area by utilizing scanning equipment to obtain two-dimensional multilayer image data of the target area; the reconstruction unit is used for performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area; and the obtaining unit is used for obtaining the three-dimensional image data of the target area according to the three-dimensional digital model.
Optionally, the first determining module 24 includes: the generating unit is used for inputting the three-dimensional image data into the recognition model, and generating a simplified form of a target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form, wherein the recognition model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: the three-dimensional image data of different types, the simplified form of the target area corresponding to the three-dimensional image data and the tool parameter corresponding to the simplified form.
Optionally, the apparatus further comprises: and the adjusting module is used for adjusting the tool parameters and re-determining the tool to be used after determining the tool to be used according to the simplified form and the tool parameters corresponding to the simplified form.
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein when the program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute the method for determining a tool in any one of the above.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network and/or in any one of a group of mobile terminals, and the computer-readable storage medium includes a stored program.
Optionally, the program when executed controls an apparatus in which the computer-readable storage medium is located to perform the following functions: acquiring three-dimensional image data of a target area; determining a simplified form of a target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on an identification model obtained by machine learning; and determining the tool to be used according to the simplified form and the tool parameters corresponding to the simplified form.
Optionally, acquiring three-dimensional image data of the target region includes: scanning the target area by using scanning equipment to obtain two-dimensional multilayer image data of the target area; performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area; and obtaining three-dimensional image data of the target area according to the three-dimensional digital model.
Optionally, determining a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on the recognition model obtained by machine learning, including: inputting the three-dimensional image data into a recognition model, and generating a simplified form of a target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form, wherein the recognition model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: the three-dimensional image data of different types, the simplified form of the target area corresponding to the three-dimensional image data and the tool parameter corresponding to the simplified form.
Optionally, after determining the tool to be used according to the simplified form and the tool parameter corresponding to the simplified form, the method further includes: and adjusting the tool parameters and re-determining the tool to be used.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided a processor for executing a program, where the program executes a method for determining a tool in any one of the above.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein the processor executes the program and realizes the following steps: acquiring three-dimensional image data of a target area; determining a simplified form of a target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on an identification model obtained by machine learning; and determining the tool to be used according to the simplified form and the tool parameters corresponding to the simplified form.
Optionally, acquiring three-dimensional image data of the target region includes: scanning the target area by using scanning equipment to obtain two-dimensional multilayer image data of the target area; performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area; and obtaining three-dimensional image data of the target area according to the three-dimensional digital model.
Optionally, determining a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on the recognition model obtained by machine learning, including: inputting the three-dimensional image data into a recognition model, and generating a simplified form of a target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form, wherein the recognition model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: the three-dimensional image data of different types, the simplified form of the target area corresponding to the three-dimensional image data and the tool parameter corresponding to the simplified form.
Optionally, after determining the tool to be used according to the simplified form and the tool parameter corresponding to the simplified form, the method further includes: and adjusting the tool parameters and re-determining the tool to be used.
The present application also provides a computer program product which, when executed on a data processing device, is adapted to perform a program initializing the following method steps: acquiring three-dimensional image data of a target area; determining a simplified form of the target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on a recognition model obtained by machine learning; and determining the tool to be used according to the simplified form and the tool parameters corresponding to the simplified form.
Optionally, acquiring three-dimensional image data of the target region includes: scanning the target area by using scanning equipment to obtain two-dimensional multilayer image data of the target area; performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area; and obtaining three-dimensional image data of the target area according to the three-dimensional digital model.
Optionally, determining a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form based on the recognition model obtained by machine learning, including: inputting the three-dimensional image data into a recognition model, and generating a simplified form of a target area corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form, wherein the recognition model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: the three-dimensional image data of different types, the simplified form of the target area corresponding to the three-dimensional image data and the tool parameter corresponding to the simplified form.
Optionally, after determining the tool to be used according to the simplified form and the tool parameter corresponding to the simplified form, the method further includes: and adjusting the tool parameters and re-determining the tool to be used.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, or a magnetic or optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A method of tool determination, comprising:
acquiring three-dimensional image data of a target area;
determining a simplified form of the target region corresponding to the three-dimensional image data and a tool parameter corresponding to the simplified form based on an identification model obtained by machine learning;
and determining a tool to be used according to the simplified form and the tool parameter corresponding to the simplified form.
2. The method of claim 1, wherein acquiring three-dimensional image data of a target region comprises:
scanning the target area by using scanning equipment to obtain two-dimensional multilayer image data of the target area;
performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area;
and obtaining the three-dimensional image data of the target area according to the three-dimensional digital model.
3. The method of claim 1, wherein determining a simplified morphology of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified morphology based on a recognition model obtained by machine learning comprises:
inputting the three-dimensional image data into the recognition model, and generating a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form, wherein the recognition model is obtained through machine learning training using multiple groups of data, and each group of data in the multiple groups of data comprises: three-dimensional image data of a different type, the simplified form of the target area corresponding to that three-dimensional image data, and the tool parameters corresponding to that simplified form.
4. The method according to any one of claims 1 to 3, further comprising, after determining the tool to be used according to the simplified form and the tool parameters corresponding to the simplified form:
and adjusting the tool parameters, and re-determining the tool to be used.
5. An apparatus for tool determination, comprising:
the acquisition module is used for acquiring three-dimensional image data of a target area;
a first determination module, configured to determine, based on an identification model obtained by machine learning, a simplified form of the target region corresponding to the three-dimensional image data and a tool parameter corresponding to the simplified form;
and the second determining module is used for determining the tool to be used according to the simplified form and the tool parameter corresponding to the simplified form.
6. The apparatus of claim 5, wherein the obtaining module comprises:
the scanning unit is used for scanning the target area by utilizing scanning equipment to obtain two-dimensional multilayer image data of the target area;
the reconstruction unit is used for performing three-dimensional reconstruction according to the two-dimensional multilayer image data to obtain a three-dimensional digital model of the target area;
and the obtaining unit is used for obtaining the three-dimensional image data of the target area according to the three-dimensional digital model.
7. The apparatus of claim 5, wherein the first determining module comprises:
a generating unit, configured to input the three-dimensional image data into the recognition model and generate a simplified form of the target region corresponding to the three-dimensional image data and tool parameters corresponding to the simplified form, wherein the recognition model is obtained through machine learning training using multiple groups of data, and each group of data in the multiple groups of data comprises: three-dimensional image data of a different type, the simplified form of the target area corresponding to that three-dimensional image data, and the tool parameters corresponding to that simplified form.
8. The apparatus of any one of claims 5 to 7, further comprising: and the adjusting module is used for adjusting the tool parameters and re-determining the tool to be used after determining the tool to be used according to the simplified form and the tool parameters corresponding to the simplified form.
9. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method for determining a tool according to any one of claims 1 to 4.
10. A processor for running a program, wherein the program when running performs the method of determining the tool of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110832062.9A CN113470003B (en) | 2021-07-22 | 2021-07-22 | Tool determining method, device, computer readable storage medium and processor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110832062.9A CN113470003B (en) | 2021-07-22 | 2021-07-22 | Tool determining method, device, computer readable storage medium and processor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113470003A true CN113470003A (en) | 2021-10-01 |
CN113470003B CN113470003B (en) | 2024-08-30 |
Family
ID=77881931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110832062.9A Active CN113470003B (en) | 2021-07-22 | 2021-07-22 | Tool determining method, device, computer readable storage medium and processor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113470003B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105259252A (en) * | 2015-10-15 | 2016-01-20 | 浙江大学 | Method for automatically identifying defect type of polyethylene electrofusion joint through ultrasonic phased array inspection |
CN106264731A (en) * | 2016-10-11 | 2017-01-04 | 昆明医科大学第附属医院 | A kind of method based on point-to-point registration technique virtual knee joint single condyle replacement model construction |
CN109124836A (en) * | 2018-09-18 | 2019-01-04 | 北京爱康宜诚医疗器材有限公司 | The determination method and device of acetabular bone defect processing mode |
CN109171789A (en) * | 2018-09-18 | 2019-01-11 | 上海联影医疗科技有限公司 | A kind of calibration method and calibration system for diagnostic imaging equipment |
CN109949899A (en) * | 2019-02-28 | 2019-06-28 | 未艾医疗技术(深圳)有限公司 | Image three-dimensional measurement method, electronic equipment, storage medium and program product |
CN110443839A (en) * | 2019-07-22 | 2019-11-12 | 艾瑞迈迪科技石家庄有限公司 | A kind of skeleton model spatial registration method and device |
US20200090371A1 (en) * | 2018-09-18 | 2020-03-19 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for positioning an object |
CN111407390A (en) * | 2020-04-02 | 2020-07-14 | 马驰蛟 | Percutaneous 3D printing femoral head necrosis medullary core decompression channel guide plate and preparation method thereof |
CN112057107A (en) * | 2020-09-14 | 2020-12-11 | 无锡祥生医疗科技股份有限公司 | Ultrasonic scanning method, ultrasonic equipment and system |
CN112386334A (en) * | 2020-12-14 | 2021-02-23 | 中国人民解放军联勤保障部队第九二〇医院 | 3D printed femoral head necrosis navigation template and construction method and application thereof |
-
2021
- 2021-07-22 CN CN202110832062.9A patent/CN113470003B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113470003B (en) | 2024-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kauffmann et al. | Computer-aided method for quantification of cartilage thickness and volume changes using MRI: validation study using a synthetic model | |
CN104574292B (en) | A kind of bearing calibration of CT images and device | |
CN100562291C (en) | A kind of at CT treatment of picture device, method and system | |
CN107468265A (en) | Position the check object to imaging method | |
CA2776952A1 (en) | Image data processing systems | |
US11571180B2 (en) | Systems providing images guiding surgery | |
JP2014240800A (en) | Inspection auxiliary device | |
CN108852510B (en) | Data processing method and device | |
CN108143501B (en) | Anatomical projection method based on body surface vein features | |
CN113470003B (en) | Tool determining method, device, computer readable storage medium and processor | |
CN117078840A (en) | Automatic quantitative calculation method for three-dimensional modeling of hip joint based on CT image | |
US11821888B2 (en) | Diagnostic support for skins and inspection method of skin | |
EP3271895B1 (en) | Segmentation of objects in image data using channel detection | |
WO2018109227A1 (en) | System providing images guiding surgery | |
Rianmora et al. | Structured light system-based selective data acquisition | |
Brusco et al. | Metrological validation for 3D modeling of dental plaster casts | |
KR20180115122A (en) | Image processing apparatus and method for generating virtual x-ray image | |
Cavagnini et al. | 3D optical body scanning: application to forensic medicine and to maxillofacial reconstruction | |
US20190041619A1 (en) | Optical polarization tractography systems, methods and devices | |
Galeta et al. | Comparison of 3D scanned kidney stone model versus computer-generated models from medical images. | |
Uyanik et al. | A method for determining 3D surface points of objects by a single camera and rotary stage | |
EP3452812B1 (en) | Support for inspecting skins | |
US12048555B1 (en) | Measuring system for measuring a surface of an object or skin of a person | |
Nixon et al. | New technique for 3D artery modelling by noninvasive ultrasound | |
Pasko | Optical 3D scanning methods in biological research-selected cases |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||