CN115578437B - Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115578437B
CN115578437B (application CN202211530528.0A)
Authority
CN
China
Prior art keywords
focus
real
modeling information
intestine
lesion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211530528.0A
Other languages
Chinese (zh)
Other versions
CN115578437A (en)
Inventor
田攀
胡珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Endoangel Medical Technology Co Ltd
Original Assignee
Wuhan Endoangel Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Endoangel Medical Technology Co Ltd filed Critical Wuhan Endoangel Medical Technology Co Ltd
Priority to CN202211530528.0A priority Critical patent/CN115578437B/en
Publication of CN115578437A publication Critical patent/CN115578437A/en
Application granted granted Critical
Publication of CN115578437B publication Critical patent/CN115578437B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30028Colon; Small intestine

Abstract

The method jointly models a real intestine and a real lesion, acquires first lesion plan views and first lesion depth maps through a simulation camera, and trains a depth estimation model using the first lesion plan views and first lesion depth maps as training data. A real camera then photographs the real intestine of a target case to obtain a second lesion plan view, the trained depth estimation model is called to obtain a second lesion depth map, and accurate lesion depth data are obtained from the second lesion depth map.

Description

Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of medical assistance, in particular to a method and a device for acquiring intestinal body focus depth data, electronic equipment and a storage medium.
Background
During gastrointestinal endoscopy, the size of a lesion in the intestine often needs to be measured, and lesions of different sizes call for different treatment schemes. The real size of a lesion depends on its measured size in the RGB image and on the lesion's depth data. Two methods are currently used to obtain depth data: first, an auxiliary depth camera is inserted into the patient's intestinal cavity to capture a depth image; second, an unsupervised method is used to estimate a depth image. However, the former increases the patient's discomfort and is unsuitable for the endoscopic environment, while the latter produces only a relative depth image, whereas computing the real size of the lesion requires absolute depth. Even if absolute depth is derived from the relative depth image through a scaling factor, the error is very large, making the calculation of the lesion's real size inaccurate.
Therefore, the current process of calculating the real size of the intestinal body lesion has the technical problem that accurate lesion depth data is difficult to obtain, and improvement is needed.
Disclosure of Invention
The embodiment of the application provides a method and a device for acquiring intestinal body focus depth data, electronic equipment and a storage medium, which are used for solving the technical problem that accurate focus depth data are difficult to acquire in the process of calculating the real size of an intestinal body focus at present.
In order to solve the above technical problem, the embodiments of the present application provide the following technical solutions:
the application provides a method for acquiring intestinal body focus depth data, which comprises the following steps:
generating an intestinal body model with the same environment as the real intestinal body according to first modeling information of the real intestinal body, and generating a focus model on the inner wall of the intestinal body model according to second modeling information of a real focus;
controlling a simulation camera, according to a preset scope-moving mode and preset camera parameters, to move the scope within the intestinal body model and photograph the focus model to obtain a plurality of first focus plane graphs, and obtaining first focus depth graphs corresponding to the first focus plane graphs according to the preset scope-moving mode and the intestinal body model;
training a depth estimation model based on each first lesion plan and the corresponding first lesion depth map;
acquiring a second focus plane image shot by a real camera in the real intestine of a target case, and inputting the second focus plane image into a trained depth estimation model to obtain a second focus depth image of the target case;
and obtaining the focus depth data of the target case according to the second focus depth map.
Meanwhile, the embodiment of the application also provides a device for acquiring the depth data of the intestinal body focus, which comprises:
the generating module is used for generating an intestinal body model with the same environment as the real intestinal body according to first modeling information of the real intestinal body and generating a focus model on the inner wall of the intestinal body model according to second modeling information of a real focus;
the first obtaining module is used for controlling a simulation camera, according to a preset scope-moving mode and preset camera parameters, to move the scope within the intestinal body model and photograph the focus model to obtain a plurality of first focus plane graphs, and for obtaining first focus depth graphs corresponding to the first focus plane graphs according to the preset scope-moving mode and the intestinal body model;
the training module is used for training the depth estimation model based on each first focus plane image and the corresponding first focus depth image;
the second obtaining module is used for obtaining a second focus plane image shot by a real camera in the real intestine of the target case, and inputting the second focus plane image into the trained depth estimation model to obtain a second focus depth image of the target case;
and the third obtaining module is used for obtaining the focus depth data of the target case according to the second focus depth map.
The application also provides an electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to execute the application program in the memory to perform any of the steps in the method for acquiring intestinal body lesion depth data.
The embodiment of the application provides a computer-readable storage medium, which stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the steps of the intestinal body focus depth data acquisition method.
Beneficial effects: an intestine model identical to the real intestine environment is first generated according to first modeling information of the real intestine, and a lesion model is generated on the inner wall of the intestine model according to second modeling information of a real lesion. A simulation camera is then controlled, according to a preset scope-moving mode and preset camera parameters, to move the scope within the intestine model and photograph the lesion model, obtaining a plurality of first lesion plan views; first lesion depth maps corresponding to the first lesion plan views are obtained from the preset scope-moving mode and the intestine model, and a depth estimation model is trained on the first lesion plan views and the corresponding first lesion depth maps. A second lesion plan view photographed by a real camera in the real intestine of a target case is then acquired and input into the trained depth estimation model to obtain a second lesion depth map of the target case, from which the lesion depth data of the target case are finally obtained.
In this application, the real intestine and the real lesion are jointly modeled, the first lesion plan views and first lesion depth maps are acquired through the simulation camera and used as training data for the depth estimation model, and a second lesion plan view is subsequently obtained by photographing the real intestine of the target case with the real camera; the trained depth estimation model is then called to obtain the second lesion depth map, from which accurate lesion depth data can be extracted. The whole process requires only an ordinary endoscope camera to capture plan views, with no auxiliary depth camera extending into the real intestine, so accurate lesion depth data are obtained in a simpler way, facilitating the subsequent calculation of the lesion's real size.
Drawings
The technical solutions and other advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of the method for acquiring intestinal body lesion depth data according to the embodiment of the present application.
Fig. 2 is a schematic flowchart of a method for acquiring intestinal body lesion depth data according to an embodiment of the present disclosure.
FIG. 3 is a schematic view of an intestine model in an embodiment of the present application.
Fig. 4 is a real inner wall image and a rendered inner wall image of the intestine model in the embodiment of the present application.
Fig. 5 is a schematic view of different shapes of lesions in an embodiment of the present application.
Fig. 6 is a first lesion plan view and a first lesion depth map in an example of the present application.
Fig. 7 is a schematic structural diagram of an apparatus for acquiring intestinal body lesion depth data according to an embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a method and a device for acquiring intestinal body lesion depth data, electronic equipment and a computer readable storage medium, wherein the device for acquiring intestinal body lesion depth data can be integrated in the electronic equipment, and the electronic equipment can be a server or a terminal and other equipment.
Referring to fig. 1, fig. 1 is a schematic view of a scenario in which the method for acquiring intestinal lesion depth data according to the embodiment of the present application is applied. The scenario may include terminals and servers that communicate with one another over the internet through various gateways; specifically, the application scenario includes a modeling device 11 and a server 12, where the modeling device 11 is a device with modeling software installed and a human-computer interaction function, and the server 12 includes a local server and/or a remote server.
The modeling apparatus 11 and the server 12 are located in a wireless network or a wired network to realize data interaction therebetween, wherein:
the server 12 generates an intestine model having the same environment as the real intestine through the modeling software in the modeling device 11 according to the first modeling information of the real intestine, and generates a lesion model on the inner wall of the intestine model through the modeling software in the modeling device 11 according to the second modeling information of the real lesion. Then, the server 12 controls the simulation camera to carry the endoscope in the intestine body model according to the preset endoscope carrying mode and the camera parameters and shoots the focus model to obtain a plurality of first focus plane maps, obtains first focus depth maps corresponding to the first focus plane maps according to the preset endoscope carrying mode and the intestine body model, and trains the depth estimation model based on the first focus plane maps and the corresponding first focus depth maps. Finally, the server 12 obtains a second lesion plan image shot by the real camera in the real intestine of the target case, inputs the second lesion plan image into the trained depth estimation model to obtain a second lesion depth image of the target case, and obtains lesion depth data of the target case according to the second lesion depth image.
It should be noted that the system scenario diagram shown in fig. 1 is only an example; the server and the scenario described in the embodiment of the present application are intended to illustrate the technical solution more clearly and do not limit it. As those of ordinary skill in the art know, the technical solution provided in the embodiment of the present application is equally applicable to similar technical problems as the system evolves and new service scenarios arise. The following is a detailed description; the order of the embodiments below is not intended to limit their preferred order.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for acquiring depth data of an intestinal body lesion according to an embodiment of the present application, the method specifically includes:
s1: and generating an intestinal body model which has the same environment as the real intestinal body according to the first modeling information of the real intestinal body, and generating a focus model on the inner wall of the intestinal body model according to the second modeling information of the real focus.
The real intestine can be any intestine actually present in the human body, such as the colon or rectum, with an intestinal cavity formed inside it. The first modeling information provides reference for the shape, size and other properties required for modeling, based on the real intestine, and reflects the environment information of the real intestine, including the structure of the intestine and the environment inside it. Based on the first modeling information, an intestine model with the same environment as the real intestine can be generated in modeling software, for example Blender. The intestine model is a 3D model, the specific structure of which is shown in fig. 3.
It should be noted that, because the intestine model cannot completely reproduce the environment of the real intestine, "the same environment" in this application means that the environmental similarity exceeds a certain high threshold, such as 99%; the threshold can be set according to the required precision, and when the similarity exceeds it, the environments are considered the same.
The real lesion can be any lesion actually present on the inner wall of the real intestine, such as a polyp, ulcer or bleeding site. The second modeling information provides reference for the shape, size and other properties required for modeling, based on the real lesion, and a lesion model can be generated on the inner wall of the intestine model in the modeling software based on it.
In one embodiment, the first modeling information includes K modeling information sets corresponding to K different case populations, where each modeling information set includes the intestine modeling information and a real inner wall image of the real intestines of the same case population, and K is a positive integer. S1 then specifically includes: generating K initial intestine models for the real intestines of the K different case populations according to the intestine modeling information of each modeling information set; and rendering the inner walls of the K initial intestine models according to the real inner wall image of each modeling information set, to obtain K intestine models matching the real inner wall environments of the K different case populations.
A case refers to a real person, and a case population is a group of people sharing certain identical or similar attributes. Real intestines differ from person to person; to let the intestine models cover more cases while keeping modeling cost in check, all cases can be divided into K case populations in advance, for example by constructing classification conditions on attributes such as age and gender: several cases of one age range and gender form one population, and several cases of a different age range or gender form another. Within one case population, differences between real intestines generally fall in a small range, so the structures of all the real intestines of a population can be integrated to obtain the intestine modeling information of that population.
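The grouping step above can be sketched in a few lines of Python. The attribute names, bracket width and sample cases below are illustrative assumptions, not values from the application:

```python
from collections import defaultdict

def group_cases(cases, age_bracket_years=20):
    """Partition cases into populations keyed by (age bracket, gender).

    `cases` is a list of dicts with 'age' and 'gender' keys; the keys
    and the bracket width are assumptions made for illustration.
    """
    populations = defaultdict(list)
    for case in cases:
        key = (case["age"] // age_bracket_years, case["gender"])
        populations[key].append(case)
    return dict(populations)  # K = number of resulting populations

cases = [
    {"age": 34, "gender": "F"}, {"age": 38, "gender": "F"},
    {"age": 35, "gender": "M"}, {"age": 71, "gender": "M"},
]
populations = group_cases(cases)
# Cases sharing an age bracket and gender fall into one population,
# whose intestine structures would then be integrated for modeling.
```

Any classification condition (age, gender, or other attributes) can be substituted into the key without changing the structure of the grouping.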
Meanwhile, because the inner wall of a freshly modeled intestine has no texture while the inner wall of a real intestine has various colors, folds and so on, to make the intestine model match the real intestine environment, image acquisition must be performed on the inner wall of the real intestines of the case population to obtain a real inner wall image of that population, which serves as the modeling reference for the inner wall environment. The intestine modeling information and the real inner wall image of the same case population thus form the modeling information set of that population.
Finally, for each case population, an initial intestine model is generated from the population's intestine modeling information, and its inner wall is rendered with the population's real inner wall image to obtain an intestine model whose inner wall environment matches that of the real intestine. As shown in fig. 4, the left image is a real inner wall image and the right image is the rendered inner wall of the intestine model. As before, owing to limited precision, "the same inner wall environment" means that the similarity between the two inner wall environments exceeds a certain high threshold. Through this process, K intestine models matching the real inner wall environments of the K different case populations are obtained.
In an embodiment, the first modeling information further includes interference color modeling information. For each established intestine model, different intestinal cavity colors can be set through the simulation software so that a certain color difference exists between the model's cavity color and the real intestine; this added color disturbance subsequently gives the trained depth estimation model higher robustness.
In one embodiment, the first modeling information further includes illumination modeling information, and the method further comprises, after obtaining the K intestine models matching the real inner wall environments of the K different case populations: generating simulated illumination at different positions in each intestine model according to the illumination modeling information, to obtain K intestine models matching the real intestine illumination environments of the K different case populations.
The illumination modeling information provides reference for the data required to generate simulated illumination, based on the illumination environment inside the intestinal cavity of the real intestine; specifically, it describes what light intensity each position inside the intestine model should have. The intestine environment includes not only the inner wall environment but also the illumination environment in the cavity, and light intensity differs at different positions of a real intestinal cavity, so lesions distributed at different positions yield different photographed lesion plan views. Simulated illumination is therefore generated at each position in the intestine model according to the illumination modeling information, bringing the illumination closer to that of the real intestine. When generating the simulated illumination, the illumination at some key positions in the real intestine may first be acquired as the illumination modeling information, and simulated illumination then generated at the corresponding key positions in the intestine model; alternatively, an illumination-position curve of the real intestine may first be acquired as the illumination modeling information, and continuous, smooth simulated illumination generated at all positions in the intestine model according to that curve. Of course, the present application does not limit the method of generating the simulated illumination; any other feasible method may be used, as long as the illumination environment of the intestine model matches that of the real intestine.
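The curve-based variant can be sketched with simple linear interpolation; the sample positions and intensities below are hypothetical stand-ins for a measured illumination-position curve:

```python
import numpy as np

# Hypothetical illumination-position curve measured in a real intestine:
# position along the lumen (mm) mapped to light intensity (arbitrary units).
key_positions = np.array([0.0, 50.0, 100.0, 150.0])
key_intensity = np.array([1.0, 0.6, 0.35, 0.2])

def simulated_illumination(positions_mm):
    """Intensity at arbitrary positions in the intestine model, obtained
    by interpolating the measured illumination-position curve so that
    the simulated lighting varies continuously along the lumen."""
    return np.interp(positions_mm, key_positions, key_intensity)

# Evaluate the curve densely to drive light sources along the model.
dense = simulated_illumination(np.linspace(0.0, 150.0, 301))
```

A smoother spline could replace `np.interp` if the "continuous and smooth" requirement calls for a differentiable curve.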
Similarly, limited by the simulation accuracy, the same lighting environment also means that the similarity between the two lighting environments is greater than a certain higher value.
In one embodiment, the second modeling information includes M pieces of lesion morphology modeling information and N pieces of lesion position modeling information, where M and N are positive integers, and S1 specifically includes: traversing the M pieces of lesion morphology modeling information, the N pieces of lesion position modeling information and the K intestine models, and combining one value from each on every pass, to obtain M×N×K combined modeling information sets; and generating, from the M×N×K combined modeling information sets, M×N×K intestine models with lesion models on their inner walls.
The lesion morphology modeling information provides reference for the lesion morphology required for modeling, based on the morphology of real lesions, and the lesion position modeling information provides reference for the lesion position required for modeling, based on the distribution positions of real lesions in the intestinal cavity. Since real lesions take many different forms and can be distributed at many different positions of the intestinal cavity, to ensure the diversity of the training data, M pieces of lesion morphology modeling information can be obtained from M lesion morphologies of real lesions, and N pieces of lesion position modeling information from N lesion positions. Treating the two as two sets and combining one element taken from each yields M×N lesion model modeling information sets. Since K intestine models have been established for the K different case populations in the preceding steps, treating the M×N lesion model modeling information sets and the K intestine models as two sets and again combining one element taken from each yields M×N×K combined modeling information sets.
In other words, one may first obtain the M×N lesion model modeling information sets and then combine them with the K intestine models, or directly treat the M pieces of lesion morphology modeling information, the N pieces of lesion position modeling information and the K intestine models as three sets and traverse and combine them to obtain the M×N×K combined modeling information sets. Finally, M×N×K intestine models with lesion models on their inner walls are generated from the M×N×K combined modeling information sets.
It should be noted that, when generating the combined intestine-and-lesion models, it is not necessary to actually build M×N×K combined models; only the K intestine models need be built, then the M×N lesion models are generated in each intestine model in turn, with data acquisition performed after each generation. After each acquisition is completed, the current lesion model can be hidden or removed before the next lesion model is generated and data acquisition performed again.
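The generate-acquire-remove loop above can be sketched as follows. The three callables are hypothetical hooks into the modeling software, not APIs named by the application:

```python
from itertools import product

def acquire_dataset(intestine_models, morphologies, positions,
                    generate_lesion, capture_views, remove_lesion):
    """Acquire data for all M*N*K combinations while keeping at most one
    lesion model in an intestine model at any time, as the embodiment
    describes: generate, capture, then hide/remove before the next."""
    samples = []
    for intestine in intestine_models:                       # K models
        for morph, pos in product(morphologies, positions):  # M*N lesions
            lesion = generate_lesion(intestine, morph, pos)
            samples.append(capture_views(intestine, lesion))
            remove_lesion(intestine, lesion)
    return samples

# Trivial stand-in hooks just to exercise the loop structure:
made = acquire_dataset(
    intestine_models=["gut_a", "gut_b"],   # K = 2
    morphologies=["flat", "elliptical"],   # M = 2
    positions=["near", "far", "bend"],     # N = 3
    generate_lesion=lambda g, m, p: (g, m, p),
    capture_views=lambda g, lesion: lesion,
    remove_lesion=lambda g, lesion: None,
)
```

In a real pipeline the hooks would call into the modeling software (e.g. Blender's scripting interface) to instantiate and hide mesh objects.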
In one embodiment, before S1, the method further comprises: acquiring a lesion morphology information set, which includes a pieces of lesion color information, b pieces of lesion size information and c pieces of lesion shape information, where a, b and c are positive integers and a×b×c equals M; traversing the a pieces of lesion color information, the b pieces of lesion size information and the c pieces of lesion shape information and combining one value from each on every pass, to obtain a×b×c pieces of lesion morphology information; and obtaining the M pieces of lesion morphology modeling information from the a×b×c pieces of lesion morphology information.
Lesion morphology is the combination of lesion color, lesion size and lesion shape; changing any one of them changes the morphology. Thus a pieces of lesion color information of real lesions can be obtained (colors such as deep red, light red, dark red or bright red), b pieces of lesion size information (sizes such as 1mm-15mm), and c pieces of lesion shape information (shapes such as circular, elliptical, flat, sessile or pedunculated); together these form the lesion morphology information set. Treating the three as three sets and combining one element taken from each yields a×b×c pieces of lesion morphology information, from which the M pieces of lesion morphology modeling information are generated. Based on this modeling information, lesion models of various morphologies can be generated on the inner wall of the intestine model; as shown in fig. 5, the dashed box in the left drawing contains an elliptical lesion model and the dashed box in the right drawing contains a flat lesion model.
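The color × size × shape traversal is a plain Cartesian product; the concrete values below are taken from the examples in the text, with sizes sampled at a few illustrative points:

```python
from itertools import product

colors = ["deep red", "light red", "dark red", "bright red"]            # a = 4
sizes_mm = [1, 5, 10, 15]                                               # b = 4
shapes = ["circular", "elliptical", "flat", "sessile", "pedunculated"]  # c = 5

# Each combination is one piece of lesion morphology information.
morphologies = [
    {"color": col, "size_mm": sz, "shape": sh}
    for col, sz, sh in product(colors, sizes_mm, shapes)
]
M = len(morphologies)  # M = a * b * c
```

The same `product` pattern extends directly to the M×N×K combination with lesion positions and intestine models described above.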
S2: controlling a simulation camera to carry a scope in the intestinal body model and shooting a focus model according to a preset scope carrying mode and preset camera parameters to obtain a plurality of first focus plane graphs, and obtaining first focus depth graphs corresponding to the first focus plane graphs according to the preset scope carrying mode and the intestinal body model.
The simulation camera is a simulation tool that moves, rotates and performs photographing tasks in the intestine model under the control of relevant logic; it is not a real camera, and the object it simulates is the intestinal endoscope used in real scenes. The preset scope-moving mode includes a preset scope-moving trajectory of the simulation camera and a preset orientation of the camera at each track point along the trajectory, and the preset camera parameters mainly include the lens focal length F1 of the simulation camera and the transverse aperture Sw1 and longitudinal aperture Sh1 of its imaging sensor. Once these two control parameters are determined, the simulation camera can be controlled to move to a certain track point in the intestine model along the specific scope-moving trajectory and, at that track point, photograph the area where the lesion model is located from the specific orientation (shooting angle). The photograph yields a first lesion plan view, i.e. an RGB image of the lesion, which reflects the shape, texture, position, size and other information of the intestine model and lesion model in the photographed area, as shown in the left image of fig. 6, where the dashed box contains the lesion model and the other areas are the intestine model.
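The relation between these camera parameters and the simulated field of view follows the standard pinhole-camera model; the numeric values below are illustrative assumptions, not figures from the application:

```python
import math

def field_of_view_deg(focal_length_mm, aperture_mm):
    """Angular field of view of a pinhole camera whose sensor extent
    (transverse or longitudinal aperture) is `aperture_mm` and whose
    lens focal length is `focal_length_mm`: 2*atan(S / (2*F))."""
    return math.degrees(2.0 * math.atan(aperture_mm / (2.0 * focal_length_mm)))

# Illustrative simulation-camera parameters (assumed, not from the patent):
F1, Sw1, Sh1 = 4.0, 6.4, 4.8   # mm
h_fov = field_of_view_deg(F1, Sw1)   # horizontal field of view
v_fov = field_of_view_deg(F1, Sh1)   # vertical field of view
```

Matching F1, Sw1 and Sh1 to a real endoscope camera makes the simulated plan views geometrically consistent with real captures.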
Because the simulation camera photographs according to the preset scope-moving mode, the distance between the camera and the intestine at every point of the scope movement is known in advance, as is the shooting angle relative to the lesion model. Therefore, when the simulation camera photographs the lesion model at a certain track point to obtain a first lesion plan view, the depth of the entity corresponding to each position point in that plan view is determined, where depth specifically means the distance between the entity corresponding to the position point and the imaging plane of the simulation camera. Expressing these distances as different colors yields the first lesion depth map shown in the right image of fig. 6, where the dashed box contains the lesion model and the other regions are the intestine model.
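The depth defined here, distance to the imaging plane rather than Euclidean distance to the camera, is the projection of each surface point onto the camera's optical axis. A minimal NumPy sketch, with an assumed pose and two assumed surface points:

```python
import numpy as np

def depth_to_imaging_plane(points_world, cam_pos, cam_forward):
    """Depth of each surface point as its distance to the camera's
    imaging plane: the projection of (point - camera position) onto
    the unit forward (optical-axis) direction."""
    forward = np.asarray(cam_forward, dtype=float)
    forward /= np.linalg.norm(forward)
    offsets = np.asarray(points_world, dtype=float) - np.asarray(cam_pos, dtype=float)
    return offsets @ forward

cam_pos = np.array([0.0, 0.0, 0.0])
cam_forward = np.array([0.0, 0.0, 1.0])        # camera looking down +z
points = np.array([[0.0, 0.0, 30.0],           # lesion point 30 mm ahead
                   [5.0, -2.0, 60.0]])         # wall point 60 mm ahead
depths = depth_to_imaging_plane(points, cam_pos, cam_forward)
```

Rendering engines expose exactly this quantity as the z-buffer, which is how a synthetic depth map is typically exported alongside the RGB render.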
In an actual scene, according to the mainstream field of view of endoscope lenses and the lens-handling habits of endoscopists, the distance between the endoscope lens and a lesion is usually between 2 mm and 120 mm. The distance between the simulation camera and the lesion model can therefore be constrained to the same 2 mm to 120 mm range in the preset scope-moving mode, so that the simulation is closer to the real use scene.
In one embodiment, S2 specifically includes: acquiring P kinds of preset mirror moving modes and Q kinds of preset camera parameters, wherein P and Q are positive integers; traversing values of P preset mirror moving modes and Q preset camera parameters respectively, and combining the preset mirror moving modes and the preset camera parameters obtained each time to obtain P x Q preset shooting modes; and respectively controlling the simulation camera to shoot the focus model according to P × Q preset shooting modes for each intestinal body model to obtain a plurality of first focus plane graphs.
In a real scene, a lesion is usually shot at different shooting angles and different depths, cameras of various models are used, and the parameters of different camera models differ. To ensure the diversity of the training data, P preset scope-moving modes can be defined to cover the different shooting angles and depths, and Q preset camera parameters can be defined to cover cameras of different models. Treating the two as sets, traversing the elements of each set and combining every pair of values yields P × Q preset shooting modes. Then, for each intestine body model, at least P × Q shots need to be simulated to obtain the corresponding first lesion plans, so that the training data covers the various shooting angles, depths and camera models.
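The traversal-and-combination step described above is a Cartesian product of the two parameter sets. A minimal sketch (the placeholder mode names and parameter dictionaries are illustrative assumptions):

```python
from itertools import product

# Hypothetical stand-ins for the P preset scope-moving modes
# and the Q preset camera parameter sets.
moving_modes = ["mode_%d" % i for i in range(3)]          # P = 3
camera_params = [{"f_mm": f} for f in (4.0, 5.5)]          # Q = 2

# Traversing both sets and pairing every value yields P * Q shooting modes.
shooting_modes = list(product(moving_modes, camera_params))
assert len(shooting_modes) == len(moving_modes) * len(camera_params)
```

Each element of `shooting_modes` is one (scope-moving mode, camera parameters) pair, i.e. one preset shooting mode to simulate per intestine body model.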
S3: a depth estimation model is trained based on each first lesion plan and the corresponding first lesion depth map.
The depth estimation model is a model for predicting the depth of each position point in an image: a plan image is input, the depth of each position point in it is estimated, and a depth map is output. In the above steps, various combined model schemes are obtained by combining the various intestine body models and lesion models, and in each combined model scheme the lesion model is shot in various shooting modes to obtain a plurality of first lesion plans, each of which has a corresponding first lesion depth map. Each first lesion plan and its corresponding first lesion depth map serve as training input data and training output data respectively; all the first lesion plans and their depth maps form the training data set, and the depth estimation model is trained on this data set until its depth estimation accuracy meets expectations.
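Assembling the training set described here amounts to pairing each first lesion plan with its first lesion depth map. A minimal sketch of that pairing together with a train/validation split (the split ratio and all names are illustrative assumptions, not part of the patent):

```python
import random

def build_dataset(plan_maps, depth_maps, val_ratio=0.2, seed=0):
    """Pair each first lesion plan (model input) with its first lesion
    depth map (model target) and split the pairs for training."""
    pairs = list(zip(plan_maps, depth_maps))
    random.Random(seed).shuffle(pairs)       # deterministic shuffle
    n_val = int(len(pairs) * val_ratio)
    return pairs[n_val:], pairs[:n_val]      # (train, validation)
```

The training pairs can then be fed to any image-to-image depth regression network; the patent does not commit to a specific architecture.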
S4: and acquiring a second focus plane graph shot by the real camera in the real intestine of the target case, and inputting the second focus plane graph into the trained depth estimation model to obtain a second focus depth graph of the target case.
The target case refers to a patient currently needing lesion diagnosis. A real camera, namely a real intestinal endoscope, shoots the real lesion in the real intestine body of the target case to obtain a second lesion plan. As before, the second lesion plan includes the real lesion and the part of the real intestine body in the area of the lesion. The second lesion plan is input into the trained depth estimation model, which outputs a second lesion depth map of the target case, in which different colors represent the distances between the real camera and the different position points of the intestine body and the lesion.
In the embodiment of the application, because the coverage rate of the training data set on the real scene is high, after the model training is completed, the second focal plane map is input into the model, the depth accuracy of the obtained second focal depth map is high, and the accuracy of the subsequent calculation is improved.
S5: and obtaining the focus depth data of the target case according to the second focus depth map.
The color of each position point of the real lesion in the second lesion depth map is analyzed to obtain the distance between that position point and the imaging plane of the real camera, and these depths are synthesized to obtain the lesion depth data of the target case.
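Once the second lesion depth map is decoded back to metric distances, "synthesizing" the per-point depths can be as simple as summary statistics over the lesion region. A sketch, assuming a boolean lesion mask (the mask-based aggregation and the chosen statistics are assumptions, not the patent's stated method):

```python
import numpy as np

def lesion_depth_stats(depth_map, lesion_mask):
    """Summarise the per-pixel depths inside the lesion region of a
    decoded (metric) depth map, given a boolean mask of the lesion."""
    d = depth_map[lesion_mask]
    return {"min": float(d.min()), "max": float(d.max()), "mean": float(d.mean())}
```

The mean depth in particular is a natural candidate for the depth d used in the real-size calculation later in the description.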
In one embodiment, after S5, further comprising: obtaining a lesion measurement size of the target case according to the second lesion plan; and obtaining the real size of the focus of the target case according to the focus measuring size, the focus depth data and the real camera parameters of the real camera.
When the real camera shoots the lesion, an optical image is first collected through the lens and projected onto the sensor, converted successively into an electrical signal and a digital signal, and finally processed into the second lesion plan. Measuring the second lesion plan gives the lesion measurement size, which includes the width W0 and height H0 of the second lesion plan and the width W1 and height H1 of the first circumscribed rectangle of the real lesion, the circumscribed rectangle being the smallest rectangle that completely covers the lesion outline in the second lesion plan. The lesion measurement size only reflects the size of the lesion within the second lesion plan image; if the doctor needs the real size of the lesion for diagnosis, the measurement size must be converted.
Based on the imaging principle of the camera, the real size of the lesion is related to the lesion measurement size, the lesion depth data and the real camera parameters. Specifically, the real camera parameters include the lens focal length F2 of the real camera and the transverse aperture Sw2 and longitudinal aperture Sh2 of its imaging sensor. Let d be the depth between the real lesion and the imaging plane of the real camera in the lesion depth data. The transverse pixel focal length fx and the longitudinal pixel focal length fy of the real camera are obtained by calculation and satisfy the following formulas:

fx = W0 × F2 / Sw2 (equation 1)

fy = H0 × F2 / Sh2 (equation 2)

Obtaining the pixel focal length by calculation in this way is more accurate than extracting it directly by shooting a checkerboard image. Then, let the width of the second circumscribed rectangle in the real size of the lesion be W2 and its height be H2; they satisfy the following formulas:

W2 = W1 × d / fx (equation 3)

H2 = H1 × d / fy (equation 4)
Through the above steps, the real size of the lesion can be obtained; a doctor can judge the severity of the lesion based on this size and, combined with other related data, further diagnose the condition of the target case.
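The conversion in equations 1 to 4 can be sketched as a single function under the standard pinhole camera model (the formulas are reconstructed from the description; the variable names mirror W0, H0, W1, H1, F2, Sw2, Sh2 and d):

```python
def real_lesion_size(w0, h0, w1, h1, f2, sw2, sh2, d):
    """Convert the lesion's pixel measurement size into its real size
    using the pinhole camera model (equations 1-4 of the description)."""
    fx = w0 * f2 / sw2   # transverse pixel focal length   (equation 1)
    fy = h0 * f2 / sh2   # longitudinal pixel focal length (equation 2)
    w2 = w1 * d / fx     # real lesion width               (equation 3)
    h2 = h1 * d / fy     # real lesion height              (equation 4)
    return w2, h2
```

For example, a 100 × 50-pixel lesion rectangle in a 1000 × 800-pixel image, shot at depth d = 50 mm with F2 = 5 mm on a 5 mm × 4 mm sensor, works out to a real size of 5 mm × 2.5 mm.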
As can be seen from the above embodiments, the intestinal body lesion depth data acquisition method provided by the application jointly models the real intestine body and the real lesion, acquires first lesion plans and first lesion depth maps through a simulation camera and uses them as training data to train a depth estimation model, then shoots the real intestine body of a target case with a real camera to acquire a second lesion plan, and calls the trained depth estimation model to obtain a second lesion depth map, from which accurate lesion depth data can be acquired. In the whole process only an ordinary endoscope camera is needed to acquire plan images, and no auxiliary depth camera needs to be extended into the real intestine body, so accurate lesion depth data are acquired in a simpler way, which also facilitates the subsequent calculation of the real lesion size.
Based on the method described in the above embodiments, the present embodiment will be further described from the perspective of the intestinal body lesion depth data acquisition device, referring to fig. 7, the intestinal body lesion depth data acquisition device may include:
the generation module 10 is configured to generate an intestine model having the same environment as the real intestine according to first modeling information of the real intestine, and generate a lesion model on an inner wall of the intestine model according to second modeling information of a real lesion;
the first obtaining module 20 is configured to control a simulation camera to carry a scope in the intestine body model and shoot the focus model according to a preset scope carrying mode and preset camera parameters to obtain a plurality of first focus plane maps, and obtain first focus depth maps corresponding to the first focus plane maps according to the preset scope carrying mode and the intestine body model;
a training module 30, configured to train a depth estimation model based on each first lesion plan and the corresponding first lesion depth map;
a second obtaining module 40, configured to obtain a second focal plane image captured by a real camera in a real intestine of a target case, and input the second focal plane image into a trained depth estimation model to obtain a second focal depth image of the target case;
a third obtaining module 50, configured to obtain, according to the second focal depth map, focal depth data of the target case.
In one embodiment, the first modeling information includes K modeling information sets, the K modeling information sets respectively correspond to K different case populations, each modeling information set includes intestine modeling information and a real inner wall image of a real intestine of the same case population, K is a positive integer, and the generating module 10 includes:
the first generation submodule is used for respectively generating K initial intestine models of real intestines of K different case groups according to the intestine modeling information of each modeling information group;
and the first obtaining submodule is used for respectively rendering the inner walls of the K initial intestine body models according to the real inner wall image of each modeling information group to obtain K intestine body models which have the same real intestine body inner wall environments as those of the K different case groups.
In an embodiment, the first modeling information further comprises illumination modeling information, and the generating module 10 further comprises:
and the second generation submodule is used for generating simulated illumination at different positions in each intestinal body model according to the illumination modeling information to obtain K intestinal body models which are the same as the real intestinal body illumination environments of K different case groups.
In one embodiment, the second modeling information includes M lesion morphology modeling information and N lesion position modeling information, M and N being positive integers, and the generation module 10 includes:
a second obtaining submodule, configured to perform traversal value calculation on the M lesion shape modeling information, the N lesion position modeling information, and the K intestine models, respectively, and combine the lesion shape modeling information, the lesion position modeling information, and the intestine models obtained each time to obtain an M × N × K combined modeling information group;
and the third generation submodule is used for generating M x N x K intestinal body models with lesion models on the inner walls according to the M x N x K combined modeling information group.
In one embodiment, the intestinal body lesion depth data acquiring device further comprises:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a focus morphological information set, the focus morphological information set comprises a focus color information, b focus size information and c focus shape information, a, b and c are positive integers, and a, b and c are equal to M;
a fourth obtaining module, configured to perform traversal value calculation on the a pieces of lesion color information, the b pieces of lesion size information, and the c pieces of lesion shape information, and combine the lesion color information, the b pieces of lesion size information, and the c pieces of lesion shape information obtained each time to obtain a × b × c pieces of lesion shape information;
and a fifth obtaining module, configured to obtain the M pieces of lesion morphology modeling information according to the a × b × c pieces of lesion morphology information.
In one embodiment, the first obtaining module 20 includes:
the first acquisition submodule is used for acquiring P types of preset mirror moving modes and Q types of preset camera parameters, and P and Q are positive integers;
the third obtaining submodule is used for respectively traversing the P kinds of preset mirror moving modes and the Q kinds of preset camera parameters, and combining the preset mirror moving modes and the preset camera parameters obtained each time to obtain P x Q kinds of preset shooting modes;
and the fourth obtaining submodule is used for respectively controlling the simulation camera to shoot the focus model according to the P x Q preset shooting modes for each intestinal body model so as to obtain a plurality of first focus plane graphs.
In one embodiment, the intestinal body lesion depth data acquiring device further comprises:
a sixth obtaining module, configured to obtain a lesion measurement size of the target case according to the second lesion plan;
and the seventh obtaining module is used for obtaining the real size of the focus of the target case according to the focus measuring size, the focus depth data and the real camera parameters of the real camera.
Different from the prior art, the intestinal body lesion depth data acquisition device provided by the application jointly models the real intestine body and the real lesion, acquires first lesion plans and first lesion depth maps through a simulation camera and uses them as training data to train a depth estimation model, then shoots the real intestine body of a target case with a real camera to acquire a second lesion plan, and calls the trained depth estimation model to obtain a second lesion depth map, from which accurate lesion depth data can be acquired. In the whole process only an ordinary endoscope camera is needed to acquire plan images, and no auxiliary depth camera needs to be extended into the real intestine body, so accurate lesion depth data can be acquired in a simpler way, which also facilitates the subsequent calculation of the real lesion size.
Accordingly, embodiments of the present application also provide an electronic device, as shown in fig. 8, which may include Radio Frequency (RF) circuitry 1001, a memory 1002 including one or more computer-readable storage media, an input unit 1003, a display unit 1004, a sensor 1005, audio circuitry 1006, a WiFi module 1007, a processor 1008 including one or more processing cores, and a power supply 1009. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
The RF circuit 1001 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information of a base station and passes it to the one or more processors 1008 for processing, and transmits uplink data to the base station. The memory 1002 may be used to store software programs and modules; the processor 1008 executes various functional applications and processes lesion depth data by running the software programs and modules stored in the memory 1002. The input unit 1003 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to client settings and function control.
The display unit 1004 may be used to display information input by or provided to the client, as well as various graphical client interfaces of the server, which may be made up of graphics, text, icons, video, and any combination thereof.
The electronic device may also include at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors. The audio circuitry 1006 includes speakers that can provide an audio interface between the customer and the electronic device.
WiFi belongs to short-distance wireless transmission technology. Through the WiFi module 1007, the electronic device can help the client send and receive e-mails, browse webpages, access streaming media and the like, providing the client with wireless broadband internet access. Although fig. 8 shows the WiFi module 1007, it is understood that it is not an essential part of the electronic device and may be omitted as needed without changing the essence of the application.
The processor 1008 is the control center of the electronic device. It connects the various parts of the entire device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 1002 and calling the data stored in the memory 1002, thereby monitoring the device as a whole.
The electronic device also includes a power source 1009 (e.g., a battery) for providing power to various components, preferably, the power source may be logically coupled to the processor 1008 via a power management system, such that the power management system may manage charging, discharging, and power consumption.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 1008 in the server loads the executable file corresponding to the process of one or more application programs into the memory 1002 according to the following instructions, and the processor 1008 runs the application program stored in the memory 1002, so as to implement the following functions:
generating an intestinal body model with the same environment as the real intestinal body according to first modeling information of the real intestinal body, and generating a focus model on the inner wall of the intestinal body model according to second modeling information of a real focus; controlling a simulation camera to carry a scope in the intestinal body model and shooting the focus model according to a preset scope carrying mode and preset camera parameters to obtain a plurality of first focus plane graphs, and obtaining first focus depth graphs corresponding to the first focus plane graphs according to the preset scope carrying mode and the intestinal body model; training a depth estimation model based on each first lesion plan and the corresponding first lesion depth map; acquiring a second focus plane image shot by a real camera in the real intestine of a target case, and inputting the second focus plane image into a trained depth estimation model to obtain a second focus depth image of the target case; and obtaining the focus depth data of the target case according to the second focus depth map.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description, and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to implement the following functions:
generating an intestinal body model with the same environment as the real intestinal body according to first modeling information of the real intestinal body, and generating a focus model on the inner wall of the intestinal body model according to second modeling information of a real focus; controlling a simulation camera to carry a scope in the intestinal body model and shooting the focus model according to a preset scope carrying mode and preset camera parameters to obtain a plurality of first focus plane graphs, and obtaining first focus depth graphs corresponding to the first focus plane graphs according to the preset scope carrying mode and the intestinal body model; training a depth estimation model based on each first lesion plan and the corresponding first lesion depth map; acquiring a second focus plane image shot by a real camera in the real intestine of a target case, and inputting the second focus plane image into a trained depth estimation model to obtain a second focus depth image of the target case; and obtaining the focus depth data of the target case according to the second focus depth map.
The method, the device, the electronic device and the computer-readable storage medium for acquiring intestinal body lesion depth data provided by the embodiment of the present application are described in detail above, and a specific example is applied to illustrate the principle and the implementation of the present application, and the description of the above embodiment is only used to help understanding the technical scheme and the core idea of the present application; those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (7)

1. A method for acquiring depth data of intestinal body lesions is characterized by comprising the following steps:
generating an intestinal body model with the same environment as the real intestinal body according to first modeling information of the real intestinal body, and generating a focus model on the inner wall of the intestinal body model according to second modeling information of a real focus; the first modeling information comprises K modeling information groups and illumination modeling information, the K modeling information groups correspond to K different case groups respectively, each modeling information group comprises intestine modeling information and a real inner wall image of a real intestine of the same case group, K is a positive integer, and the step of generating the intestine model comprises the following steps: respectively generating K initial intestine body models of real intestine bodies of K different case groups according to the intestine body modeling information of each modeling information group, respectively rendering the inner walls of the K initial intestine body models according to the real inner wall image of each modeling information group to obtain K intestine body models with the same real intestine body inner wall environment of the K different case groups, and generating simulated illumination at different positions in each intestine body model according to the illumination modeling information to obtain K intestine body models with the same real intestine body illumination environment of the K different case groups; the second modeling information includes M lesion morphological modeling information and N lesion position modeling information, M and N are positive integers, and the step of generating a lesion model includes: respectively carrying out traversal value taking on the M focus form modeling information, the N focus position modeling information and the K intestinal body models, combining the focus form modeling information, the focus position modeling information and the intestinal body models which are obtained each time to obtain an M × N × K combined modeling information set, and generating M × N × K intestinal body models with focus models on the inner walls according to the M × N × K combined modeling information set;
controlling a simulation camera to move a scope in the intestine body model and shooting the focus model according to a preset scope moving mode and preset camera parameters to obtain a plurality of first focus plane graphs, and obtaining first focus depth graphs corresponding to the first focus plane graphs according to the preset scope moving mode and the intestine body model; the preset mirror moving mode comprises a preset mirror moving track of the analog camera and a preset orientation of the analog camera at each track point in the preset mirror moving track, and the preset camera parameters comprise a lens focal length of the analog camera, a transverse aperture and a longitudinal aperture of the imaging sensor;
training a depth estimation model based on each first lesion plan and the corresponding first lesion depth map;
acquiring a second focus plane image shot by a real camera in the real intestine of a target case, and inputting the second focus plane image into a trained depth estimation model to obtain a second focus depth image of the target case;
and obtaining the focus depth data of the target case according to the second focus depth map.
2. The method for acquiring intestinal body lesion depth data according to claim 1, further comprising, before the step of generating a lesion model on an inner wall of the intestinal body model based on second modeling information of a real lesion:
acquiring a focus form information set, wherein the focus form information set comprises a focus color information, b focus size information and c focus shape information, a, b and c are positive integers, and a, b and c are equal to M;
traversing the a focus color information, the b focus size information and the c focus shape information, and combining the focus color information, the focus size information and the focus shape information obtained each time to obtain a × b × c focus shape information;
and obtaining the morphological modeling information of the M focuses according to the morphological information of the a, b and c focuses.
3. The method for acquiring the intestinal body lesion depth data according to claim 1, wherein the step of controlling a simulation camera to move a scope in the intestinal body model and shooting the lesion model according to a preset scope moving mode and preset camera parameters to obtain a plurality of first lesion plan views comprises:
acquiring P types of preset mirror moving modes and Q types of preset camera parameters, wherein P and Q are positive integers;
traversing values are respectively taken for the P kinds of preset mirror moving modes and the Q kinds of preset camera parameters, and the preset mirror moving modes and the preset camera parameters obtained each time are combined to obtain P x Q kinds of preset shooting modes;
and respectively controlling a simulation camera to shoot the focus model according to the P × Q preset shooting modes for each intestinal body model to obtain a plurality of first focus plane graphs.
4. The method of claim 1, further comprising, after the step of obtaining lesion depth data of the target case from the second lesion depth map:
obtaining a lesion measurement size of the target case according to the second lesion plan;
and obtaining the real size of the focus of the target case according to the focus measuring size, the focus depth data and the real camera parameters of the real camera.
5. An intestinal body focus depth data acquisition device, comprising:
the generating module is used for generating an intestinal body model with the same environment as the real intestinal body according to first modeling information of the real intestinal body and generating a focus model on the inner wall of the intestinal body model according to second modeling information of a real focus; the first modeling information comprises K modeling information groups and illumination modeling information, the K modeling information groups correspond to K different case groups respectively, each modeling information group comprises intestine body modeling information and a real inner wall image of a real intestine body of the same case group, K is a positive integer, and the generating module is used for: respectively generating K initial intestine body models of real intestine bodies of K different case groups according to the intestine body modeling information of each modeling information group, respectively rendering the inner walls of the K initial intestine body models according to the real inner wall image of each modeling information group to obtain K intestine body models with the same real intestine body inner wall environment of the K different case groups, and generating simulated illumination at different positions in each intestine body model according to the illumination modeling information to obtain K intestine body models with the same real intestine body illumination environment of the K different case groups; the second modeling information includes M lesion morphology modeling information and N lesion position modeling information, M and N are positive integers, and the generation module is configured to: respectively carrying out traversal value taking on the M focus form modeling information, the N focus position modeling information and the K intestinal body models, combining the focus form modeling information, the focus position modeling information and the intestinal body models which are obtained each time to obtain an M × N × K combined modeling information set, and generating M × N × K intestinal body models with focus models on the inner walls according to the M × N × K combined modeling information set;
the first obtaining module is used for controlling a simulation camera to carry a scope in the intestinal body model and shooting the focus model according to a preset scope carrying mode and preset camera parameters to obtain a plurality of first focus plane graphs, and obtaining first focus depth graphs corresponding to the first focus plane graphs according to the preset scope carrying mode and the intestinal body model; the preset mirror moving mode comprises a preset mirror moving track of the analog camera and a preset orientation of the analog camera at each track point in the preset mirror moving track, and the preset camera parameters comprise a lens focal length of the analog camera, a transverse aperture and a longitudinal aperture of the imaging sensor;
the training module is used for training the depth estimation model based on each first focus plane image and the corresponding first focus depth image;
the second obtaining module is used for obtaining a second focus plane image shot by a real camera in the real intestine of the target case, and inputting the second focus plane image into the trained depth estimation model to obtain a second focus depth image of the target case;
and the third obtaining module is configured to obtain lesion depth data of the target case according to the second lesion depth map.
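Once the trained model has produced the second lesion depth map, lesion depth data for the target case can be read off the map, for example as depth statistics over the lesion region. A minimal sketch (the mask-based summary is an assumption; the claim only states that depth data are obtained from the depth map):

```python
import numpy as np

def lesion_depth_data(depth_map, lesion_mask):
    """Summarise lesion depth from a predicted depth map:
    min/max/mean depth over the pixels inside the lesion mask."""
    vals = depth_map[lesion_mask]
    return {"min": float(vals.min()),
            "max": float(vals.max()),
            "mean": float(vals.mean())}
```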
6. An electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to execute the application program in the memory to perform the steps of the intestinal body lesion depth data acquisition method according to any one of claims 1 to 4.
7. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to implement the steps of the intestinal body lesion depth data acquisition method according to any one of claims 1 to 4.
CN202211530528.0A 2022-12-01 2022-12-01 Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium Active CN115578437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211530528.0A CN115578437B (en) 2022-12-01 2022-12-01 Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115578437A CN115578437A (en) 2023-01-06
CN115578437B true CN115578437B (en) 2023-03-14

Family

ID=84590697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211530528.0A Active CN115578437B (en) 2022-12-01 2022-12-01 Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115578437B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103356155A (en) * 2013-06-24 2013-10-23 清华大学深圳研究生院 Virtual endoscope assisted cavity lesion examination system
CN108695001A (en) * 2018-07-16 2018-10-23 武汉大学人民医院(湖北省人民医院) Deep learning-based auxiliary system and method for predicting the extent of cancerous lesions
CN109615633A (en) * 2018-11-28 2019-04-12 武汉大学人民医院(湖北省人民医院) Deep learning-based Crohn's disease auxiliary diagnosis system and method under colonoscopy
CN112528529A (en) * 2020-09-08 2021-03-19 苏州普瑞斯仁信息科技有限公司 Laparoscopic naked-eye simulated reality method
CN112712528A (en) * 2020-12-24 2021-04-27 浙江工业大学 Intestinal lesion segmentation method combining a multi-scale U-shaped residual encoder with an integral reverse attention mechanism
CN112786189A (en) * 2021-01-05 2021-05-11 重庆邮电大学 Intelligent diagnosis system for COVID-19 based on deep learning
CN113222051A (en) * 2021-05-26 2021-08-06 长春大学 Image labeling method based on small intestine lesion characteristics
CN114004969A (en) * 2021-09-15 2022-02-01 苏州中科华影健康科技有限公司 Endoscope image focal zone detection method, device, equipment and storage medium
CN114533148A (en) * 2022-02-15 2022-05-27 佳木斯大学 Sampling system for gastric cancer detection in gastroenterology
WO2022141882A1 (en) * 2020-12-30 2022-07-07 上海睿刀医疗科技有限公司 Lesion recognition model construction apparatus and system based on historical pathological information
CN115115810A (en) * 2022-06-29 2022-09-27 广东工业大学 Multi-person collaborative focus positioning and enhanced display method based on spatial posture capture

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3252738A1 (en) * 2016-05-30 2017-12-06 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Method of assessing the performance of a human or robot carrying out a medical procedure and assessment tool


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Evidence for Immunosurveillance in Intestinal Premalignant Lesions; M. Karlsson et al.; Clinical Immunology; pp. 362-368 *
Research progress on artificial intelligence diagnosis methods for abnormalities in gastrointestinal endoscopic images; Zhang Lulu et al.; Progress in Biomedical Engineering; pp. 23-27 *

Also Published As

Publication number Publication date
CN115578437A (en) 2023-01-06

Similar Documents

Publication Publication Date Title
CN109285215B (en) Human body three-dimensional model reconstruction method and device and storage medium
US10521924B2 (en) System and method for size estimation of in-vivo objects
JP2022518745A (en) Target position acquisition method, equipment, computer equipment and computer program
US20240046557A1 (en) Method, device, and non-transitory computer-readable storage medium for reconstructing a three-dimensional model
CN113205560A (en) Calibration method, device and equipment of multi-depth camera and storage medium
US20190117167A1 (en) Image processing apparatus, learning device, image processing method, method of creating classification criterion, learning method, and computer readable recording medium
CN114663575A (en) Method, apparatus and computer-readable storage medium for image processing
CN115578437B (en) Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium
CN111415308A (en) Ultrasonic image processing method and communication terminal
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium
CN113744266B (en) Method and device for displaying focus detection frame, electronic equipment and storage medium
CN114697516B (en) Three-dimensional model reconstruction method, apparatus and storage medium
JP6698699B2 (en) Image processing apparatus, image processing method and program
CN111931725B (en) Human motion recognition method, device and storage medium
CN111050086B (en) Image processing method, device and equipment
CN113012207A (en) Image registration method and device
Ahmad et al. 3D reconstruction of gastrointestinal regions using shape-from-focus
CN115578385B (en) Method and device for acquiring disease information under enteroscope, electronic equipment and storage medium
CN116506732B (en) Image snapshot anti-shake method, device and system and computer equipment
US20230414066A1 (en) Endoscope image processing apparatus, endoscope image processing method, and endoscope image processing program
CN117689800A (en) Stylized image generation method and device, electronic equipment and storage medium
CN114648543A (en) Remote ultrasonic image annotation method, terminal device and storage medium
WO2023057986A2 (en) Computer-implemented systems and methods for analyzing examination quality for an endoscopic procedure
CN116863460A (en) Gesture recognition and training method, device, equipment and medium for gesture recognition model
CN116309770A (en) Binocular image parallax estimation method, binocular image parallax estimation device, binocular image parallax estimation equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20230106

Assignee: Shanghai Chuxian Medical Technology Co.,Ltd.

Assignor: Wuhan Chujingling Medical Technology Co.,Ltd.

Contract record no.: X2023420000041

Denomination of invention: Method, device, electronic device, and storage medium for obtaining intestinal lesion depth data

Granted publication date: 20230314

License type: Common License

Record date: 20230321
