WO2021082544A1 - Image processing and neural network training method, apparatus, device, medium and program - Google Patents

Image processing and neural network training method, apparatus, device, medium and program

Info

Publication number
WO2021082544A1
WO2021082544A1 (PCT/CN2020/103635)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
tracked
branch
target
tracking
Prior art date
Application number
PCT/CN2020/103635
Other languages
English (en)
French (fr)
Inventor
李卓威
夏清
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Priority to JP2021539385A (published as JP2022516196A)
Publication of WO2021082544A1
Priority to US17/723,580 (published as US20220237806A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Definitions

  • This application relates to image analysis technology, and relates to, but is not limited to, an image processing and neural network training method, apparatus, electronic device, computer storage medium, and computer program.
  • the extraction of pixels for the target to be tracked is helpful for further research on the target to be tracked.
  • For example, for more complex blood vessels such as the coronary arteries and cranial blood vessels, how to extract the pixel points of blood vessel images has gradually become a research hotspot; however, in the related art, how to achieve pixel tracking and extraction of the target to be tracked is a technical problem that urgently needs to be solved.
  • the embodiments of the present application expect to provide an image processing and neural network training method, device, electronic equipment, computer storage medium, and computer program.
  • The embodiment of the present application provides an image processing method, the method including: acquiring an image to be processed; determining at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked of the image to be processed; obtaining an evaluation value of the at least one pixel to be selected based on the current pixel and the at least one pixel to be selected, combined with a preset true value of the target to be tracked; and tracking the current pixel according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel.
  • In this way, the next pixel can be determined from the current pixel according to the evaluation values of the pixels to be selected; that is, pixel tracking and extraction of the target to be tracked can be realized accurately.
  • In some embodiments, the above image processing method further includes: before determining at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked of the image to be processed, determining whether the current pixel is located at an intersection between multiple branches on the target to be tracked; if so, selecting one of the multiple branches, and selecting the pixel to be selected from the pixels on the selected branch.
  • In this way, the embodiment of the present application can realize pixel tracking of the branches of the target to be tracked.
  • In some embodiments, the selecting one of the multiple branches includes: obtaining an evaluation value of each of the multiple branches based on the current pixel and the pixels of the multiple branches, combined with the preset true value of the target to be tracked; and selecting one branch from the multiple branches according to the evaluation value of each of the multiple branches.
  • In this way, one branch can be selected from the multiple branches according to their evaluation values; that is, a branch at the intersection can be selected accurately and reasonably.
  • the selecting a branch from the plurality of branches according to the evaluation value of each branch of the plurality of branches includes:
  • the branch with the highest evaluation value is selected.
  • the selected branch is the branch with the highest evaluation value, and the evaluation value of the branch is based on the true value of the target to be tracked. Therefore, the selected branch is more accurate.
  • In some embodiments, the above image processing method further includes: in response to performing pixel tracking on the selected branch and determining that a preset branch tracking stop condition is met, reselecting, for an intersection at which pixel tracking has not been completed, a branch on which pixel tracking has not been performed, and performing pixel tracking on the reselected branch, where the intersection at which pixel tracking has not been completed has a branch on which pixel tracking has not been performed; and in response to there being no intersection at which pixel tracking has not been completed, determining that pixel tracking of each branch of each intersection is completed.
  • In some embodiments, the reselecting a branch on which pixel tracking has not been performed includes: obtaining an evaluation value of each branch on which pixel tracking has not been performed, based on the intersection at which pixel tracking has not been completed and the pixels of each such branch, combined with the preset true value of the target to be tracked; and selecting one branch from these branches according to their evaluation values.
  • In this way, for an intersection of the target to be tracked at which pixel tracking has not been completed, one branch can be selected from the branches on which pixel tracking has not been performed according to their evaluation values; that is, a branch at the intersection can be selected accurately and reasonably.
  • the selecting a branch from the branches without pixel tracking according to the evaluation value of each branch without pixel tracking includes:
  • the branch with the highest evaluation value is selected.
  • the selected branch is the branch with the highest evaluation value among the branches without pixel tracking, and the evaluation value of the branch is based on the true value of the target to be tracked. Therefore, the selected branch is more accurate.
  • the preset branch tracking stop condition includes at least one of the following:
  • the next pixel to be tracked is at the end of the predetermined target to be tracked
  • the spatial entropy value of the next pixel to be tracked is greater than the preset spatial entropy value
  • the angle of the tracking route obtained in each of N consecutive tracking operations is greater than a set angle threshold, where the angle of the tracking route obtained each time indicates the included angle between the tracking routes obtained in two adjacent tracking operations, and the tracking route obtained each time indicates the connecting line between the pixels obtained in two adjacent tracking operations; N is an integer greater than or equal to 2.
  • the end of the target to be tracked can be pre-marked.
  • When the next pixel obtained by tracking is at a pre-marked end of the target to be tracked, it means that the corresponding branch no longer needs pixel tracking, and pixel tracking of the corresponding branch can be stopped at this time, which can improve the accuracy of pixel tracking; the spatial entropy of a pixel can indicate the instability of the pixel: the higher the spatial entropy of the pixel, the higher its instability, and the less appropriate it is to continue pixel tracking.
  • the tracking the current pixel according to the evaluation value of the at least one to-be-selected pixel to obtain the next pixel of the current pixel includes:
  • a pixel with the highest evaluation value is selected; the selected pixel with the highest evaluation value is determined as the next pixel of the current pixel.
  • next pixel is the pixel with the highest evaluation value among the pixels to be selected, and the evaluation value of the pixel is based on the true value of the target to be tracked. Therefore, the next pixel obtained is more accurate .
  • the target to be tracked is a vascular tree.
  • In this way, the next pixel can be determined from the current pixel according to the evaluation values of the pixels to be selected; that is, pixel tracking and extraction of the vascular tree can be achieved accurately.
  • the embodiment of the present application also provides a neural network training method, including:
  • the sample image is input to the initial neural network, and the initial neural network is used to perform the following steps: determining at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked in the sample image; obtaining an evaluation value of the at least one pixel to be selected based on the current pixel and the at least one pixel to be selected, combined with the preset true value of the target to be tracked; and tracking the current pixel according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel; the network parameter value of the initial neural network is adjusted according to each pixel obtained by tracking and the preset true value of the target to be tracked; and the steps of acquiring the sample image, processing the sample image using the initial neural network, and adjusting the network parameter value of the initial neural network are repeated until each pixel obtained by the initial neural network after adjustment meets a preset accuracy requirement, so as to obtain a trained neural network;
  • In this way, when training the neural network, for the target to be tracked, the next pixel can be determined from the current pixel according to the evaluation values of the pixels to be selected; that is, pixel tracking and extraction of the target to be tracked can be realized accurately, so that the trained neural network can accurately realize pixel tracking and extraction of the target to be tracked.
  • An embodiment of the present application also provides an image processing device, the device includes: a first acquisition module and a first processing module, wherein:
  • the first acquisition module is configured to acquire the image to be processed
  • the first processing module is configured to: determine at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked of the image to be processed; obtain an evaluation value of the at least one pixel to be selected based on the current pixel and the at least one pixel to be selected, combined with the preset true value of the target to be tracked; and track the current pixel according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel.
  • In this way, the next pixel can be determined from the current pixel according to the evaluation values of the pixels to be selected; that is, pixel tracking and extraction of the target to be tracked can be realized accurately.
  • In some embodiments, the first processing module is further configured to: before at least one pixel to be selected on the target to be tracked is determined based on the current pixel on the target to be tracked of the image to be processed, determine whether the current pixel is located at an intersection between multiple branches on the target to be tracked; if so, select one of the multiple branches, and select the pixel to be selected from the pixels on the selected branch.
  • In this way, the embodiment of the present application can realize pixel tracking of the branches of the target to be tracked.
  • In some embodiments, the first processing module is configured to: obtain an evaluation value of each of the multiple branches based on the current pixel and the pixels of the multiple branches, combined with the preset true value of the target to be tracked; and select one branch from the multiple branches according to the evaluation value of each of the multiple branches.
  • In this way, one branch can be selected from the multiple branches according to their evaluation values; that is, a branch at the intersection can be selected accurately and reasonably.
  • the first processing module is configured to select the branch with the highest evaluation value among the multiple branches.
  • the selected branch is the branch with the highest evaluation value, and the evaluation value of the branch is based on the true value of the target to be tracked. Therefore, the selected branch is more accurate.
  • In some embodiments, the first processing module is further configured to: in response to performing pixel tracking on the selected branch and determining that the preset branch tracking stop condition is met, reselect, for an intersection at which pixel tracking has not been completed, a branch on which pixel tracking has not been performed, and perform pixel tracking on the reselected branch, where the intersection at which pixel tracking has not been completed has a branch on which pixel tracking has not been performed; and in response to there being no intersection at which pixel tracking has not been completed, determine that pixel tracking of each branch of each intersection is completed.
  • In some embodiments, the first processing module is configured to: obtain an evaluation value of each branch on which pixel tracking has not been performed, based on the intersection at which pixel tracking has not been completed and the pixels of each such branch of that intersection, combined with the preset true value of the target to be tracked; and select one branch from the branches on which pixel tracking has not been performed according to their evaluation values.
  • In this way, for an intersection of the target to be tracked at which pixel tracking has not been completed, one branch can be selected from the branches on which pixel tracking has not been performed according to their evaluation values; that is, a branch at the intersection can be selected accurately and reasonably.
  • the first processing module is configured to select the branch with the highest evaluation value among the branches without pixel tracking.
  • the selected branch is the branch with the highest evaluation value among the branches without pixel tracking, and the evaluation value of the branch is based on the true value of the target to be tracked. Therefore, the selected branch is more accurate.
  • the preset branch tracking stop condition includes at least one of the following:
  • the next pixel to be tracked is at the end of the predetermined target to be tracked
  • the spatial entropy value of the next pixel to be tracked is greater than the preset spatial entropy value
  • the angle of the tracking route obtained in each of N consecutive tracking operations is greater than a set angle threshold, where the angle of the tracking route obtained each time indicates the included angle between the tracking routes obtained in two adjacent tracking operations, and the tracking route obtained each time indicates the connecting line between the pixels obtained in two adjacent tracking operations; N is an integer greater than or equal to 2.
  • the end of the target to be tracked can be pre-marked.
  • When the next pixel obtained by tracking is at a pre-marked end of the target to be tracked, it means that the corresponding branch no longer needs pixel tracking, and pixel tracking of the corresponding branch can be stopped at this time, which can improve the accuracy of pixel tracking; the spatial entropy of a pixel can indicate the instability of the pixel: the higher the spatial entropy of the pixel, the higher its instability, and the less appropriate it is to continue pixel tracking.
  • In some embodiments, the first processing module is configured to: select, from the at least one pixel to be selected, the pixel with the highest evaluation value; and determine the selected pixel with the highest evaluation value as the next pixel of the current pixel.
  • next pixel is the pixel with the highest evaluation value among the pixels to be selected, and the evaluation value of the pixel is based on the true value of the target to be tracked. Therefore, the next pixel obtained is more accurate .
  • the target to be tracked is a vascular tree.
  • In this way, the next pixel can be determined from the current pixel according to the evaluation values of the pixels to be selected; that is, pixel tracking and extraction of the vascular tree can be achieved accurately.
  • the embodiment of the present application also provides a neural network training device, the device includes: a second acquisition module, a second processing module, an adjustment module, and a third processing module, wherein,
  • the second acquisition module is configured to acquire a sample image
  • the second processing module is configured to input the sample image into an untrained initial neural network, and use the initial neural network to perform the following steps: determining at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked in the sample image; obtaining an evaluation value of the at least one pixel to be selected based on the current pixel and the at least one pixel to be selected, combined with a preset true value of the target to be tracked; and tracking the current pixel according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel;
  • An adjustment module configured to adjust the network parameter value of the initial neural network according to each pixel point obtained by tracking and a preset true value of the target to be tracked;
  • the third processing module is configured to repeatedly execute the steps of acquiring the sample image, processing the sample image using the initial neural network, and adjusting the network parameter value of the initial neural network, until each pixel obtained by the initial neural network after the network parameter value adjustment meets the preset accuracy requirement, to obtain the trained neural network.
  • the embodiment of the present application also provides an electronic device, including a processor and a memory configured to store a computer program that can run on the processor; wherein,
  • wherein the processor is configured to, when running the computer program, execute any one of the above image processing methods or any one of the above neural network training methods.
  • the embodiments of the present application also provide a computer storage medium on which a computer program is stored, and when the computer program is executed by a processor, any one of the above-mentioned image processing methods or any one of the above-mentioned neural network training methods is implemented.
  • The embodiment of the present application also provides a computer program, including computer-readable code, where when the computer-readable code runs in an electronic device, a processor in the electronic device executes any one of the above image processing methods or any one of the above neural network training methods.
  • In the embodiments of the present application, the image to be processed is acquired; at least one pixel to be selected on the blood vessel tree is determined based on the current pixel on the blood vessel tree of the image to be processed; an evaluation value of the at least one pixel to be selected is obtained based on the current pixel and the at least one pixel to be selected, combined with a preset true value of the blood vessel tree; and the current pixel is tracked according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel.
  • the next pixel can be determined from the current pixel according to the evaluation value of the pixel to be selected, that is, the pixel tracking and extraction of the target to be tracked can be accurately realized .
  • FIG. 1A is a flowchart of an image processing method according to an embodiment of the application.
  • FIG. 1B is a schematic diagram of an application scenario of an embodiment of the application.
  • Fig. 2 is a flowchart of a neural network training method according to an embodiment of the application
  • FIG. 3 is a schematic diagram of the composition structure of an image processing device according to an embodiment of the application.
  • FIG. 4 is a schematic diagram of the composition structure of a neural network training device according to an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the application.
  • the terms "including”, “including” or any other variants thereof are intended to cover non-exclusive inclusion, so that a method or device including a series of elements not only includes what is clearly stated Elements, and also include other elements not explicitly listed, or elements inherent to the implementation of the method or device. Without more restrictions, the element defined by the sentence “including a" does not exclude the existence of other related elements in the method or device that includes the element (such as steps or steps in the method).
  • the unit in the device for example, the unit may be a part of a circuit, a part of a processor, a part of a program or software, etc.).
  • the image processing and neural network training methods provided in the embodiments of this application include a series of steps, but the image processing and neural network training methods provided in the embodiments of this application are not limited to the recorded steps.
  • The image processing and neural network training devices provided in the embodiments of this application include a series of modules, but the devices provided in the embodiments of the present application are not limited to the explicitly recorded modules, and may also include modules that need to be set for acquiring relevant information or performing processing based on the information.
  • the embodiments of the present application can be applied to a computer system composed of a terminal and a server, and can be operated with many other general-purpose or special-purpose computing system environments or configurations.
  • The terminal can be a thin client, a thick client, a handheld or laptop device, a microprocessor-based system, a set-top box, a programmable consumer electronic product, a network personal computer, a small computer system, and so on. The server can be a server computer system, a small computer system, a large computer system, a distributed cloud computing environment including any of the above systems, and so on.
  • Electronic devices such as terminals and servers can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
  • program modules may include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment. In the distributed cloud computing environment, tasks are executed by remote processing equipment linked through a communication network.
  • program modules may be located on a storage medium of a local or remote computing system including a storage device.
  • The Deep Reinforcement Learning (DRL) method, produced by the combination of deep learning and reinforcement learning, has achieved important results in the fields of artificial intelligence and robotics in recent years; exemplarily, the DRL method can be used to extract the blood vessel centerline.
  • the blood vessel centerline extraction task can be constructed as a sequential decision model to use the DRL model for training and learning.
  • the above-mentioned method for extracting the blood vessel centerline is limited to a single blood vessel.
  • the simple structure model cannot handle more complex tree-like structures such as the coronary arteries of the heart and the blood vessels of the brain.
  • In view of this, an image processing method is proposed in the embodiments of the present application.
  • FIG. 1A is a flowchart of an image processing method according to an embodiment of the application. As shown in FIG. 1A, the flow may include:
  • Step 101 Obtain an image to be processed.
  • the image to be processed may be an image including the target to be tracked, and the target to be tracked may include multiple branches.
  • the target to be tracked is a blood vessel tree, which represents a blood vessel having a tree-like structure, and the tree-like blood vessel includes at least one bifurcation point; in some embodiments of the present application, the tree-like blood vessel may be Cardiac coronary arteries, cranial blood vessels, etc.; the image to be processed can be a three-dimensional medical image or other images containing tree-like blood vessels.
  • a three-dimensional image including the coronary arteries of the heart can be obtained based on the coronary angiography of the heart .
  • Step 102 Determine at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked in the image to be processed.
  • the current pixel on the target to be tracked can be any pixel of the target to be tracked.
  • the current pixel on the blood vessel tree can represent Any point of the vascular tree.
  • the current pixel on the vascular tree may be a pixel on the centerline of the vascular tree or another pixel on the vascular tree, which is not limited in the embodiments of the present application.
  • At least one pixel to be selected on the target to be tracked may be a pixel adjacent to the current pixel. Therefore, after the current pixel on the target to be tracked in the image to be processed is determined, at least one pixel to be selected on the target to be tracked can be determined according to the position information of the current pixel.
  • In some embodiments, the pre-obtained structural information of the target to be tracked can be used to determine the connection trend of the local region where the current pixel is located, and then combined with the shape and size information of the target to be tracked to calculate at least one pixel to be selected.
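  • As a minimal illustration of this step (a sketch only, not the procedure prescribed by this application), the candidate pixels can be pictured as the immediate spatial neighbours of the current pixel; the 26-neighbourhood and the one-voxel step size below are illustrative assumptions.

```python
import numpy as np

def candidate_pixels(current, volume_shape):
    """Illustrative candidate generation: return the 26-connected neighbours of
    `current` (a (z, y, x) voxel coordinate) that lie inside the image volume.
    The 26-neighbourhood and the one-voxel step are assumptions, not requirements."""
    offsets = [(dz, dy, dx)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    current = np.asarray(current)
    shape = np.asarray(volume_shape)
    candidates = []
    for off in offsets:
        p = current + np.asarray(off)
        if np.all(p >= 0) and np.all(p < shape):
            candidates.append(tuple(int(v) for v in p))
    return candidates
```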
  • Step 103 Based on the current pixel and at least one pixel to be selected, combined with the preset true value of the target to be tracked, an evaluation value of at least one pixel to be selected is obtained.
  • the preset true value of the target to be tracked may indicate the pre-marked connection of pixels on the target to be tracked, and the connection of pixel points may indicate the path structure information of the target to be tracked.
  • In some embodiments, the pixel connections representing the path of the target to be tracked can be manually marked; for example, when the target to be tracked is a vascular tree, the centerline of the vascular tree can be marked, and the marked centerline of the vascular tree is taken as the true value of the vascular tree. It should be noted that the above is only an exemplary description of the true value of the target to be tracked, and the embodiments of the present application are not limited thereto.
  • The evaluation value of a pixel to be selected may indicate its suitability as the next pixel of the current pixel. In actual implementation, the suitability of each pixel to be selected as the next pixel can be determined based on the preset true value of the target to be tracked: the higher the suitability of a pixel to be selected as the next pixel, the higher its evaluation value. In some embodiments, when a pixel to be selected is taken as the next pixel, the evaluation value reflects the degree to which the route from the current pixel to that next pixel matches the preset true value of the target to be tracked; the higher the degree of matching, the higher the evaluation value of the pixel to be selected.
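  • As a rough sketch of how such an evaluation value could be scored against the preset true value (an assumption for illustration, not the scoring rule fixed by this application), a candidate can be rated by its distance to the annotated centerline points; a fuller score would also take the current pixel into account, for example to discourage moving backwards.

```python
import numpy as np

def evaluation_value(candidate, centerline_points):
    """Illustrative evaluation value: the closer the candidate pixel lies to the
    annotated centerline (the preset true value), the higher the score.

    `centerline_points` is an (N, 3) array of ground-truth voxel coordinates;
    the exponential distance score is an assumption used only for illustration."""
    candidate = np.asarray(candidate, dtype=float)
    dists = np.linalg.norm(np.asarray(centerline_points, dtype=float) - candidate, axis=1)
    return float(np.exp(-dists.min()))  # in (0, 1], equals 1.0 exactly on the true value
```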
  • Step 104 Track the current pixel according to the evaluation value of at least one to-be-selected pixel to obtain the next pixel.
  • the pixel with the highest evaluation value can be selected from at least one pixel to be selected; the selected pixel with the highest evaluation value is determined as the next pixel.
  • next pixel is the pixel with the highest evaluation value among the pixels to be selected, and the evaluation value of the pixel is based on the true value of the target to be tracked. Therefore, the next pixel obtained is more accurate .
  • It can be understood that, during pixel tracking, the current pixel is constantly changing. In some embodiments, pixel tracking can start from the starting point of the target to be tracked; that is, the starting point of the target to be tracked is taken as the current pixel, and the next pixel is obtained through pixel tracking; the tracked pixel is then taken as the current pixel to continue pixel tracking. In this way, by repeating step 102 to step 104, the pixel route of the target to be tracked can be extracted.
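  • Putting step 102 to step 104 together, a minimal greedy tracking loop might look like the sketch below; `candidate_pixels` and `evaluation_value` are the illustrative helpers sketched above, and the visited set and fixed step budget stand in for the branch tracking stop conditions described later.

```python
def track_pixels(start, centerline_points, volume_shape, max_steps=1000):
    """Illustrative loop over steps 102-104: starting from the starting point,
    repeatedly take the candidate with the highest evaluation value as the next
    current pixel, and return the extracted pixel route."""
    current = tuple(start)
    path = [current]
    visited = {current}
    for _ in range(max_steps):
        # step 102: determine the candidate pixels around the current pixel
        candidates = [c for c in candidate_pixels(current, volume_shape) if c not in visited]
        if not candidates:
            break
        # step 103: obtain the evaluation value of each candidate from the preset true value
        scored = [(evaluation_value(c, centerline_points), c) for c in candidates]
        # step 104: the candidate with the highest evaluation value becomes the next pixel
        _, current = max(scored)
        visited.add(current)
        path.append(current)
    return path
```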
  • The starting point of the target to be tracked may be predetermined, and may be a pixel at the entrance of the target to be tracked or another pixel of the target to be tracked. In some embodiments of the present application, when the target to be tracked is a vascular tree, the starting point of the vascular tree may be a pixel at the entrance of the vascular tree; when the vascular tree is a coronary artery, the starting point of the vascular tree may be a pixel at the entrance of the coronary artery of the heart.
  • In some embodiments, when the target to be tracked is a blood vessel tree and the starting point of the blood vessel tree is the center point of the entrance of the blood vessel tree, the centerline of the blood vessel tree can be extracted through the above pixel tracking process.
  • In some embodiments, the starting point of the target to be tracked can be determined according to the position information of the starting point input by the user, or a trained neural network for determining the starting point of the target to be tracked can be used to process the image to be processed to obtain the position of the starting point of the target to be tracked.
  • the network structure of the neural network used to determine the starting point of the target to be tracked is not limited.
  • steps 101 to 104 can be implemented based on the processor of the image processing device.
  • The image processing device described above can be a User Equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on.
  • The above processor can be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
  • In this way, the next pixel can be determined from the current pixel according to the evaluation values of the pixels to be selected; that is, pixel tracking and extraction of the target to be tracked can be realized accurately.
  • In some embodiments, before at least one pixel to be selected on the target to be tracked is determined based on the current pixel on the target to be tracked of the image to be processed, it can also be determined whether the current pixel is located at an intersection between multiple branches on the target to be tracked. If the current pixel is located at an intersection between multiple branches on the target to be tracked, one of the multiple branches is selected, and the pixel to be selected is chosen from the pixels on the selected branch; that is, pixel tracking is performed on the selected branch. Specifically, after one of the multiple branches is selected, step 102 to step 104 can be executed for the selected branch to realize pixel tracking of that branch. If the current pixel is not located at an intersection between multiple branches on the target to be tracked, step 102 to step 104 are executed directly, and the obtained next pixel of the current pixel is then taken as the new current pixel.
  • a binary classification neural network may be used to determine whether the current pixel is located at the intersection between multiple branches on the target to be tracked.
  • In the embodiments of the present application, the network structure of the binary classification neural network is not limited, as long as the binary classification neural network can determine whether the current pixel is located at an intersection between multiple branches on the target to be tracked; for example, the network structure of the binary classification neural network can be a Convolutional Neural Network (CNN), and so on.
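  • A binary classification network of the kind mentioned here could, for instance, look at a small 3D patch centred on the current pixel and output the probability that the pixel is an intersection; the patch size, channel widths, and layer layout below are illustrative assumptions rather than the network prescribed by this application.

```python
import torch
import torch.nn as nn

class BifurcationClassifier(nn.Module):
    """Illustrative binary classifier: is the current pixel at an intersection
    between multiple branches? Input is a 1-channel 3D patch centred on the
    current pixel; output is a single logit (sigmoid > 0.5 read as "intersection")."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # pool to a fixed-size feature regardless of patch size
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        x = self.features(patch)              # (B, 32, 1, 1, 1)
        return self.classifier(x.flatten(1))  # (B, 1) logit
```

  • For example, `torch.sigmoid(BifurcationClassifier()(torch.zeros(1, 1, 19, 19, 19)))` yields an intersection probability for a 19x19x19 patch (the patch size being another assumption).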
  • In some embodiments, when pixel tracking first reaches an intersection, none of the branches corresponding to the intersection has undergone pixel tracking; in this case, one branch of the intersection can be selected arbitrarily from its branches.
  • In some other embodiments, the evaluation value of each of the multiple branches can be obtained, and one branch is selected from the multiple branches according to the evaluation value of each of the multiple branches.
  • In actual implementation, a candidate next pixel may be determined in each of the above multiple branches, and the evaluation value of that candidate pixel may be used as the evaluation value of the corresponding branch.
  • In this way, one branch can be selected from the multiple branches according to their evaluation values; that is, a branch at the intersection can be selected accurately and reasonably.
  • In some embodiments, the branch with the highest evaluation value may be selected among the multiple branches.
  • the selected branch is the branch with the highest evaluation value, and the evaluation value of the branch is based on the true value of the target to be tracked. Therefore, the selected branch is more accurate.
  • In some embodiments, in response to performing pixel tracking on the selected branch and determining that the preset branch tracking stop condition is met, a branch on which pixel tracking has not been performed is reselected for an intersection at which pixel tracking has not been completed, and pixel tracking is performed on the reselected branch; the intersection at which pixel tracking has not been completed has a branch on which pixel tracking has not been performed. In response to there being no intersection at which pixel tracking has not been completed, it is determined that pixel tracking of each branch of each intersection is completed.
  • In actual implementation, each intersection that is reached can be added to a jump list, so that jumps can be made during the pixel tracking process of the target to be tracked.
  • In some embodiments, an intersection can be selected from the jump list, and it is then determined whether the selected intersection has a branch on which pixel tracking has not been performed; if it does, a branch on which pixel tracking has not been performed is reselected for the selected intersection, and pixel tracking is performed on the reselected branch; if it does not, the intersection can be deleted from the jump list.
  • For the implementation of reselecting a branch on which pixel tracking has not been performed, for example, the evaluation value of each such branch can be obtained based on the intersection at which pixel tracking has not been completed and the pixels of each branch on which pixel tracking has not been performed, combined with the preset true value of the target to be tracked; one branch is then selected from these branches according to their evaluation values.
  • In actual implementation, a candidate next pixel can be determined in each untracked branch corresponding to the intersection, and the evaluation value of that candidate pixel can be used as the evaluation value of the corresponding branch.
  • In this way, for an intersection of the target to be tracked at which pixel tracking has not been completed, one branch can be selected from the branches on which pixel tracking has not been performed according to their evaluation values; that is, a branch at the intersection can be selected accurately and reasonably.
  • For the implementation of selecting one branch from the branches on which pixel tracking has not been performed according to their evaluation values, for example, the branch with the highest evaluation value can be selected among these branches.
  • the selected branch is the branch with the highest evaluation value among the branches without pixel tracking, and the evaluation value of the branch is based on the true value of the target to be tracked. Therefore, the selected branch is more accurate.
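  • The jump-list bookkeeping described above can be pictured as the sketch below: intersections are recorded as they are met, and whenever a branch finishes, the tracker jumps back to an intersection that still has an untracked branch, picking the one with the highest evaluation value; the callables `track_branch`, `untracked_branches`, and `branch_value` are illustrative stand-ins for the steps described in the text.

```python
def track_tree(start, track_branch, untracked_branches, branch_value):
    """Illustrative jump-list traversal of a tree-like target to be tracked.

    `track_branch(entry)` follows one branch until a stop condition and returns
    (path, intersections_met); `untracked_branches(intersection)` returns the branch
    entry pixels of that intersection not yet tracked; `branch_value(entry)` is the
    evaluation value used to reselect among them. All three are assumed helpers."""
    extracted = []
    jump_list = []

    path, met = track_branch(start)        # track the main road first
    extracted.append(path)
    jump_list.extend(met)                  # newly met intersections join the jump list

    while jump_list:
        intersection = jump_list[-1]
        remaining = untracked_branches(intersection)
        if not remaining:                  # every branch of this intersection is tracked
            jump_list.pop()                # delete the intersection from the jump list
            continue
        entry = max(remaining, key=branch_value)   # reselect the highest-valued branch
        path, met = track_branch(entry)
        extracted.append(path)
        jump_list.extend(met)
    return extracted                       # one pixel route per tracked branch
```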
  • the preset branch tracking stop condition may include at least one of the following:
  • the next pixel to be tracked is at the end of the predetermined target to be tracked
  • the spatial entropy value of the next pixel to be tracked is greater than the preset spatial entropy value
  • the angle of the tracking route obtained in each of N consecutive tracking operations is greater than a set angle threshold, where the angle of the tracking route obtained each time indicates the included angle between the tracking routes obtained in two adjacent tracking operations, and the tracking route obtained each time indicates the connecting line between the pixels obtained in two adjacent tracking operations; N is an integer greater than or equal to 2.
  • N is a hyperparameter of the first neural network; the set angle threshold can be preset according to actual application requirements, for example, the set angle threshold is greater than 10 degrees.
  • The end of the target to be tracked can be pre-marked. When the next pixel obtained by tracking is at a pre-marked end of the target to be tracked, it means that the corresponding branch no longer needs pixel tracking, and pixel tracking of the corresponding branch can be stopped at this time, which can improve the accuracy of pixel tracking; the spatial entropy of a pixel can indicate the instability of the pixel: the higher the spatial entropy of the pixel, the higher its instability, and the less appropriate it is to continue pixel tracking.
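  • The three stop conditions can be checked with a small helper such as the one below; `spatial_entropy` is an assumed callable, and the default values of N, the entropy threshold, and the angle threshold merely echo the examples in the text rather than prescribed settings.

```python
import math

def should_stop(next_pixel, tracked_path, end_points, spatial_entropy,
                entropy_threshold=0.9, angle_threshold_deg=10.0, n=3):
    """Illustrative check of the preset branch tracking stop conditions.

    `end_points` is the set of pre-marked end pixels, `tracked_path` is the list of
    pixels tracked so far on this branch, and `spatial_entropy(pixel)` is assumed."""
    # 1. the next pixel is at a pre-marked end of the target to be tracked
    if tuple(next_pixel) in end_points:
        return True
    # 2. the spatial entropy of the next pixel exceeds the preset value
    if spatial_entropy(next_pixel) > entropy_threshold:
        return True
    # 3. the angle between adjacent tracking routes exceeds the threshold N times in a row
    pts = [tuple(p) for p in tracked_path] + [tuple(next_pixel)]
    angles = []
    for a, b, c in zip(pts, pts[1:], pts[2:]):
        v1 = tuple(b[i] - a[i] for i in range(3))
        v2 = tuple(c[i] - b[i] for i in range(3))
        norm = math.dist(a, b) * math.dist(b, c)
        if norm == 0:
            continue
        cos_angle = max(-1.0, min(1.0, sum(x * y for x, y in zip(v1, v2)) / norm))
        angles.append(math.degrees(math.acos(cos_angle)))
    return len(angles) >= n and all(angle > angle_threshold_deg for angle in angles[-n:])
```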
  • the main road and branch road of the target to be tracked can be tracked.
  • the main road of the target to be tracked can represent the route from the starting point of the target to be tracked to the first intersection point to be tracked;
  • the DRL method can also be used for pixel tracking.
  • In some embodiments, a neural network with a Deep Q-Network (DQN) framework can be used to track the main road or each branch of the target to be tracked; for example, the algorithms used in the DQN framework can include at least one of the following: Double-DQN, Dueling-DQN, prioritized experience replay, and noisy layers; after the next pixel is determined, the network parameters of the neural network with the DQN framework can be updated according to the evaluation value of the next pixel.
  • the network structure of the neural network with the DQN framework is not limited.
  • the neural network with the DQN framework includes three layers of convolutional layers and two layers of fully connected layers for feature downsampling.
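  • As a rough picture of the structure just described (three convolutional layers for feature downsampling followed by two fully connected layers), a Q-network over a 3D patch might be sketched as follows; the 26-direction action space, the input patch, and the channel widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrackerQNetwork(nn.Module):
    """Illustrative DQN-style Q-network: three convolutional layers for feature
    downsampling followed by two fully connected layers, mapping a 3D patch around
    the current pixel to one Q-value per candidate move (26 directions assumed)."""
    def __init__(self, num_actions: int = 26):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),   # lazy layer avoids hard-coding the patch size
            nn.Linear(256, num_actions),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(patch))     # (B, num_actions) Q-values

# e.g. TrackerQNetwork()(torch.zeros(1, 1, 19, 19, 19)) gives Q-values for a 19^3 patch
```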
  • In some embodiments, the above neural network for determining the starting point of the target to be tracked, the binary classification neural network, or the neural network with the DQN framework may adopt a shallow neural network or a deep neural network; when a shallow neural network is adopted, the speed and efficiency of data processing by the neural network can be improved.
  • In the embodiments of the present application, only the starting point of the target to be tracked needs to be determined, and the above image processing method can then be used to complete the pixel tracking task of the target to be tracked; that is, the embodiments of the present application can automatically complete the pixel tracking task of the target to be tracked for the acquired image to be processed.
  • FIG. 1B is a schematic diagram of an application scenario of an embodiment of the application.
  • the coronary blood vessel map 21 of the heart is the image to be processed above.
  • The coronary blood vessel map 21 of the heart can be input to the image processing device 22, which processes it using the image processing method described in the foregoing embodiments, thereby realizing tracking and extraction of the pixels of the coronary blood vessel map of the heart.
  • FIG. 1B is only an exemplary scenario of an embodiment of the present application, and the present application does not limit specific application scenarios.
  • FIG. 2 is a flowchart of the neural network training method of an embodiment of the application. As shown in FIG. 2, the process may include:
  • Step 201 Obtain a sample image.
  • the sample image may be an image including the target to be tracked.
  • Step 202 Input the sample image into the initial neural network, and use the initial neural network to perform the following steps: determine at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked in the sample image; obtain an evaluation value of the at least one pixel to be selected based on the current pixel and the at least one pixel to be selected, combined with the preset true value of the target to be tracked; and track the current pixel according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel.
  • Step 203 Adjust the network parameter value of the initial neural network according to the tracked pixels and the preset true value of the target to be tracked.
  • In actual implementation, the loss of the initial neural network can be obtained according to the tracked pixels and the preset true value of the target to be tracked, and the network parameter value of the initial neural network can then be adjusted according to this loss; in some embodiments of the present application, the network parameter value of the initial neural network is adjusted with the goal of reducing the loss of the initial neural network.
  • the true value of the target to be tracked can be marked on the labeling platform for neural network training.
  • Step 204 Determine whether each pixel obtained by the initial neural network adjusted based on the network parameter value meets the preset accuracy requirements, if not, re-execute steps 201 to 204; if yes, execute step 205.
  • the preset accuracy requirement may be determined according to the loss of the initial neural network; for example, the preset accuracy requirement may be: the loss of the initial neural network is less than the set loss.
  • the set loss can be preset according to actual application requirements.
  • Step 205 Use the initial neural network after adjusting the network parameter values as the trained neural network.
  • The trained neural network can be used to directly process the image to be processed, that is, to track each pixel of the target to be tracked in the image to be processed; in other words, through end-to-end training, the neural network used for tracking the pixels of the target to be tracked has strong portability.
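  • Steps 201 to 205 amount to an iterative loop around the tracking network; the sketch below assumes a `model` that maps a sample image to tracked pixels, a `loss_from_tracking` helper that compares them with the annotated true value, and a loss threshold standing in for the preset accuracy requirement, all of which are illustrative assumptions rather than the training procedure fixed by this application.

```python
import torch

def train(model, sample_loader, loss_from_tracking,
          lr=1e-4, loss_target=0.05, max_epochs=100):
    """Illustrative training loop for steps 201-205: run the initial network on the
    sample images, compare the tracked pixels with the preset true value, adjust the
    network parameters, and repeat until the assumed accuracy requirement is met."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for sample_image, true_value in sample_loader:       # step 201: obtain a sample image
            tracked = model(sample_image)                     # step 202: track pixels with the network
            loss = loss_from_tracking(tracked, true_value)    # compare with the preset true value
            optimizer.zero_grad()
            loss.backward()                                   # step 203: adjust the parameters
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / max(1, len(sample_loader)) < loss_target:   # step 204: accuracy check
            break
    return model                                              # step 205: the trained network
```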
  • steps 201 to 205 can be implemented by a processor in an electronic device.
  • The aforementioned processor can be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
  • In this way, when training the neural network, for the target to be tracked, the next pixel can be determined from the current pixel according to the evaluation values of the pixels to be selected; that is, pixel tracking and extraction of the target to be tracked can be realized accurately, so that the trained neural network can accurately realize pixel tracking and extraction of the target to be tracked.
  • In some embodiments, the initial neural network may also be used to perform the following steps: before at least one pixel to be selected on the target to be tracked is determined based on the current pixel on the target to be tracked of the sample image, determining whether the current pixel is located at an intersection between multiple branches on the target to be tracked; if the current pixel is located at an intersection between multiple branches on the target to be tracked, selecting one of the multiple branches and choosing the pixel to be selected from the pixels on the selected branch, that is, performing pixel tracking on the selected branch. Specifically, after one of the multiple branches is selected, step 102 to step 104 can be executed for the selected branch to realize pixel tracking of that branch. If the current pixel is not located at an intersection between multiple branches on the target to be tracked, step 102 to step 104 are executed directly, and the obtained next pixel of the current pixel is then taken as the new current pixel.
  • In some embodiments, the initial neural network may also be used to perform the following steps: in response to performing pixel tracking on the selected branch and determining that the preset branch tracking stop condition is met, reselecting, for an intersection at which pixel tracking has not been completed, a branch on which pixel tracking has not been performed, and performing pixel tracking on the reselected branch, where the intersection at which pixel tracking has not been completed has a branch on which pixel tracking has not been performed; and in response to there being no intersection at which pixel tracking has not been completed, determining that pixel tracking of each branch of each intersection is completed.
  • The writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • an embodiment of the present application also proposes an image processing device.
  • FIG. 3 is a schematic diagram of the composition structure of an image processing apparatus according to an embodiment of the application.
  • the apparatus may include a first acquisition module 301 and a first processing module 302, wherein,
  • the first obtaining module 301 is configured to obtain an image to be processed
  • the first processing module 302 is configured to: determine at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked in the image to be processed; obtain an evaluation value of the at least one pixel to be selected based on the current pixel and the at least one pixel to be selected, combined with the preset true value of the target to be tracked; and track the current pixel according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel.
  • In some embodiments, the first processing module 302 is further configured to: before at least one pixel to be selected on the target to be tracked is determined based on the current pixel on the target to be tracked of the image to be processed, determine whether the current pixel is located at an intersection between multiple branches on the target to be tracked; if so, select one of the multiple branches, and select the pixel to be selected from the pixels on the selected branch.
  • In some embodiments, the first processing module 302 is configured to: obtain an evaluation value of each of the multiple branches based on the current pixel and the pixels of the multiple branches, combined with the preset true value of the target to be tracked; and select one branch from the multiple branches according to the evaluation value of each of the multiple branches.
  • the first processing module 302 is configured to select the branch with the highest evaluation value among the multiple branches.
  • In some embodiments, the first processing module 302 is further configured to: in response to performing pixel tracking on the selected branch and determining that the preset branch tracking stop condition is met, reselect, for an intersection at which pixel tracking has not been completed, a branch on which pixel tracking has not been performed, and perform pixel tracking on the reselected branch, where the intersection at which pixel tracking has not been completed has a branch on which pixel tracking has not been performed; and in response to there being no intersection at which pixel tracking has not been completed, determine that pixel tracking of each branch of each intersection is completed.
  • In some embodiments, the first processing module 302 is configured to: obtain an evaluation value of each branch on which pixel tracking has not been performed, based on the intersection at which pixel tracking has not been completed and the pixels of each such branch of that intersection, combined with the preset true value of the target to be tracked; and select one branch from the branches on which pixel tracking has not been performed according to their evaluation values.
  • the first processing module 302 is configured to select the branch with the highest evaluation value among the branches without pixel tracking.
  • the preset branch tracking stop condition includes at least one of the following:
  • the next pixel to be tracked is at the end of the predetermined target to be tracked
  • the spatial entropy value of the next pixel to be tracked is greater than the preset spatial entropy value
  • the angle of the tracking route obtained in each of N consecutive tracking operations is greater than a set angle threshold, where the angle of the tracking route obtained each time indicates the included angle between the tracking routes obtained in two adjacent tracking operations, and the tracking route obtained each time indicates the connecting line between the pixels obtained in two adjacent tracking operations; N is an integer greater than or equal to 2.
  • the end of the target to be tracked can be pre-marked.
  • When the next pixel obtained by tracking is at a pre-marked end of the target to be tracked, it means that the corresponding branch no longer needs pixel tracking, and pixel tracking of the corresponding branch can be stopped at this time, which can improve the accuracy of pixel tracking; the spatial entropy of a pixel can indicate the instability of the pixel: the higher the spatial entropy of the pixel, the higher its instability, and the less appropriate it is to continue pixel tracking.
  • In some embodiments, the first processing module 302 is configured to: select, from the at least one pixel to be selected, the pixel with the highest evaluation value; and determine the selected pixel with the highest evaluation value as the next pixel of the current pixel.
  • the target to be tracked is a vascular tree.
  • Both the first acquisition module 301 and the first processing module 302 can be implemented by a processor located in an electronic device.
  • the processor is at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
  • an embodiment of the present application also proposes a neural network training device.
  • FIG. 4 is a schematic diagram of the composition structure of a neural network training device according to an embodiment of the application. As shown in FIG. 4, the device may include a second acquisition module 401, a second processing module 402, an adjustment module 403, and a third processing module 404. ,
  • the second obtaining module 401 is configured to obtain sample images
  • the second processing module 402 is configured to input the sample image into the initial neural network, and use the initial neural network to perform the following steps: determine at least one pixel to be selected on the target to be tracked based on the current pixel on the target to be tracked of the sample image; obtain an evaluation value of the at least one pixel to be selected based on the current pixel and the at least one pixel to be selected, combined with the preset true value of the target to be tracked; and track the current pixel according to the evaluation value of the at least one pixel to be selected to obtain the next pixel of the current pixel;
  • the adjustment module 403 is configured to adjust the network parameter value of the initial neural network according to each pixel point obtained by tracking and a preset true value of the target to be tracked;
  • the third processing module 404 is configured to repeatedly execute the steps of acquiring the sample image, processing the sample image using the initial neural network, and adjusting the network parameter value of the initial neural network, until each pixel obtained by the initial neural network after the network parameter value adjustment meets the preset accuracy requirement, to obtain the trained neural network.
  • In actual applications, the second acquisition module 401, the second processing module 402, the adjustment module 403, and the third processing module 404 can all be implemented by a processor located in an electronic device; the processor is at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
  • the functional modules in this embodiment may be integrated in one processing part, or each part may exist alone physically, or two or more parts may be integrated in one part.
  • the above-mentioned integrated part can be realized either in the form of hardware or in the form of software function modules.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer readable storage medium.
  • The technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment.
  • the aforementioned storage media include: U disk, mobile hard disk, read only memory (Read Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program codes.
  • the computer program instructions corresponding to an image processing method or a neural network training method in this embodiment can be stored on storage media such as optical disks, hard disks, and USB flash drives.
  • The embodiment of the present application also proposes a computer program, including computer-readable code, where when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing any image processing method or any neural network training method in the foregoing embodiments.
  • FIG. 5 shows an electronic device provided by an embodiment of the present application, which may include: a memory 501 and a processor 502; wherein,
  • the memory 501 is configured to store computer programs and data
  • the processor 502 is configured to execute a computer program stored in the memory to implement any image processing method or any neural network training method in the foregoing embodiments.
  • the aforementioned memory 501 may be a volatile memory, such as a RAM; or a non-volatile memory, such as a ROM, a flash memory, a hard disk drive (Hard Disk Drive, HDD), or a solid-state drive (Solid-State Drive, SSD); or a combination of the foregoing types of memories, and provides instructions and data to the processor 502.
  • the aforementioned processor 502 may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor. It is understandable that, for different augmented reality cloud platforms, the electronic devices used to implement the above-mentioned processor functions may also be other devices, which is not specifically limited in the embodiments of the present application.
  • the functions or modules contained in the apparatus provided in the embodiments of the present application can be used to execute the methods described in the above method embodiments.
  • the technical solution of this application, in essence, or the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of the present application.
  • the embodiment of the application proposes an image processing and neural network training method, device, electronic device, and computer storage medium.
  • the image processing method includes: acquiring a to-be-processed image; determining, based on the current pixel on the to-be-tracked target of the to-be-processed image, at least one to-be-selected pixel on the to-be-tracked target; obtaining an evaluation value of the at least one to-be-selected pixel based on the current pixel and the at least one to-be-selected pixel, in combination with a preset true value of the to-be-tracked target; and tracking the current pixel according to the evaluation value of the at least one to-be-selected pixel to obtain the next pixel of the current pixel.
  • the next pixel can be determined from the current pixel according to the evaluation value of the pixel to be selected, that is, the pixel tracking and extraction of the target to be tracked can be accurately realized .

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

本申请实施例提出了一种图像处理及神经网络训练方法、装置、电子设备和计算机存储介质,图像处理方法包括:获取待处理图像;基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标的真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到当前的像素点的下一像素点。

Description

图像处理及神经网络训练方法、装置、设备、介质和程序
相关申请的交叉引用
本申请基于申请号为201911050567.9、申请日为2019年10月31日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及影像分析技术,涉及但不限于一种图像处理及神经网络训练方法、装置、电子设备、计算机存储介质和计算机程序。
背景技术
在相关技术中，针对待追踪目标如血管树，进行像素点的提取有助于对待追踪目标进行进一步研究，例如，针对心脏冠脉、颅脑血管等较为复杂的血管，如何进行血管图像像素点的提取，逐渐成为研究的热点；然而，在相关技术中，如何实现对待追踪目标的像素点追踪和提取，是亟待解决的技术问题。
发明内容
本申请实施例期望提供一种图像处理及神经网络训练方法、装置、电子设备、计算机存储介质和计算机程序。
本申请实施例提供了一种图像处理方法,所述方法包括:
获取待处理图像;
基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;
基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标的真值,得到所述至少一个待选的像素点的评价值;
根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点。
可以看出,本申请实施例中,针对待追踪目标,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对待追踪目标的像素点追踪和提取。
在本申请的一些实施例中，上述图像处理方法还包括：在基于所述待处理图像的待追踪目标上当前的像素点，确定出所述待追踪目标上至少一个待选的像素点之前，判断所述当前的像素点是否位于所述待追踪目标上多个分支之间的交叉点，若是，则选择所述多个分支中的一个支路，从选择的所述支路上的像素中选择所述待选的像素点。
可以看出,通过判断当前像素点是否位于待追踪目标上各分支之间的交叉点,可以实现各个支路的像素点追踪,也就是说,在待追踪目标具有分支时,本申请实施例可以实现对待追踪目标的支路的像素点追踪。
在本申请的一些实施例中,所述选择所述多个分支中的一个支路,包括:
基于所述当前的像素点和所述多个支路的像素点，结合所述预设的待追踪目标的真值，得到所述多个支路中每个支路的评价值；
根据所述多个支路中每个支路的评价值，从所述多个支路中选择一个支路。
可以看出,本申请实施例中,针对待追踪目标的交叉点,可以根据多个支路的评价值,从多个支路中选择一个支路,即,能够准确且合理地选择交叉点的支路。
在本申请的一些实施例中，所述根据所述多个支路中每个支路的评价值，从所述多个支路中选择一个支路，包括：
在所述多个支路中,选择评价值最高的一个支路。
可以看出,选择的支路为评价值最高的支路,而支路的评价值是根据待追踪目标的真值得出的,因而,选择的支路更加准确。
在本申请的一些实施例中,上述图像处理方法还包括:
响应于对选择的支路的像素点进行追踪,且确定满足预设的支路追踪停止条件的情况,针对未完成像素点追踪的交叉点,重新选择一个未进行像素点追踪的支路,对选择的支路进行像素点追踪;所述未完成像素点追踪的交叉点具有未进行像素点追踪的支路;
响应于不存在未完成像素点追踪的交叉点的情况,确定各个交叉点的各个支路的像素点追踪完成。
可以看出,通过对各个交叉点的各个支路进行像素点追踪,可以实现对整个待追踪目标的像素点追踪任务。
在本申请的一些实施例中,所述重新选择一个未进行像素点追踪的支路,包括:
基于所述未完成像素点追踪的交叉点和所述交叉点的各个未进行像素点追踪的支路的像素点,结合预设的待追踪目标的真值,得到所述各个未进行像素点追踪的支路的评价值;
根据所述各个未进行像素点追踪的支路的评价值,从所述各个未进行像素点追踪的支路中选择一个支路。
可以看出，本申请实施例中，针对待追踪目标的未完成像素点追踪的交叉点，可以根据未进行像素点追踪的各个支路的评价值，从各个未进行像素点追踪的支路中选择一个支路，即，能够准确且合理地选择交叉点的支路。
在本申请的一些实施例中,所述根据所述各个未进行像素点追踪的支路的评价值,从所述各个未进行像素点追踪的支路中选择一个支路,包括:
在所述各个未进行像素点追踪的支路中,选择评价值最高的一个支路。
可以看出,选择的支路为未进行像素点追踪的各个支路中评价值最高的支路,而支路的评价值是根据待追踪目标的真值得出的,因而,选择的支路更加准确。
在本申请的一些实施例中,所述预设的支路追踪停止条件包括以下至少之一:
追踪到的下一个像素点处于预先确定的待追踪目标的末端;
追踪到的下一个像素点的空间熵值大于预设空间熵值;
连续N次得到的追踪路线夹角大于设定角度阈值,每次得到追踪路线夹角表示相邻两次得到的追踪路线的夹角,每次得到的追踪路线表示相邻两次追踪到的像素点之间的连线;N为大于或等于2的整数。
待追踪目标的末端可以预先标注,在追踪到的下一个像素点处于预先确定的待追踪目标的末端时,说明对应的支路无需再进行像素点追踪,此时可以停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性;像素点的空间熵值可以表示像素点的不稳定性,像素点的空间熵值越高,说明像素点的不稳定性越高,在当前支路不合适继续进行像素点追踪,此时,可以跳转到交叉点继续进行像素点追踪,可以提高像素点追踪的准确性;连续N次得到的追踪路线夹角大于设定角度阈值时,说明最近几次得到的追踪路线的振荡幅度较大,因而,追踪到的像素点的准确性较低,此时,通过停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性。
在本申请的一些实施例中,所述根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点,包括:
从所述至少一个待选的像素点,选择评价值最高的像素点;将选择的所述评价值最高的像素点确定为所述当前像素点的下一个像素点。
可以看出,下一个像素点为待选的像素点中评价值最高的像素点,而像素点的评价 值是根据待追踪目标的真值得出的,因而,得出的下一个像素点更加准确。
在本申请的一些实施例中,所述待追踪目标为血管树。
可以看出,本申请实施例中,针对血管树,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对血管树的像素点追踪和提取。
本申请实施例还提供了一种神经网络训练方法,包括:
获取样本图像;
将所述样本图像输入至初始神经网络，利用所述初始神经网络执行以下步骤：基于所述样本图像的待追踪目标上当前的像素点，确定出所述待追踪目标上至少一个待选的像素点；基于所述当前的像素点与所述至少一个待选的像素点，结合预设的待追踪目标的真值，得到所述至少一个待选的像素点的评价值；根据所述至少一个待选的像素点的评价值，对所述当前的像素点进行追踪，得到所述当前的像素点的下一像素点；
根据追踪得到的各个像素点和预设的待追踪目标的真值,调整所述初始神经网络的网络参数值;
重复执行上述步骤,直至基于网络参数值调整后的初始神经网络得到的各个像素点满足预设的精度需求,得到训练完成的神经网络。
可以看出,在本申请实施例中,在对神经网络进行训练时,针对待追踪目标,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对待追踪目标的像素点追踪和提取,如此,可以使得训练完成的神经网络准确地实现对待追踪目标的像素点追踪和提取。
本申请实施例还提供了一种图像处理装置,所述装置包括:第一获取模块和第一处理模块,其中,
第一获取模块,配置为获取待处理图像;
第一处理模块,配置为基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点。
可以看出,本申请实施例中,针对待追踪目标,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对待追踪目标的像素点追踪和提取。
在本申请的一些实施例中,所述第一处理模块,还配置为在基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点之前,判断所述当前的像素点是否位于所述待追踪目标上多个分支之间的交叉点,若是,则选择所述多个分支中的一个支路,从选择的所述支路上的像素中选择所述待选的像素点。
可以看出,通过判断当前像素点是否位于待追踪目标上各分支之间的交叉点,可以实现各个支路的像素点追踪,也就是说,在待追踪目标具有分支时,本申请实施例可以实现对待追踪目标的支路的像素点追踪。
在本申请的一些实施例中，所述第一处理模块，配置为基于所述当前的像素点和所述多个支路的像素点，结合所述预设的待追踪目标的真值，得到所述多个支路中每个支路的评价值；根据所述多个支路中每个支路的评价值，从所述多个支路中选择一个支路。
可以看出,本申请实施例中,针对待追踪目标的交叉点,可以根据多个支路的评价值,从多个支路中选择一个支路,即,能够准确且合理地选择交叉点的支路。
在本申请的一些实施例中,所述第一处理模块,配置为在所述多个支路中,选择评价值最高的一个支路。
可以看出,选择的支路为评价值最高的支路,而支路的评价值是根据待追踪目标的 真值得出的,因而,选择的支路更加准确。
在本申请的一些实施例中,所述第一处理模块,还配置为:
响应于对选择的支路的像素点进行追踪,且确定满足预设的支路追踪停止条件的情况,针对未完成像素点追踪的交叉点,重新选择一个未进行像素点追踪的支路;对选择的支路进行像素点追踪;所述未完成像素点追踪的交叉点具有未进行像素点追踪的支路;
响应于不存在未完成像素点追踪的交叉点的情况,确定各个交叉点的各个支路的像素点追踪完成。
可以看出,通过对各个交叉点的各个支路进行像素点追踪,可以实现对整个待追踪目标的像素点追踪任务。
在本申请的一些实施例中,所述第一处理模块,配置为基于所述未完成像素点追踪的交叉点和所述交叉点的各个未进行像素点追踪的支路的像素点,结合预设的待追踪目标的真值,得到所述各个未进行像素点追踪的支路的评价值;根据所述各个未进行像素点追踪的支路的评价值,从所述各个未进行像素点追踪的支路中选择一个支路。
可以看出，本申请实施例中，针对待追踪目标的未完成像素点追踪的交叉点，可以根据未进行像素点追踪的各个支路的评价值，从各个未进行像素点追踪的支路中选择一个支路，即，能够准确且合理地选择交叉点的支路。
在本申请的一些实施例中,所述第一处理模块,配置为在所述各个未进行像素点追踪的支路中,选择评价值最高的一个支路。
可以看出,选择的支路为未进行像素点追踪的各个支路中评价值最高的支路,而支路的评价值是根据待追踪目标的真值得出的,因而,选择的支路更加准确。
在本申请的一些实施例中,所述预设的支路追踪停止条件包括以下至少之一:
追踪到的下一个像素点处于预先确定的待追踪目标的末端;
追踪到的下一个像素点的空间熵值大于预设空间熵值;
连续N次得到的追踪路线夹角大于设定角度阈值,每次得到追踪路线夹角表示相邻两次得到的追踪路线的夹角,每次得到的追踪路线表示相邻两次追踪到的像素点之间的连线;N为大于或等于2的整数。
待追踪目标的末端可以预先标注,在追踪到的下一个像素点处于预先确定的待追踪目标的末端时,说明对应的支路无需再进行像素点追踪,此时可以停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性;像素点的空间熵值可以表示像素点的不稳定性,像素点的空间熵值越高,说明像素点的不稳定性越高,在当前支路不合适继续进行像素点追踪,此时,可以跳转到交叉点继续进行像素点追踪,可以提高像素点追踪的准确性;连续N次得到的追踪路线夹角大于设定角度阈值时,说明最近几次得到的追踪路线的振荡幅度较大,因而,追踪到的像素点的准确性较低,此时,通过停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性。
在本申请的一些实施例中,所述第一处理模块,配置为从所述至少一个待选的像素点,选择评价值最高的像素点;将选择的所述评价值最高的像素点确定为所述当前像素点的下一个像素点。
可以看出,下一个像素点为待选的像素点中评价值最高的像素点,而像素点的评价值是根据待追踪目标的真值得出的,因而,得出的下一个像素点更加准确。
在本申请的一些实施例中,所述待追踪目标为血管树。
可以看出,本申请实施例中,针对血管树,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对血管树的像素点追踪和提取。
本申请实施例还提供了一种神经网络训练装置,所述装置包括:第二获取模块、第二处理模块、调整模块和第三处理模块,其中,
第二获取模块,配置为获取样本图像;
第二处理模块，配置为将所述样本图像输入至未经训练的初始神经网络，利用所述初始神经网络执行以下步骤：基于所述样本图像的待追踪目标上当前的像素点，确定出所述待追踪目标上至少一个待选的像素点；基于所述当前的像素点与所述至少一个待选的像素点，结合预设的待追踪目标的真值，得到所述至少一个待选的像素点的评价值；根据所述至少一个待选的像素点的评价值，对所述当前的像素点进行追踪，得到所述当前的像素点的下一像素点；
调整模块,配置为根据追踪得到的各个像素点和预设的待追踪目标的真值,调整所述初始神经网络的网络参数值;
第三处理模块,配置为重复执行上述获取所述样本图像、利用所述初始神经网络对所述样本图像进行处理、以及调整所述初始神经网络的网络参数值的步骤,直至基于网络参数值调整后的初始神经网络得到的各个像素点满足预设的精度需求,得到训练完成的神经网络。
本申请实施例还提供了一种电子设备,包括处理器和配置为存储能够在处理器上运行的计算机程序的存储器;其中,
所述处理器配置为运行所述计算机程序时,执行上述任意一种图像处理方法或上述任意一种神经网络训练方法。
本申请实施例还提供了一种计算机存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述任意一种图像处理方法或上述任意一种神经网络训练方法。
本申请实施例还提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述任意一种图像处理方法或上述任意一种神经网络训练方法。
本申请实施例提出的图像处理及神经网络训练方法、装置、电子设备、计算机存储介质和计算机程序中,获取待处理图像;基于所述待处理图像的血管树上当前的像素点,确定出所述血管树上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的血管树真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到当前的像素点的下一像素点。如此,本申请实施例中,针对待追踪目标,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对待追踪目标的像素点追踪和提取。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本申请。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本申请的实施例,并与说明书一起用于说明本申请实施例的技术方案。
图1A为本申请实施例的图像处理方法的流程图;
图1B为本申请实施例的一个应用场景的示意图;
图2为本申请实施例的神经网络训练方法的流程图;
图3为本申请实施例的图像处理装置的组成结构示意图;
图4为本申请实施例的神经网络训练装置的组成结构示意图;
图5为本申请实施例的电子设备的结构示意图。
具体实施方式
以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所提供的实施例仅仅用以解释本申请,并不用于限定本申请。另外,以下所提供的实施例是用于 实施本申请的部分实施例,而非提供实施本申请的全部实施例,在不冲突的情况下,本申请实施例记载的技术方案可以任意组合的方式实施。
需要说明的是,在本申请实施例中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的方法或者装置不仅包括所明确记载的要素,而且还包括没有明确列出的其他要素,或者是还包括为实施方法或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个......”限定的要素,并不排除在包括该要素的方法或者装置中还存在另外的相关要素(例如方法中的步骤或者装置中的单元,例如的单元可以是部分电路、部分处理器、部分程序或软件等等)。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
例如,本申请实施例提供的图像处理及神经网络训练方法包含了一系列的步骤,但是本申请实施例提供的图像处理及神经网络训练方法不限于所记载的步骤,同样地,本申请实施例提供的图像处理及神经网络训练装置包括了一系列模块,但是本申请实施例提供的装置不限于包括所明确记载的模块,还可以包括为获取相关信息、或基于信息进行处理时所需要设置的模块。
本申请实施例可以应用于终端和服务器组成的计算机系统中，并可以与众多其它通用或专用计算系统环境或配置一起操作。这里，终端可以是瘦客户机、厚客户机、手持或膝上设备、基于微处理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统，等等，服务器可以是服务器计算机系统、小型计算机系统、大型计算机系统和包括上述任何系统的分布式云计算技术环境，等等。
终端、服务器等电子设备可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑、数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或远程计算系统存储介质上。
在相关技术中,随着深度学习以及强化学习研究的深入及推广,两者结合产生的深度强化学习(Deep Reinforcement Learning,DRL)方法近些年在人工智能、机器人领域等取得重要成果;示例性地,可以采用DRL方法对血管中心线进行提取,具体地,可以将血管中心线提取任务构造成顺序决策模型从而使用DRL模型进行训练学习,但是上述血管中心线的提取方法仅限于单根血管的简单结构模型,无法处理类似于心脏冠脉、颅脑血管等更为复杂的树状结构。
针对上述技术问题,在本申请的一些实施例中,提出了一种图像处理方法。
图1A为本申请实施例的图像处理方法的流程图,如图1A所示,该流程可以包括:
步骤101:获取待处理图像。
本申请实施例中,待处理图像可以是包括待追踪目标的图像,待追踪目标可以包括多个分支。在本申请的一些实施例中,待追踪目标为血管树,血管树表示具有树状结构的血管,树状血管至少包含一个分叉点;在本申请的一些实施例中,树状血管可以是心脏冠状动脉、颅脑血管等;待处理图像可以是三维医学影像或其它含有树状血管的图像,在本申请的一些实施例中,可以基于心脏冠状动脉造影,得到包括心脏冠状动脉的三维图像。
步骤102:基于待处理图像的待追踪目标上当前的像素点,确定出待追踪目标上至少 一个待选的像素点。
这里,待追踪目标上当前的像素点可以是待追踪目标的任意一个像素点,在本申请的一些实施例中,在待追踪目标为血管树的情况下,血管树上当前的像素点可以表示血管树的任意一点,在本申请的一些实施例中,血管树上当前的像素点可以是血管树中心线上的像素点或血管树上的其它像素点,本申请实施例对此并不进行限定。
本申请实施例中，待追踪目标上至少一个待选的像素点可以是与当前的像素点相邻的像素点，因而，在确定待处理图像的待追踪目标上当前的像素点后，可以根据像素点的位置关系，确定出待追踪目标上至少一个待选的像素点。
在本申请的一些实施例中，可以根据预先获取的待追踪目标的结构信息，确定当前像素点所在的局部的像素点连线趋势，然后，可以结合待追踪目标的具体形状和尺寸信息，计算出至少一个待选的像素点。
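The embodiments above do not fix how the to-be-selected pixels are enumerated. As a non-authoritative illustration, the following Python sketch simply takes the 26-connected voxel neighbours of the current pixel that lie inside the image bounds; the neighbourhood choice, function name and coordinate order are assumptions, not part of the patent:

```python
import itertools

def candidate_pixels(current, shape):
    """Return the 26-connected voxel neighbours of `current` (z, y, x) that lie inside
    a volume of size `shape` -- an assumed, minimal way to enumerate the
    'to-be-selected' pixels around the current pixel."""
    z, y, x = current
    cands = []
    for dz, dy, dx in itertools.product((-1, 0, 1), repeat=3):
        if (dz, dy, dx) == (0, 0, 0):
            continue
        p = (z + dz, y + dy, x + dx)
        if all(0 <= c < s for c, s in zip(p, shape)):
            cands.append(p)
    return cands
```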
步骤103:基于当前的像素点与至少一个待选的像素点,结合预设的待追踪目标的真值,得到至少一个待选的像素点的评价值。
这里,预设的待追踪目标的真值可以表示预先标注的待追踪目标上的像素点连线,像素点连线可以表示待追踪目标的路径结构信息。在实际应用中,可以针对待追踪目标,通过人工手动的方式标注出表示待追踪目标路径的像素点连线;在本申请的一些实施例中,在待追踪目标为血管树时,可以标注出血管树的中心线,将标注出的血管树的中心线作为血管树的真值;需要说明的是,上述仅仅是对待追踪目标的真值进行了示例性说明,本申请实施例并不局限于此。
本申请实施例中,待选的像素点的评价值可以表示待选的像素点作为当前的像素点的下一个像素点的适合程度,在实际实施时,可以根据预设的待追踪目标的真值,判断每个待选的像素点作为下一个像素点的适合程度,待选的像素点作为下一个像素点的适合程度越高,则待选的像素点的评价值越高;在本申请的一些实施例中,可以确定待选的像素点作为下一个像素点时,当前像素点到下一个像素点的连线与预设的待追踪目标的真值的匹配程度,匹配程度越高,则待选的像素点的评价值越高。
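The "evaluation value" above is only characterised as how well a candidate suits being the next pixel, judged against the preset true value. The sketch below is one minimal instantiation, assuming the evaluation is the negative distance from the candidate to the annotated path, so steps that stay on the ground-truth line score highest; in the DQN-based embodiments described later, this value would instead be predicted by a learned network:

```python
import numpy as np

def evaluation_value(candidate, gt_centerline):
    """Evaluation value of a candidate pixel: negative Euclidean distance to the
    nearest point of the preset ground-truth path (an assumed instantiation of the
    'degree of match' described above); higher is better."""
    gt = np.asarray(gt_centerline, dtype=float)   # (K, 3) annotated path points
    c = np.asarray(candidate, dtype=float)        # (3,) candidate pixel coordinates
    return -float(np.linalg.norm(gt - c, axis=1).min())
```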
步骤104:根据至少一个待选的像素点的评价值,对当前的像素点进行追踪,得到下一像素点。
对于本步骤的实现方式,示例性地,可以从至少一个待选的像素点,选择评价值最高的像素点;将选择的评价值最高的像素点确定为下一个像素点。
可以看出,下一个像素点为待选的像素点中评价值最高的像素点,而像素点的评价值是根据待追踪目标的真值得出的,因而,得出的下一个像素点更加准确。
在实际应用中,当前像素点是不断变化的,在本申请的一些实施例中,可以从待追踪目标的起始点开始,对像素点进行追踪;即,将待追踪目标的起始点作为当前像素点,通过像素点追踪得到下一个像素点;然后将追踪得到的像素点作为当前像素点继续进行像素点追踪;如此,通过重复执行步骤102至步骤104,可以提取出待追踪目标的像素点连线。
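Combining the two sketches above, a hedged sketch of repeating steps 102 to 104 from the start point until a stop test fires; `should_stop` is left abstract here and is illustrated later together with the branch-stop conditions:

```python
def track_path(start, shape, gt_centerline, should_stop, max_steps=10000):
    """Greedy sketch of the step 102-104 loop: from `start`, repeatedly evaluate the
    candidate pixels and move to the best one, until `should_stop(path)` is true.
    Reuses candidate_pixels / evaluation_value from the sketches above."""
    path = [tuple(start)]
    visited = {tuple(start)}
    for _ in range(max_steps):
        if should_stop(path):
            break
        cands = [c for c in candidate_pixels(path[-1], shape) if c not in visited]
        if not cands:
            break
        best = max(cands, key=lambda c: evaluation_value(c, gt_centerline))
        path.append(best)
        visited.add(best)
    return path
```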
本申请实施例中，待追踪目标的起始点可以预先确定，待追踪目标的起始点可以是待追踪目标的入口的像素点或待追踪目标的其它像素点；在本申请的一些实施例中，在待追踪目标为血管树时，血管树的起始点可以是血管树入口的像素点或其它像素点，在一个具体的示例中，血管树为心脏冠状动脉时，血管树的起始点可以是心脏冠状动脉的入口的像素点。
在本申请的一些实施例中,在待追踪目标为血管树,且血管树的起始点可以是血管树入口的中心点时,通过上述像素点追踪过程,可以提取血管树的中心线。
在实际应用中,可以根据用户输入的待追踪目标的起始点的位置信息,确定待追踪目标的起始点,也可以利用训练完成的确定待追踪目标的起始点的神经网络,对待处理 图像进行处理,得到待追踪目标的起始点的位置。本申请实施例中,并不对用于确定待追踪目标的起始点的神经网络的网络结构进行限定。
在实际应用中,步骤101至步骤104可以基于图像处理装置的处理器实现,上述图像处理装置可以是用户设备(User Equipment,UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字处理(Personal Digital Assistant,PDA)、手持设备、计算设备、车载设备、可穿戴设备等;上述处理器可以为特定用途集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理装置(Digital Signal Processing Device,DSPD)、可编程逻辑装置(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器中的至少一种。可以理解地,对于不同的电子设备,用于实现上述处理器功能的电子器件还可以为其它,本申请实施例不作具体限定。
可以看出,本申请实施例中,针对待追踪目标,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对待追踪目标的像素点追踪和提取。
在本申请的一些实施例中,在基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点之前,还可以判断当前的像素点是否位于待追踪目标上多个分支之间的交叉点,如果当前的像素点位于待追踪目标上多个分支之间的交叉点,则选择多个分支的一个支路,从选择的所述支路上的像素中选择待选的像素点,即,对选择的支路的像素点进行追踪,在本申请的一些实施例中,在选择多个分支的一个支路后,可以针对选择的支路,执行步骤102至步骤104,实现对选择的支路的像素点追踪。如果当前的像素点不位于待追踪目标上多个分支之间的交叉点,则直接执行步骤102至步骤104,确定出当前像素点的下一个像素点作为当前像素点。
在本申请的一些实施例中,可以基于二分类神经网络判断当前的像素点是否位于待追踪目标上多个分支之间的交叉点。本申请实施例中,并不对二分类神经网络的网络结构进行限定,只要二分类神经网络能够判断当前的像素点是否位于待追踪目标上多个分支之间的交叉点即可;例如,二分类神经网络的网络结构可以是卷积神经网络(Convolutional Neural Networks,CNN)等。
可以看出,通过判断当前的像素点是否位于待追踪目标上多个分支之间的交叉点,可以实现多个支路的像素点追踪,也就是说,在待追踪目标具有分支时,本申请实施例可以实现对待追踪目标的支路的像素点追踪。
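The binary classification network for deciding whether the current pixel is an intersection is only constrained to be something like a CNN. Below is a PyTorch sketch under the assumption that the input is a small 3-D patch centred on the current pixel; the patch size, channel counts and layer count are illustrative assumptions rather than the patent's architecture:

```python
import torch
import torch.nn as nn

class BifurcationClassifier(nn.Module):
    """Small 3-D CNN that scores whether a local patch is centred on an intersection."""
    def __init__(self, patch=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Flatten(),
            nn.Linear(16 * (patch // 4) ** 3, 32), nn.ReLU(),
            nn.Linear(32, 1),          # single logit: intersection vs. not
        )

    def forward(self, patch):          # patch: (B, 1, D, H, W)
        return torch.sigmoid(self.net(patch))
```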
可以理解地,在初始时,每个交叉点对应的各个分支均没有进行像素点追踪,因而,可以在各个分支的支路中,任意选择交叉点的一个支路。
对于选择多个分支的一个支路的实现方式,示例性地,可以基于当前的像素点和多个支路的像素点,结合预设的待追踪目标的真值,得到多个支路中每个支路的评价值;根据多个支路中每个支路的评价值,从多个支路中选择一个支路。
在实际实施时,可以在上述多个支路中,各自确定出待选的下一个像素点,进而,可以将下一个像素点的评价值作为对应的支路的评价值。
可以看出,本申请实施例中,针对待追踪目标的交叉点,可以根据多个支路的评价值,从多个支路中选择一个支路,即,能够准确且合理地选择交叉点的支路。
对于根据所述多个支路中每个支路的评价值,从所述多个支路中选择一个支路的实现方式,示例性地,可以在上述多个支路中,选择评价值最高的一个支路。
可以看出,选择的支路为评价值最高的支路,而支路的评价值是根据待追踪目标的真值得出的,因而,选择的支路更加准确。
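As a non-authoritative illustration of branch selection at an intersection, the sketch below scores each branch by the evaluation value of its best next-pixel candidate (reusing `evaluation_value` from the earlier sketch) and picks the highest-scoring branch; how the per-branch candidates are grouped is assumed:

```python
def select_branch(branch_candidates, gt_centerline):
    """branch_candidates: {branch_id: [candidate pixels adjacent to the intersection]}.
    Scores each branch by its best candidate's evaluation value and returns the id of
    the highest-scoring branch (assumed scoring rule)."""
    def branch_score(cands):
        return max(evaluation_value(c, gt_centerline) for c in cands)
    return max(branch_candidates, key=lambda b: branch_score(branch_candidates[b]))
```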
在本申请的一些实施例中,响应于对选择的支路的像素点进行追踪,且确定满足预 设的支路追踪停止条件的情况,针对未完成像素点追踪的交叉点,重新选择一个未进行像素点追踪的支路;对选择的支路进行像素点追踪;所述未完成像素点追踪的交叉点具有未进行像素点追踪的支路;响应于不存在未完成像素点追踪的交叉点的情况,确定各个交叉点的各个支路的像素点追踪完成。
在实际实施时,在确定当前的像素点位于所述待追踪目标上各分支之间的交叉点时,可以将交叉点加入跳转列表,用于实现待追踪目标的像素点追踪过程的像素点跳转。
在本申请的一些实施例中,在对选择的支路的像素点进行追踪,且确定满足预设的支路追踪停止条件的情况下,可以在跳转列表中选择一个交叉点,然后判断选择的交叉点是否存在对应的未进行像素点追踪的支路,如果存在,则针对该选择的交叉点,重新选择一个未进行像素点追踪的支路,对选择的支路进行像素点追踪;如果不存在,则可以将该交叉点从跳转列表中删除。
在跳转列表中不存在交叉点时,说明不存在未完成像素点追踪的交叉点,即各个交叉点的各个支路的像素点追踪完成。
可以看出,通过对各个交叉点的各个支路进行像素点追踪,可以实现对整个待追踪目标的像素点追踪任务。
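A sketch of the jump-list bookkeeping described above: detected intersections are pushed onto a list together with their untracked branches, the tracker jumps back whenever a branch stops, and the whole target is finished once the list is empty. The helpers `is_intersection`, `branches_at` and `should_stop` are hypothetical, and branch ordering by evaluation value (previous sketch) is omitted for brevity:

```python
def track_target(start, shape, gt_centerline, is_intersection, branches_at, should_stop):
    """Branch-aware tracking with a jump list: track the current branch until a stop
    condition fires, then jump back to an intersection that still has untracked branches.
    `branches_at(p)` is assumed to return seed pixels of branches not yet tracked at p."""
    paths, jump_list, seeds = [], [], [tuple(start)]
    while seeds or jump_list:
        if not seeds:                                  # jump to a pending intersection
            point, untracked = jump_list.pop()
            seeds.append(untracked.pop())
            if untracked:                              # intersection still has untracked branches
                jump_list.append((point, untracked))
            continue
        path = track_path(seeds.pop(), shape, gt_centerline, should_stop)
        paths.append(path)
        for p in path:                                 # record newly found intersections
            if is_intersection(p):
                branches = list(branches_at(p))
                if branches:
                    jump_list.append((p, branches))
    return paths                                       # jump list empty: all branches tracked
```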
对于重新选择一个未进行像素点追踪的支路的实现方式,示例性地,可以基于未完成像素点追踪的交叉点和交叉点的各个未进行像素点追踪的支路的像素点,结合预设的待追踪目标的真值,得到各个未进行像素点追踪的支路的评价值;根据各个未进行像素点追踪的支路的评价值,从各个未进行像素点追踪的支路中选择一个支路。
在实际实施时,可以在交叉点对应的各个未进行像素点追踪的支路中,各自确定出待选的下一个像素点,进而,可以将下一个像素点的评价值作为对应的支路的评价值。
可以看出，本申请实施例中，针对待追踪目标的未完成像素点追踪的交叉点，可以根据未进行像素点追踪的各个支路的评价值，从各个未进行像素点追踪的支路中选择一个支路，即，能够准确且合理地选择交叉点的支路。
对于根据各个未进行像素点追踪的支路的评价值,从各个未进行像素点追踪的支路中选择一个支路的实现方式,示例性地,可以在各个未进行像素点追踪的支路中,选择评价值最高的一个支路。
可以看出,选择的支路为未进行像素点追踪的各个支路中评价值最高的支路,而支路的评价值是根据待追踪目标的真值得出的,因而,选择的支路更加准确。
在本申请的一些实施例中,预设的支路追踪停止条件可以包括以下至少之一:
追踪到的下一个像素点处于预先确定的待追踪目标的末端;
追踪到的下一个像素点的空间熵值大于预设空间熵值;
或者,连续N次得到的追踪路线夹角大于设定角度阈值,每次得到追踪路线夹角表示相邻两次得到的追踪路线的夹角,每次得到的追踪路线表示相邻两次追踪到的像素点之间的连线;N为大于或等于2的整数。
这里,N为第一神经网络的超参数;设定角度阈值可以根据实际应用需求预先设置,例如,设定角度阈值大于10度。待追踪目标的末端可以预先标注,在追踪到的下一个像素点处于预先确定的待追踪目标的末端时,说明对应的支路无需再进行像素点追踪,此时可以停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性;像素点的空间熵值可以表示像素点的不稳定性,像素点的空间熵值越高,说明像素点的不稳定性越高,在当前支路不合适继续进行像素点追踪,此时,可以跳转到交叉点继续进行像素点追踪,可以提高像素点追踪的准确性;连续N次得到的追踪路线夹角大于设定角度阈值时,说明最近几次得到的追踪路线的振荡幅度较大,因而,追踪到的像素点的准确性较低,此时,通过停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性。
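A sketch of the three branch-stop conditions, assuming the target's end points are given as a pre-annotated set, the spatial entropy of a pixel is supplied by some scoring function `entropy_of`, and each tracking-route angle is the angle between two adjacent steps; the threshold values shown are illustrative only:

```python
import numpy as np

def step_angle(p0, p1, p2):
    """Angle (degrees) between the tracking routes p0->p1 and p1->p2."""
    u = np.asarray(p1, float) - np.asarray(p0, float)
    v = np.asarray(p2, float) - np.asarray(p1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def should_stop(path, end_points, entropy_of, entropy_thresh=2.0, angle_thresh=10.0, n=3):
    """True if the branch should stop: the newest pixel is an annotated end point, its
    spatial entropy exceeds the preset threshold, or the last `n` route angles all
    exceed the set angle threshold (thresholds are illustrative assumptions)."""
    last = path[-1]
    if tuple(last) in end_points:
        return True
    if entropy_of(last) > entropy_thresh:
        return True
    if len(path) >= n + 2:
        angles = [step_angle(path[i - 2], path[i - 1], path[i])
                  for i in range(len(path) - n, len(path))]
        if all(a > angle_thresh for a in angles):
            return True
    return False
```

In the tracking sketches above, the extra arguments would be bound in advance, for example with `functools.partial`, so that the loop only needs to call `should_stop(path)`.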
本申请实施例中,可以实现对待追踪目标的主路和支路的追踪,待追踪目标的主路 可以表示从待追踪目标的起始点至追踪到的第一个交叉点的路线;在对待追踪目标的主路或每个支路进行像素点追踪的情况下,还可以采用DRL方法进行像素点追踪。
在本申请的一些实施例中,可以利用具有DQN框架的神经网络对待追踪目标的主路或每个支路进行像素点追踪;例如,在DQN框架使用的算法可以包括以下至少一种:Double-DQN、Dueling-DQN、prioritized memory replay、noisy layer;在确定下一个像素点后,可以根据下一个像素点的评价值,更新具有DQN框架的神经网络的网络参数。
本申请实施例中,并不对具有DQN框架的神经网络的网络结构进行限定,例如,具有DQN框架的神经网络包括用于特征降采样的三层卷积层和两层全连接层。
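A PyTorch sketch of such a Q-network with three convolution layers for feature down-sampling and two fully connected layers; the patch size, channel widths and the 26-way output (one Q-value, i.e. evaluation value, per candidate move) are assumptions, since the embodiments do not give exact sizes:

```python
import torch
import torch.nn as nn

class TrackerQNet(nn.Module):
    """Q-network sketch: three conv layers for feature down-sampling, two fully
    connected layers, one Q-value per candidate move."""
    def __init__(self, n_actions=26, patch=19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        with torch.no_grad():   # infer the flattened feature size for the given patch
            n_flat = self.features(torch.zeros(1, 1, patch, patch, patch)).numel()
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_flat, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x):       # x: (B, 1, D, H, W) image patch around the current pixel
        return self.head(self.features(x))
```

For example, `TrackerQNet()(torch.randn(1, 1, 19, 19, 19))` returns a tensor of shape `(1, 26)`, one score per candidate direction.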
在本申请的一些实施例中,上述用于确定待追踪目标的起始点的神经网络、二分类神经网络或具有DQN框架的神经网络可以采用浅层神经网络或深层神经网络,在上述用于确定待追踪目标的起始点的神经网络、二分类神经网络或具有DQN框架的神经网络采用浅层神经网络的情况下,可以提高神经网络处理数据的速度和效率。
综上,可以看出,本申请实施例中,只需确定待追踪目标的起始点,便可以利用上述图像处理方法完成整个待追踪目标的像素点追踪任务;进一步地,在利用确定待追踪目标的起始点的神经网络确定待追踪目标的起始点的情况下,本申请实施例可以自动地针对获取的待处理图像,完成整个待追踪目标的像素点追踪任务。
在本申请的一些实施例中,在得到包含心脏冠状动脉的待处理图像后,根据上述图像处理方法,仅需要5秒便可以直接从待处理图像中提取出单个心脏冠状动脉的中心线,提取出的单个心脏冠状动脉的中心线的用途包括但不限于:血管命名、结构展示等。
图1B为本申请实施例的一个应用场景的示意图,如图1B所示,心脏冠脉的血管图21为上述待处理图像,这里,可以将心脏冠脉的血管图21输入至图像处理装置22中;在图像处理装置22中,通过前述实施例记载的图像处理方法进行处理,可以实现对心脏冠脉的血管图的像素点的追踪和提取。需要说明的是,图1B所示的场景仅仅是本申请实施例的一个示例性场景,本申请对具体的应用场景不作限制。
在前述记载的内容的基础上,本申请实施例还提出了一种神经网络训练方法,图2为本申请实施例的神经网络训练方法的流程图,如图2所示,该流程可以包括:
步骤201:获取样本图像。
本申请实施例中,样本图像可以是包括待追踪目标的图像。
步骤202:将样本图像输入至初始神经网络,利用初始神经网络执行以下步骤:基于样本图像的待追踪目标上当前的像素点,确定出待追踪目标上至少一个待选的像素点;基于当前的像素点与至少一个待选的像素点,结合预设的待追踪目标真值,得到至少一个待选的像素点的评价值;根据至少一个待选的像素点的评价值,对当前的像素点进行追踪,得到当前的像素点的下一像素点。
本申请实施例中,初始神经网络执行的步骤的实现方式已经在前述记载的内容中作出说明,这里不再赘述。
步骤203：根据追踪得到的各个像素点和预设的待追踪目标的真值，调整初始神经网络的网络参数值。
对于本步骤的实现方式，示例性地，可以根据追踪得到的各个像素点和预设的待追踪目标的真值，得出初始神经网络的损失；根据上述初始神经网络的损失，调整初始神经网络的网络参数值；在本申请的一些实施例中，以降低初始神经网络的损失为目标，调整初始神经网络的网络参数值。
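The embodiments do not prescribe the form of the loss; one simple, non-authoritative way to turn "tracked pixels versus preset true value" into a scalar is the mean distance from each tracked pixel to its nearest ground-truth path point:

```python
import numpy as np

def tracking_loss(tracked_pts, gt_centerline):
    """Mean distance from each tracked pixel to its nearest ground-truth path point;
    an assumed instantiation of the loss between tracked pixels and the preset true value."""
    tracked = np.asarray(tracked_pts, dtype=float)    # (M, 3) tracked pixels
    gt = np.asarray(gt_centerline, dtype=float)       # (K, 3) annotated path points
    d = np.linalg.norm(tracked[:, None, :] - gt[None, :, :], axis=-1)  # (M, K) pairwise distances
    return float(d.min(axis=1).mean())
```

In a DQN-style setup this quantity would more naturally act as a (negative) reward or as the accuracy check of step 204, while the parameter update itself would use a temporal-difference style loss.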
在实际应用中,可以在标注平台标注出待追踪目标的真值,以用于神经网络的训练。
步骤204:判断基于网络参数值调整后的初始神经网络得到的各个像素点是否满足预设的精度需求,如果否,则重新执行步骤201至步骤204;如果是,则执行步骤205。
本申请实施例中,预设的精度需求可以根据初始神经网络的损失确定;例如,预设 的精度需求可以是:初始神经网络的损失小于设定损失。在实际应用中,设定损失可以根据实际应用需求预先设置。
步骤205:将网络参数值调整后的初始神经网络作为训练完成的神经网络。
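A skeleton of steps 201 to 205 as a loop; `sample_loader`, `track_fn`, `update_fn` and `accuracy_fn` are hypothetical helpers (for example, `accuracy_fn` could be the `tracking_loss` sketched above), and the stopping threshold is illustrative:

```python
def train_tracker(net, sample_loader, track_fn, update_fn, accuracy_fn,
                  max_rounds=1000, tol=1.0):
    """Repeat (sample -> track -> adjust parameters) until the preset accuracy
    requirement is met; a sketch of the training flow, not the patent's exact procedure."""
    for _ in range(max_rounds):
        image, start, gt_centerline = next(sample_loader)   # step 201: get a sample image
        tracked = track_fn(net, image, start)                # step 202: track pixels with the current network
        update_fn(net, tracked, gt_centerline)               # step 203: adjust network parameter values
        if accuracy_fn(tracked, gt_centerline) < tol:        # step 204: accuracy requirement met?
            break
    return net                                               # step 205: trained network
```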
本申请实施例中,可以利用训练完成的神经网络直接对待处理图像进行处理,即,可以追踪到待处理图像中待追踪目标的各个像素点,即,可以通过端到端的训练,得到用于针对待追踪目标追踪像素点的神经网络,可移植性强。
在实际应用中,步骤201至步骤205可以利用电子设备中的处理器实现,上述处理器可以为ASIC、DSP、DSPD、PLD、FPGA、CPU、控制器、微控制器、微处理器中的至少一种。
可以看出,在本申请实施例中,在对神经网络进行训练时,针对待追踪目标,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对待追踪目标的像素点追踪和提取,如此,可以使得训练完成的神经网络准确地实现对待追踪目标的像素点追踪和提取。
在本申请的一些实施例中,还可以利用初始神经网络执行以下步骤:在基于样本图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点之前,还可以判断当前的像素点是否位于待追踪目标上多个分支之间的交叉点,如果当前的像素点位于待追踪目标上多个分支之间的交叉点,则选择多个分支的一个支路,从选择的所述支路上的像素中选择所述待选的像素点,即,对选择的支路的像素点进行追踪,具体地,在选择多个分支的一个支路后,可以针对选择的支路,执行步骤102至步骤104,实现对选择的支路的像素点追踪。如果当前的像素点不位于待追踪目标上多个分支之间的交叉点,则直接执行步骤102至步骤104,确定出当前像素点的下一个像素点作为当前像素点。
在本申请的一些实施例中,还可以利用初始神经网络执行以下步骤:响应于对选择的支路的像素点进行追踪,且确定满足预设的支路追踪停止条件的情况,针对未完成像素点追踪的交叉点,重新选择一个未进行像素点追踪的支路;对选择的支路进行像素点追踪;所述未完成像素点追踪的交叉点具有未进行像素点追踪的支路;响应于不存在未完成像素点追踪的交叉点的情况,确定各个交叉点的各个支路的像素点追踪完成。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
在前述实施例提出的图像处理方法的基础上,本申请实施例还提出了一种图像处理装置。
图3为本申请实施例的图像处理装置的组成结构示意图,如图3所示,该装置可以包括第一获取模块301和第一处理模块302,其中,
第一获取模块301,配置为获取待处理图像;
第一处理模块302,配置为基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到当前的像素点的下一像素点。
在本申请的一些实施例中,第一处理模块302,还配置为在基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点之前,判断所述当前的像素点是否位于所述待追踪目标上多个分支之间的交叉点,若是,则选择所述多个分支中的一个支路,从选择的所述支路上的像素中选择所述待选的像素点。
在本申请的一些实施例中,第一处理模块302,配置为基于所述当前的像素点和所述 多个支路的像素点,结合所述预设的待追踪目标的真值,得到所述多个支路中每个支路的评价值;根据所述多个支中每个支路的评价值,从所述多个支路中选择一个支路。
在本申请的一些实施例中,第一处理模块302,配置为在所述多个支路中,选择评价值最高的一个支路。
在本申请的一些实施例中,第一处理模块302还配置为:
响应于对选择的支路的像素点进行追踪,且确定满足预设的支路追踪停止条件的情况,针对未完成像素点追踪的交叉点,重新选择一个未进行像素点追踪的支路;对选择的支路进行像素点追踪;所述未完成像素点追踪的交叉点具有未进行像素点追踪的支路;
响应于不存在未完成像素点追踪的交叉点的情况,确定各个交叉点的各个支路的像素点追踪完成。
在本申请的一些实施例中,第一处理模块302,配置为基于所述未完成像素点追踪的交叉点和所述交叉点的各个未进行像素点追踪的支路的像素点,结合预设的待追踪目标的真值,得到所述各个未进行像素点追踪的支路的评价值;根据所述各个未进行像素点追踪的支路的评价值,从所述各个未进行像素点追踪的支路中选择一个支路。
在本申请的一些实施例中,第一处理模块302,配置为在所述各个未进行像素点追踪的支路中,选择评价值最高的一个支路。
在本申请的一些实施例中,所述预设的支路追踪停止条件包括以下至少之一:
追踪到的下一个像素点处于预先确定的待追踪目标的末端;
追踪到的下一个像素点的空间熵值大于预设空间熵值;
连续N次得到的追踪路线夹角大于设定角度阈值,每次得到追踪路线夹角表示相邻两次得到的追踪路线的夹角,每次得到的追踪路线表示相邻两次追踪到的像素点之间的连线;N为大于或等于2的整数。
待追踪目标的末端可以预先标注,在追踪到的下一个像素点处于预先确定的待追踪目标的末端时,说明对应的支路无需再进行像素点追踪,此时可以停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性;像素点的空间熵值可以表示像素点的不稳定性,像素点的空间熵值越高,说明像素点的不稳定性越高,在当前支路不合适继续进行像素点追踪,此时,可以跳转到交叉点继续进行像素点追踪,可以提高像素点追踪的准确性;连续N次得到的追踪路线夹角大于设定角度阈值时,说明最近几次得到的追踪路线的振荡幅度较大,因而,追踪到的像素点的准确性较低,此时,通过停止对相应支路进行像素点追踪,可以提高像素点追踪的准确性。
在本申请的一些实施例中,第一处理模块302,配置为从所述至少一个待选的像素点,选择评价值最高的像素点;将选择的所述评价值最高的像素点确定为所述当前像素点的下一个像素点。
在本申请的一些实施例中,待追踪目标为血管树。
上述第一获取模块301和第一处理模块302均可由位于电子设备中的处理器实现,上述处理器为ASIC、DSP、DSPD、PLD、FPGA、CPU、控制器、微控制器、微处理器中的至少一种。
在前述实施例提出的神经网络训练方法的基础上,本申请实施例还提出了一种神经网络训练装置。
图4为本申请实施例的神经网络训练装置的组成结构示意图,如图4所示,该装置可以包括第二获取模块401、第二处理模块402、调整模块403和第三处理模块404,其中,
第二获取模块401,配置为获取样本图像;
第二处理模块402,配置为将所述样本图像输入至初始神经网络,利用所述初始神经网络执行以下步骤:基于所述样本图像的待追踪目标上当前的像素点,确定出所述待追 踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标的真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到当前像素点的下一像素点;
调整模块403,配置为根据追踪得到的各个像素点和预设的待追踪目标的真值,调整所述初始神经网络的网络参数值;
第三处理模块404,配置为重复执行上述获取所述样本图像、利用所述初始神经网络对所述样本图像进行处理、以及调整所述初始神经网络的网络参数值的步骤,直至基于网络参数值调整后的初始神经网络得到的各个像素点满足预设的精度需求,得到训练完成的神经网络。
上述第二获取模块401、第二处理模块402、调整模块403和第三处理模块404均可由位于电子设备中的处理器实现,上述处理器为ASIC、DSP、DSPD、PLD、FPGA、CPU、控制器、微控制器、微处理器中的至少一种。
另外,在本实施例中的各功能模块可以集成在一个处理部分中,也可以是各个部分单独物理存在,也可以两个或两个以上部分集成在一个部分中。上述集成的部分既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
具体来讲,本实施例中的一种图像处理方法或一种神经网络训练方法对应的计算机程序指令可以被存储在光盘,硬盘,U盘等存储介质上,当存储介质中的与一种图像处理方法或一种神经网络训练方法对应的计算机程序指令被一电子设备读取或被执行时,实现前述实施例的任意一种图像处理方法或任意一种神经网络训练方法。
基于前述实施例相同的技术构思,本申请实施例还提出了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现前述实施例的任意一种图像处理方法或任意一种神经网络训练方法。
基于前述实施例相同的技术构思,参见图5,其示出了本申请实施例提供的一种电子设备,可以包括:存储器501和处理器502;其中,
所述存储器501,配置为存储计算机程序和数据;
所述处理器502,配置为执行所述存储器中存储的计算机程序,以实现前述实施例的任意一种图像处理方法或任意一种神经网络训练方法。
在实际应用中,上述存储器501可以是易失性存储器(volatile memory),例如RAM;或者非易失性存储器(non-volatile memory),例如ROM,快闪存储器(flash memory),硬盘(Hard Disk Drive,HDD)或固态硬盘(Solid-State Drive,SSD);或者上述种类的存储器的组合,并向处理器502提供指令和数据。
上述处理器502可以为ASIC、DSP、DSPD、PLD、FPGA、CPU、控制器、微控制器、微处理器中的至少一种。可以理解地,对于不同的增强现实云平台,用于实现上述处理器功能的电子器件还可以为其它,本申请实施例不作具体限定。
在一些实施例中，本申请实施例提供的装置具有的功能或包含的模块可以用于执行上文方法实施例描述的方法，其具体实现可以参照上文方法实施例的描述，为了简洁，这里不再赘述。
上文对各个实施例的描述倾向于强调各个实施例之间的不同之处，其相同或相似之处可以互相参考，为了简洁，本文不再赘述。
本申请所提供的各方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的各产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的各方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,这些均属于本申请的保护之内。
工业实用性
本申请实施例提出了一种图像处理及神经网络训练方法、装置、电子设备和计算机存储介质,该图像处理方法包括:获取待处理图像;基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标的真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到当前的像素点的下一像素点。如此,本申请实施例中,针对待追踪目标,可以根据待选的像素点的评价值,从当前像素点确定出下一个像素点,即,能够准确地实现对待追踪目标的像素点追踪和提取。

Claims (25)

  1. 一种图像处理方法,包括:
    获取待处理图像;
    基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;
    基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标的真值,得到所述至少一个待选的像素点的评价值;
    根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点。
  2. 根据权利要求1所述的图像处理方法,其中,所述基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点之前,还包括:
    判断所述当前的像素点是否位于所述待追踪目标上多个分支之间的交叉点，若是，则选择所述多个分支中的一个支路，从选择的所述支路上的像素中选择所述待选的像素点。
  3. 根据权利要求2所述的图像处理方法,其中,所述选择所述多个分支中的一个支路,包括:
    基于所述当前的像素点和所述多个支路的像素点，结合所述预设的待追踪目标的真值，得到所述多个支路中每个支路的评价值；
    根据所述多个支路中每个支路的评价值,从所述多个支路中选择一个支路。
  4. 根据权利要求3所述的图像处理方法，其中，所述根据所述多个支路中每个支路的评价值，从所述多个支路中选择一个支路，包括：
    在所述多个支路中,选择评价值最高的一个支路。
  5. 根据权利要求2至4任一项所述的图像处理方法,其中,还包括:
    响应于对选择的支路的像素点进行追踪,且确定满足预设的支路追踪停止条件的情况,针对未完成像素点追踪的交叉点,重新选择一个未进行像素点追踪的支路,对选择的支路进行像素点追踪;所述未完成像素点追踪的交叉点具有未进行像素点追踪的支路;
    响应于不存在未完成像素点追踪的交叉点的情况,确定各个交叉点的各个支路的像素点追踪完成。
  6. 根据权利要求5所述的图像处理方法,其中,所述重新选择一个未进行像素点追踪的支路,包括:
    基于所述未完成像素点追踪的交叉点和所述交叉点的各个未进行像素点追踪的支路的像素点,结合预设的待追踪目标的真值,得到所述各个未进行像素点追踪的支路的评价值;
    根据所述各个未进行像素点追踪的支路的评价值,从所述各个未进行像素点追踪的支路中选择一个支路。
  7. 根据权利要求6所述的图像处理方法,其中,所述根据所述各个未进行像素点追踪的支路的评价值,从所述各个未进行像素点追踪的支路中选择一个支路,包括:
    在所述各个未进行像素点追踪的支路中,选择评价值最高的一个支路。
  8. 根据权利要求5至7任一项所述的图像处理方法,其中,所述预设的支路追踪停止条件包括以下至少之一:
    追踪到的下一个像素点处于预先确定的待追踪目标的末端;
    追踪到的下一个像素点的空间熵值大于预设空间熵值;
    连续N次得到的追踪路线夹角大于设定角度阈值,每次得到追踪路线夹角表示相邻 两次得到的追踪路线的夹角,每次得到的追踪路线表示相邻两次追踪到的像素点之间的连线;N为大于或等于2的整数。
  9. 根据权利要求1-8中任意一项所述的图像处理方法,其中,所述根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点,包括:
    从所述至少一个待选的像素点,选择评价值最高的像素点;将选择的所述评价值最高的像素点确定为所述当前像素点的下一个像素点。
  10. 根据权利要求1至9任一项所述的图像处理方法,其中,所述待追踪目标为血管树。
  11. 一种神经网络训练方法,包括:
    获取样本图像;
    将所述样本图像输入至初始神经网络,利用所述初始神经网络执行以下步骤:基于所述样本图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标的真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点;
    根据追踪得到的各个像素点和预设的待追踪目标的真值,调整所述初始神经网络的网络参数值,直至基于网络参数值调整后的初始神经网络得到的各个像素点满足预设的精度需求。
  12. 一种图像处理装置,包括:第一获取模块和第一处理模块,其中,
    第一获取模块,配置为获取待处理图像;
    第一处理模块,配置为基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点。
  13. 根据权利要求12所述的装置,其中,所述第一处理模块,还配置为:
    在基于所述待处理图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点之前,判断所述当前的像素点是否位于所述待追踪目标上多个分支之间的交叉点,若是,则选择所述多个分支中的一个支路,从选择的所述支路上的像素中选择所述待选的像素点。
  14. 根据权利要求13所述的装置，其中，所述第一处理模块，配置为基于所述当前的像素点和所述多个支路的像素点，结合所述预设的待追踪目标的真值，得到所述多个支路中每个支路的评价值；根据所述多个支路中每个支路的评价值，从所述多个支路中选择一个支路。
  15. 根据权利要求14所述的装置,其中,所述第一处理模块,配置为在所述多个支路中,选择评价值最高的一个支路。
  16. 根据权利要求13至15任一项所述的装置,其中,所述第一处理模块,还配置为:
    响应于对选择的支路的像素点进行追踪,且确定满足预设的支路追踪停止条件的情况,针对未完成像素点追踪的交叉点,重新选择一个未进行像素点追踪的支路;对选择的支路进行像素点追踪;所述未完成像素点追踪的交叉点具有未进行像素点追踪的支路;
    响应于不存在未完成像素点追踪的交叉点的情况,确定各个交叉点的各个支路的像素点追踪完成。
  17. 根据权利要求16所述的装置,其中,所述第一处理模块,配置为基于所述未完 成像素点追踪的交叉点和所述交叉点的各个未进行像素点追踪的支路的像素点,结合预设的待追踪目标的真值,得到所述各个未进行像素点追踪的支路的评价值;根据所述各个未进行像素点追踪的支路的评价值,从所述各个未进行像素点追踪的支路中选择一个支路。
  18. 根据权利要求17所述的装置,其中,所述第一处理模块,配置为在所述各个未进行像素点追踪的支路中,选择评价值最高的一个支路。
  19. 根据权利要求16至18任一项所述的装置,其中,所述预设的支路追踪停止条件包括以下至少之一:
    追踪到的下一个像素点处于预先确定的待追踪目标的末端;
    追踪到的下一个像素点的空间熵值大于预设空间熵值;
    连续N次得到的追踪路线夹角大于设定角度阈值,每次得到追踪路线夹角表示相邻两次得到的追踪路线的夹角,每次得到的追踪路线表示相邻两次追踪到的像素点之间的连线;N为大于或等于2的整数。
  20. 根据权利要求12至19任一项所述的装置,其中,所述第一处理模块,配置为从所述至少一个待选的像素点,选择评价值最高的像素点;将选择的所述评价值最高的像素点确定为所述当前像素点的下一个像素点。
  21. 根据权利要求12至20任一项所述的装置,其中,所述待追踪目标为血管树。
  22. 一种神经网络训练装置,包括:第二获取模块、第二处理模块、调整模块和第三处理模块,其中,
    第二获取模块,配置为获取样本图像;
    第二处理模块,配置为将所述样本图像输入至初始神经网络,利用所述初始神经网络执行以下步骤:基于所述样本图像的待追踪目标上当前的像素点,确定出所述待追踪目标上至少一个待选的像素点;基于所述当前的像素点与所述至少一个待选的像素点,结合预设的待追踪目标的真值,得到所述至少一个待选的像素点的评价值;根据所述至少一个待选的像素点的评价值,对所述当前的像素点进行追踪,得到所述当前的像素点的下一像素点;
    调整模块,配置为根据追踪得到的各个像素点和预设的待追踪目标的真值,调整所述初始神经网络的网络参数值;
    第三处理模块,配置为重复执行上述获取所述样本图像、利用所述初始神经网络对所述样本图像进行处理、以及调整所述初始神经网络的网络参数值的步骤,直至基于网络参数值调整后的初始神经网络得到的各个像素点满足预设的精度需求,得到训练完成的神经网络。
  23. 一种电子设备,包括处理器和配置为存储能够在处理器上运行的计算机程序的存储器;其中,
    所述处理器配置为运行所述计算机程序时,执行权利要求1至10任一项所述的图像处理方法或权利要求11所述的神经网络训练方法。
  24. 一种计算机存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现权利要求1至10任一项所述的图像处理方法或权利要求11所述的神经网络训练方法。
  25. 一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1至10任一项所述的图像处理方法或权利要求11所述的神经网络训练方法。
PCT/CN2020/103635 2019-10-31 2020-07-22 图像处理及神经网络训练方法、装置、设备、介质和程序 WO2021082544A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021539385A JP2022516196A (ja) 2019-10-31 2020-07-22 画像処理及びニューラルネットワーク訓練方法、装置、機器、媒体並びにプログラム
US17/723,580 US20220237806A1 (en) 2019-10-31 2022-04-19 Image processing and neural network training method, electronic equipment, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911050567.9 2019-10-31
CN201911050567.9A CN110796653B (zh) 2019-10-31 2019-10-31 图像处理及神经网络训练方法、装置、设备和介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/723,580 Continuation US20220237806A1 (en) 2019-10-31 2022-04-19 Image processing and neural network training method, electronic equipment, and storage medium

Publications (1)

Publication Number Publication Date
WO2021082544A1 true WO2021082544A1 (zh) 2021-05-06

Family

ID=69442281

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103635 WO2021082544A1 (zh) 2019-10-31 2020-07-22 图像处理及神经网络训练方法、装置、设备、介质和程序

Country Status (5)

Country Link
US (1) US20220237806A1 (zh)
JP (1) JP2022516196A (zh)
CN (1) CN110796653B (zh)
TW (1) TWI772932B (zh)
WO (1) WO2021082544A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796653B (zh) * 2019-10-31 2022-08-30 北京市商汤科技开发有限公司 图像处理及神经网络训练方法、装置、设备和介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150046139A1 (en) * 2013-08-09 2015-02-12 Fujitsu Limited Apparatus and method for generating vascular data
CN107203741A (zh) * 2017-05-03 2017-09-26 上海联影医疗科技有限公司 血管提取方法、装置及其系统
CN109035194A (zh) * 2018-02-22 2018-12-18 青岛海信医疗设备股份有限公司 一种血管提取方法及装置
CN110796653A (zh) * 2019-10-31 2020-02-14 北京市商汤科技开发有限公司 图像处理及神经网络训练方法、装置、设备和介质

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009187224A (ja) * 2008-02-05 2009-08-20 Fuji Xerox Co Ltd 情報処理装置及び情報処理プログラム
US9317761B2 (en) * 2010-12-09 2016-04-19 Nanyang Technological University Method and an apparatus for determining vein patterns from a colour image
JP5391229B2 (ja) * 2011-04-27 2014-01-15 富士フイルム株式会社 木構造抽出装置および方法ならびにプログラム
JP6036224B2 (ja) * 2012-11-29 2016-11-30 日本電気株式会社 順序制御システム、順序制御方法、順序制御プログラム、及び、メッセージ管理システム
CN103284760B (zh) * 2013-06-08 2015-04-08 哈尔滨工程大学 一种基于导管路径的扩展超声血管成像方法及装置
US9521988B2 (en) * 2015-02-17 2016-12-20 Siemens Healthcare Gmbh Vessel tree tracking in angiography videos
TWI572186B (zh) * 2015-12-04 2017-02-21 國立雲林科技大學 內視鏡影像鏡面反射去除之自適應修補方法
CN106340021B (zh) * 2016-08-18 2020-11-27 上海联影医疗科技股份有限公司 血管提取方法
CN106296698B (zh) * 2016-08-15 2019-03-29 成都通甲优博科技有限责任公司 一种基于立体视觉的闪电三维定位方法
CN116051580A (zh) * 2017-05-09 2023-05-02 上海联影医疗科技股份有限公司 一种血管分离方法及系统
CN107563983B (zh) * 2017-09-28 2020-09-01 上海联影医疗科技有限公司 图像处理方法以及医学成像设备
CN109360209B (zh) * 2018-09-30 2020-04-14 语坤(北京)网络科技有限公司 一种冠状血管分割方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150046139A1 (en) * 2013-08-09 2015-02-12 Fujitsu Limited Apparatus and method for generating vascular data
CN107203741A (zh) * 2017-05-03 2017-09-26 上海联影医疗科技有限公司 血管提取方法、装置及其系统
CN109035194A (zh) * 2018-02-22 2018-12-18 青岛海信医疗设备股份有限公司 一种血管提取方法及装置
CN110796653A (zh) * 2019-10-31 2020-02-14 北京市商汤科技开发有限公司 图像处理及神经网络训练方法、装置、设备和介质

Also Published As

Publication number Publication date
JP2022516196A (ja) 2022-02-24
US20220237806A1 (en) 2022-07-28
CN110796653A (zh) 2020-02-14
CN110796653B (zh) 2022-08-30
TW202119357A (zh) 2021-05-16
TWI772932B (zh) 2022-08-01

Similar Documents

Publication Publication Date Title
TWI737006B (zh) 一種跨模態訊息檢索方法、裝置和儲存介質
CN111164601A (zh) 情感识别方法、智能装置和计算机可读存储介质
WO2017177661A1 (zh) 基于卷积神经网络的视频检索方法及系统
US10134165B2 (en) Image distractor detection and processing
US11640518B2 (en) Method and apparatus for training a neural network using modality signals of different domains
WO2020099957A1 (en) Semantic segmentation with soft cross-entropy loss
CN109492128B (zh) 用于生成模型的方法和装置
US10732783B2 (en) Identifying image comments from similar images
US9773159B2 (en) Method and apparatus for extracting image feature
JP7286013B2 (ja) ビデオコンテンツ認識方法、装置、プログラム及びコンピュータデバイス
US11548146B2 (en) Machine learning and object searching method and device
KR102262264B1 (ko) 이미지 검색을 위한 다중 글로벌 디스크립터를 조합하는 프레임워크
CN113518256A (zh) 视频处理方法、装置、电子设备及计算机可读存储介质
US10902638B2 (en) Method and system for detecting pose of a subject in real-time
CN110009662B (zh) 人脸跟踪的方法、装置、电子设备及计算机可读存储介质
TW201633181A (zh) 用於經非同步脈衝調制的取樣信號的事件驅動型時間迴旋
US20160364088A1 (en) Dynamically changing a 3d object into an interactive 3d menu
US11954755B2 (en) Image processing device and operation method thereof
JP7085600B2 (ja) 画像間の類似度を利用した類似領域強調方法およびシステム
US20230260324A1 (en) Capturing digital images utilizing a machine learning model trained to determine subtle pose differentiations
CN108702463A (zh) 一种图像处理方法、装置以及终端
WO2021082544A1 (zh) 图像处理及神经网络训练方法、装置、设备、介质和程序
CN114898177B (zh) 缺陷图像生成方法、模型训练方法、设备、介质及产品
US9443168B1 (en) Object detection approach using an ensemble strong classifier
CN109977905B (zh) 用于处理眼底图像的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20881109

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021539385

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20881109

Country of ref document: EP

Kind code of ref document: A1