CN109598751B - Medical image picture processing method, device and apparatus - Google Patents


Info

Publication number
CN109598751B
CN109598751B (application CN201811530621.5A)
Authority
CN
China
Prior art keywords
image picture
characteristic
feature
region
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811530621.5A
Other languages
Chinese (zh)
Other versions
CN109598751A (en)
Inventor
宋凌
冯雪
杨光明
秦岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qianglian Zhichuang Suzhou Medical Technology Co ltd
Original Assignee
Qianglian Zhichuang Suzhou Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qianglian Zhichuang Suzhou Medical Technology Co ltd filed Critical Qianglian Zhichuang Suzhou Medical Technology Co ltd
Priority to CN201811530621.5A priority Critical patent/CN109598751B/en
Publication of CN109598751A publication Critical patent/CN109598751A/en
Application granted granted Critical
Publication of CN109598751B publication Critical patent/CN109598751B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Abstract

The application discloses a method, an apparatus and a device for processing medical image pictures. A first image picture and a second image picture of the same human body part are determined, characteristic regions are extracted from both pictures, the characteristic regions of the first image picture are matched with those of the second image picture according to a preset characteristic condition, and affine transformation is applied to the first image picture according to the matched characteristic regions. In this technical scheme, the image picture containing contrast and the one not containing contrast are matched by characteristic region. Because a characteristic region can carry more characteristic information than a single pixel, the matching is more accurate, which allows an accurate affine transformation of the first image picture so that the transformed first image picture and the second image picture are approximately consistent in size and in the distribution positions of their image content.

Description

Medical image picture processing method, device and apparatus
Technical Field
The present disclosure relates to the field of computer medical image processing, and in particular, to a method, an apparatus, and a device for medical image processing.
Background
In the field of medical imaging, it is often necessary to acquire images of the internal tissue of a human body, or of a part of it, in a non-invasive manner and to process and analyse them. In this process, one must obtain an image picture of a specific organ or tissue that has been given a visualization mark and an image picture of the same organ or tissue without the mark, match the two pictures, and determine the distribution position of the organ or tissue. However, because environmental and human factors differ between the two acquisitions, the two pictures differ in size, in the distribution positions of their content, and so on.
Disclosure of Invention
The embodiments of this specification provide a method, an apparatus and a device for processing medical image pictures, which address the inaccurate determination of a target object's distribution in the prior art.
The application provides a medical image picture processing method, which comprises the following steps:
determining a first image picture and a second image picture of the same human body part, wherein one of the two is an image picture in which a target object in the human body part has not been given a development mark, and the other is an image picture in which the target object has been given a development mark;
extracting characteristic regions from the first image picture and the second image picture;
matching the characteristic regions of the first image picture with the characteristic regions of the second image picture according to a preset characteristic condition;
and carrying out affine transformation on the first image picture according to the characteristic regions matched between the first image picture and the second image picture, so that the matched characteristic regions have the same distribution positions in the transformed first image picture and the second image picture.
The application provides a device for medical image picture processing, comprising:
an image determining module, configured to determine a first image picture and a second image picture of the same human body part, wherein one of the two is an image picture in which a target object in the human body part has not been given a development mark, and the other is an image picture in which the target object has been given a development mark;
a feature extraction module, configured to extract characteristic regions from the first image picture and the second image picture;
a matching module, configured to match the characteristic regions of the first image picture with the characteristic regions of the second image picture according to a preset characteristic condition;
and a transformation module, configured to carry out affine transformation on the first image picture according to the characteristic regions matched between the first image picture and the second image picture, so that the matched characteristic regions have the same distribution positions in the transformed first image picture and the second image picture.
The present application also provides an electronic device comprising at least one processor and a memory, wherein the memory stores a program configured to be executed by the at least one processor to perform the following steps:
determining a first image picture and a second image picture of the same human body part, wherein one of the two is an image picture in which a target object in the human body part has not been given a development mark, and the other is an image picture in which the target object has been given a development mark;
extracting characteristic regions from the first image picture and the second image picture;
matching the characteristic regions of the first image picture with the characteristic regions of the second image picture according to a preset characteristic condition;
and carrying out affine transformation on the first image picture according to the characteristic regions matched between the first image picture and the second image picture, so that the matched characteristic regions have the same distribution positions in the transformed first image picture and the second image picture.
The present application also provides a computer-readable storage medium storing a program for use with an electronic device, the program being executable by a processor to perform the steps of: determining a first image picture and a second image picture of the same human body part, wherein one of the two is an image picture in which a target object in the human body part has not been given a development mark, and the other is an image picture in which the target object has been given a development mark;
extracting characteristic regions from the first image picture and the second image picture;
matching the characteristic regions of the first image picture with the characteristic regions of the second image picture according to a preset characteristic condition;
and carrying out affine transformation on the first image picture according to the characteristic regions matched between the first image picture and the second image picture, so that the matched characteristic regions have the same distribution positions in the transformed first image picture and the second image picture.
At least one of the technical schemes adopted by the embodiments of the application can achieve the following beneficial effects:
A first image picture and a second image picture of the same human body part are determined, where one of the two is an image picture in which the target object in the human body part has not been given a development mark and the other is one in which it has. Characteristic regions are extracted from both pictures and matched according to a preset characteristic condition, and the first image picture is transformed affinely according to the matched characteristic regions so that the matched regions occupy the same distribution positions in the transformed first image picture and the second image picture. In the technical solution described in the embodiments of the present disclosure, a characteristic region can carry more characteristic information than a single pixel, so matching the first and second image pictures by characteristic region yields a more accurate matching result; this allows an accurate affine transformation of the first image picture, ensuring that the transformed first image picture and the second image picture are approximately consistent in size and in the distribution positions of their image content.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a flowchart of a method for processing a medical image according to an embodiment of the present disclosure;
fig. 2 is an effect diagram of a method for processing a medical image according to an embodiment of the present disclosure;
fig. 3 is an effect diagram of a method for processing a medical image according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a method for processing a medical image according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a method for processing a medical image according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an apparatus for medical image processing according to an embodiment of the present disclosure.
Detailed Description
Analysis of the prior art shows that, before the two obtained image pictures are matched, their difference in size or in the distribution positions of their content is determined from the gray-scale correlation of the pixel points in the two pictures and from the difference in those pixels' distribution positions. This approach matches on the gray-scale correlation of individual pixel points, and since a pixel's gray value reflects only a single feature, the accuracy of the matching result may suffer.
According to the embodiments of this specification, a first image picture and a second image picture of the same human body part are determined, where one of the two is an image picture in which a target object in the human body part has not been given a development mark and the other is one in which it has. Characteristic regions are extracted from both pictures, the characteristic regions of the first image picture are matched with those of the second according to a preset characteristic condition, and affine transformation is applied to the first image picture according to the matched characteristic regions, so that the matched regions occupy the same distribution positions in the transformed first image picture and the second image picture. In this technical solution the characteristic region can carry more characteristic information, matching by characteristic region is more accurate, and the first image picture can therefore be transformed accurately, ensuring that the transformed first image picture and the second image picture are approximately consistent in size and in the distribution positions of their image content.
To make the purposes, technical solutions and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments and the corresponding drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for processing a medical image according to an embodiment of the present disclosure, where the method is as follows.
S101: and determining a first image picture and a second image picture of the same human body part.
In this embodiment of the present disclosure, one of the first image picture and the second image picture is an image picture in which the target object in the human body part has not been given a development mark, and the other is one in which it has been given a development mark (for example, with a contrast agent). The distribution state of the target object is determined by comparing the differences between the two pictures. The human body part may be the brain, the heart, the lungs and the like, and is not particularly limited herein.
In the embodiment of the present disclosure, taking a cerebral blood vessel as the target object to be visualized, determining the first image picture and the second image picture of the same human body part may include:
determining a first image picture and a second image picture of the human brain.
S102: and extracting a characteristic region from the first image picture and the second image picture.
In the embodiment of the present specification, the feature region may reflect information of the image picture. Therefore, by analyzing the characteristic regions of the first image picture and the second image picture, the difference between the first image picture and the second image picture can be roughly determined.
In this embodiment of the present disclosure, a preset gray-scale condition may be determined from the gray values of points in an image picture, and the feature regions may be determined according to that condition. For example, a gray-scale range may be set from the maximum and minimum gray values in the picture, and regions whose average gray value falls within this preset range are selected as feature regions. The size of a feature region may be set according to the size of the image picture. In this way, by determining the feature regions corresponding to the gray-scale range in the first image picture and the second image picture, the difference between the two pictures can be approximately determined.
In an embodiment of the present disclosure, extracting a feature region from the first image picture and the second image picture may include:
screening characteristic points meeting preset gray scale conditions from the first image picture and the second image picture;
and determining a characteristic region containing the characteristic points according to the characteristic points.
In the embodiment of the present disclosure, the preset gray scale condition may be to use each gray scale extremum point in the first image picture and the second image picture as the feature point, where the gray scale extremum point may be at least one of a gray scale maximum value point and a gray scale minimum value point.
In this way, screening feature points meeting a preset gray scale condition from the first image picture and the second image picture includes:
selecting gray extreme points from the first image picture and the second image picture;
and determining characteristic points meeting preset gray scale conditions according to the gray scale extreme points.
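As a concrete illustration of the screening step, the sketch below finds gray-level extremum points with plain NumPy. It is only an assumption of how such screening could be implemented — the function name `gray_extremum_points` and the strict 8-neighbour comparison are not taken from the patent:

```python
import numpy as np

def gray_extremum_points(img, border=1):
    """Return (row, col) coordinates of local gray-level extrema.

    A pixel counts as a feature point when it is strictly greater
    (gray maximum) or strictly smaller (gray minimum) than all of
    its 8 neighbours.
    """
    pts = []
    h, w = img.shape
    for r in range(border, h - border):
        for c in range(border, w - border):
            patch = img[r - 1:r + 2, c - 1:c + 2].astype(np.int64)
            centre = patch[1, 1]
            neigh = np.delete(patch.ravel(), 4)  # the 8 neighbours
            if centre > neigh.max() or centre < neigh.min():
                pts.append((r, c))
    return pts

# Toy picture: one bright extremum at (1, 1); the dark pixel at (3, 2)
# sits on the border and is skipped.
img = np.array([[10, 10, 10, 10],
                [10, 200, 10, 10],
                [10, 10, 10, 10],
                [10, 10, 3, 10]], dtype=np.uint8)
print(gray_extremum_points(img))  # [(1, 1)]
```

A square feature region containing each returned point (as in fig. 2 and fig. 3) can then be cropped around these coordinates.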
In the embodiment of the present disclosure, the preset gray-scale condition may instead take as feature points those points of the first image picture and the second image picture that fall within a unified preset range of gray values. The preset gray-scale condition is therefore not particularly limited and may be set as needed.
Fig. 2 is an effect diagram of a method for processing a medical image picture according to an embodiment of the present disclosure, which shows feature points in a picture in which a target object in a human body is not subjected to development marking.
Fig. 3 is an effect diagram of a method for processing a medical image picture according to an embodiment of the present disclosure, which shows feature points in a picture in which a target object in a human body has been subjected to a development mark.
The m point in fig. 2 and the n point in fig. 3 are detected feature points. Square region Q1 represents a feature region including m, and square region Q2 represents a feature region including n. The feature regions may be square, hexagonal or other shapes, and are not particularly limited herein.
S103: and matching the characteristic region of the first image picture with the characteristic region of the second image picture according to a preset characteristic condition.
Because a characteristic region can approximately reflect the feature information of its image picture, matching the characteristic regions of the first image picture with those of the second makes it possible to determine the distribution positions, within their respective pictures, of region pairs whose similarity reaches a threshold, and thereby to determine the difference between the first image picture and the second image picture.
In the embodiment of the present specification, the similarity between the feature areas in the first image picture and the second image picture is represented by the similarity of the gradation variation values within the feature areas. Thus, the preset feature condition may be that the similarity of the gradation variation values reaches a threshold value. In this way, matching the feature region of the first image picture with the feature region of the second image picture according to a preset feature condition includes:
matching the gray level change value of the characteristic region of the first image picture and the gray level change value of the characteristic region of the second image picture according to preset characteristic conditions;
and determining the matched characteristic areas in the first image picture and the second image picture according to the matching result.
In the embodiment of the present specification, the gradation change value of the feature region may include a gradation change value of the feature region in at least one target direction. Therefore, before matching the gray scale variation value of the feature area of the first image picture and the gray scale variation value of the feature area of the second image picture according to the preset feature condition, the method may further include:
and determining the target direction which takes the characteristic points as starting points and meets the preset gray level change condition for each characteristic point.
The target direction is a direction determined from information of gray level change within the feature region, and thus, the target direction may be changed in synchronization with the rotation of the feature region.
Specifically, determining the target direction that starts at a feature point and satisfies the preset gray-scale change condition may proceed by counting the gray-scale change of a region extending from the feature point in some direction; if the change satisfies the preset condition, that direction is taken as the target direction. For example, the variance of the gray values of the pixel points in a sector region with the feature point as its vertex may be computed, the variances of sector regions taken in different directions compared, the sector with the smallest variance determined, and the direction in which that sector was taken used as the target direction. The region used to determine the target direction may also have another shape, such as a diamond, rectangle or hexagon, and is not limited thereto.
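Under the sector-variance reading described above, picking the target direction can be sketched as follows; the sampling radius, the 30° aperture and the function name are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def target_direction(img, pt, radius=8, n_dirs=36, aperture=np.pi / 6):
    """Pick the direction (radians, image convention arctan2(dr, dc))
    whose sector with the feature point as vertex has the smallest
    variance of gray values."""
    r0, c0 = pt
    h, w = img.shape
    rr, cc = np.mgrid[0:h, 0:w]
    dr, dc = rr - r0, cc - c0
    dist = np.hypot(dr, dc)
    ang = np.arctan2(dr, dc)
    best_dir, best_var = 0.0, np.inf
    for k in range(n_dirs):
        theta = 2 * np.pi * k / n_dirs
        # angular difference wrapped into [-pi, pi]
        diff = np.angle(np.exp(1j * (ang - theta)))
        mask = (dist > 0) & (dist <= radius) & (np.abs(diff) <= aperture / 2)
        if mask.sum() < 3:
            continue
        v = img[mask].astype(float).var()
        if v < best_var:
            best_var, best_dir = v, theta
    return best_dir

# Demo: gray value varies only with the column index, so the sector
# with the least gray variance points roughly vertically.
demo = np.tile(np.arange(21), (21, 1))
theta = target_direction(demo, (10, 10))
```

Because the sector rotates with the image content, the direction found this way turns in synchrony with any rotation of the feature region, as the text above requires.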
In this case, matching the gray scale variation value of the feature region of the first image picture and the gray scale variation value of the feature region of the second image picture according to a preset feature condition may include:
processing each characteristic region, determining at least one gray characteristic vector of the characteristic region along a target direction corresponding to the characteristic point contained in the characteristic region, and determining at least one gray characteristic vector of the characteristic region along a direction perpendicular to the target direction;
And matching each gray scale feature vector of the feature region in the first image picture with each gray scale feature vector of the feature region of the second image picture according to the similarity of the gray scale feature vectors.
In the embodiment of the present disclosure, processing each feature region, determining at least one gray feature vector of the feature region along a target direction corresponding to a feature point included in the feature region, and determining at least one gray feature vector of the feature region along a direction perpendicular to the target direction may include:
dividing each characteristic region according to the target direction and the vertical direction of the target direction to obtain a subarea of each characteristic region;
and determining at least one gray feature vector of the subarea along the target direction of each feature area to which the subarea belongs, and determining at least one gray feature vector of the subarea along the vertical direction perpendicular to each feature area to which the subarea belongs.
In this way, suppose the characteristic regions that reflect the same gray-scale change are distributed in the first and second image pictures with a deviation caused by rotation. The target direction of the region and the positions of its pixel points in the first image picture then deviate, by that same rotation, from the target direction and pixel positions of the corresponding region in the second image picture. Because the sub-regions are divided along the target direction and its perpendicular, the gray feature vectors determined from them remain the same even when the two regions with the same gray-scale change are rotated relative to one another. Matching the characteristic regions of the first and second image pictures by such gray feature vectors is therefore little affected by rotational deviation, and the matching result is more accurate.
Fig. 4 is a schematic diagram of a method for processing a medical image according to an embodiment of the present disclosure, which illustrates a schematic diagram of feature vectors in determining feature regions.
In fig. 4, an o-point is a feature point in the first image picture, f is a target direction determined according to the o-point, and the feature region may be divided into 16 sub-regions with the target direction f being a horizontal direction and a direction perpendicular to the target direction f being a vertical direction, where a is one of the sub-regions. In this way, at least one gray feature vector in the f direction and at least one gray feature vector perpendicular to the f direction within the sub-region a can be determined. The subareas may be square, rectangular or other shapes, and are not particularly limited herein.
Wherein dividing the sub-region with the target direction and the vertical direction of the target direction as the vertical direction is one way to determine the gray feature vector of the feature region. For each feature region, a coordinate system may be established according to the feature points in the feature region and the target direction, and the vertical direction of the target direction, and the gray scale change of the region in the specific coordinate range may be directly counted to determine the gray scale feature vector of the feature region.
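A minimal sketch of such a gray feature vector: the region is resampled in a frame rotated so that the target direction becomes the local x axis, split into a 4×4 grid of sub-regions, and described by the normalised per-cell mean gray values. The nearest-neighbour sampling and the cell count are illustrative choices, not the patent's prescription:

```python
import numpy as np

def region_descriptor(img, pt, direction, half=8, grid=4):
    """Gray feature vector of a (2*half)x(2*half) region around pt,
    sampled along the target direction and its perpendicular."""
    r0, c0 = pt
    cos_t, sin_t = np.cos(direction), np.sin(direction)
    step = 2 * half / grid
    vec = []
    for gy in range(grid):
        for gx in range(grid):
            vals = []
            for y in np.arange(-half + gy * step, -half + (gy + 1) * step):
                for x in np.arange(-half + gx * step, -half + (gx + 1) * step):
                    # rotate local (x, y) into image coordinates
                    rr = int(round(r0 + y * cos_t + x * sin_t))
                    cc = int(round(c0 + x * cos_t - y * sin_t))
                    if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
                        vals.append(float(img[rr, cc]))
            vec.append(np.mean(vals) if vals else 0.0)
    v = np.asarray(vec)
    n = np.linalg.norm(v)
    return v / n if n else v

demo = (np.arange(400).reshape(20, 20) % 256).astype(np.uint8)
desc = region_descriptor(demo, (10, 10), 0.0)  # 16-dimensional, unit norm
```

Because the grid is laid out along the target direction, two regions that differ only by a rotation yield, up to sampling error, the same vector.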
In this embodiment of the application, matching each gray feature vector of the characteristic regions in the first image picture against each gray feature vector of the characteristic regions in the second image picture according to their similarity may be done exhaustively: the gray feature vectors of all characteristic regions in the first image picture are compared against those of all characteristic regions in the second image picture, and the similarity between the vectors of each candidate pair is determined. If the similarity meets a threshold, the corresponding pairing is taken as the registration between the characteristic regions of the two pictures, and at that point the gray-scale information reflected by the matched regions in the first and second image pictures is approximately the same.
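The exhaustive matching just described might look like the following; the cosine-similarity measure and the 0.9 threshold are assumptions for illustration:

```python
import numpy as np

def match_regions(desc1, desc2, threshold=0.9):
    """Exhaustively compare descriptor lists from the two pictures and
    keep, for each region of the first, the most similar region of the
    second whose cosine similarity reaches the threshold."""
    matches = []
    for i, d1 in enumerate(desc1):
        sims = [float(np.dot(d1, d2) /
                      (np.linalg.norm(d1) * np.linalg.norm(d2)))
                for d2 in desc2]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            matches.append((i, j, sims[j]))
    return matches

a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
b = [np.array([0.0, 2.0]), np.array([3.0, 0.1])]
print(match_regions(a, b))  # pairs (0, 1) and (1, 0) survive the threshold
```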
S104: and carrying out affine transformation on the first image picture according to the matched characteristic region between the first image picture and the second image picture to obtain the same distribution position of the matched characteristic region between the transformed first image picture and the second image picture.
In the embodiment of the present disclosure, the matching relationship of the feature points may be determined according to the matching relationship of the first image picture and the feature region of the second image picture.
In this embodiment of the present disclosure, when the first image picture and the second image picture have offset relationships such as translation, scaling, rotation, and the like, the feature points included in the matched feature region between the first image picture and the second image picture also have corresponding positional offset relationships. Thus, affine transformation is performed on the first image picture according to the matched characteristic region between the first image picture and the second image picture, which comprises the following steps:
determining transformation parameters according to the position deviation of feature points contained in the matched feature areas between the first image picture and the second image picture;
and transforming the first image picture according to the transformation parameters.
In an embodiment of the present disclosure, determining the transformation parameter according to the position deviation of the feature point in the matched feature region between the first image picture and the second image picture may include:
and determining an affine matrix according to the distribution positions of the feature points in the matched feature areas between the first image picture and the second image picture.
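Determining the affine matrix from the distribution positions of matched feature points can be sketched as a least-squares fit over three or more non-collinear matches; this NumPy formulation is an illustrative stand-in, since the patent does not spell out the computation:

```python
import numpy as np

def affine_from_matches(src, dst):
    """2x3 affine matrix M with dst_pt ≈ M[:, :2] @ src_pt + M[:, 2],
    fitted by least squares over the matched feature points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solves A @ M ≈ dst
    return M.T

# Matched points: dst is src scaled by 2 and translated by (2, 3).
src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (4, 3), (2, 5)]
M = affine_from_matches(src, dst)  # [[2, 0, 2], [0, 2, 3]]
```

The fitted matrix encodes exactly the translation, rotation and scaling offsets mentioned above, and applying it to the first image picture aligns the matched feature points.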
Fig. 5 is a schematic diagram of a method for processing a medical image according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of transforming the first image picture according to the feature points of the matched feature regions. As shown in fig. 5, P1 represents the first image picture and P2 the second; d, e, f represent feature points in P1 and d', e', f' feature points in P2, with d→d', e→e', f→f' denoting the matching relationships between them. In fig. 5 the relative distances between d', e', f' are larger than those between d, e, f, indicating that P2 differs from P1 in image size. An affine matrix can be determined from the distribution positions of d, e, f in P1 and of d', e', f' in P2, and P1 transformed according to this matrix so that the distribution positions of d, e, f in the transformed P1 are the same as those of d', e', f' in P2.
Affine transformation is performed on the first image picture according to the matched characteristic region between the first image picture and the second image picture, and the affine transformation can be regarded as transformation such as translation, rotation, scaling and the like of the first image picture.
Because a feature region can contain more feature information, matching the first image picture and the second image picture based on feature regions yields a more accurate matching result. This enables a precise affine transformation of the first image picture according to the matching result, ensuring that the transformed first image picture and the second image picture are approximately consistent in size and in the distribution positions of the image content.
In the embodiment of the present disclosure, subtraction may be performed on the transformed first image picture and the second image picture to obtain a contrast map containing the target object. In this case, after the first image picture is affine-transformed according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same, the method may further include:
and subtracting the transformed first image picture from the second image picture to obtain an image picture containing the target object.
Because a feature region can contain more feature information, matching the first image picture and the second image picture based on feature regions yields a more accurate matching result, which enables a precise affine transformation of the first image picture and ensures that the transformed first image picture and the second image picture are approximately consistent in size and in the distribution positions of the image content. Therefore, by subtracting the first image picture transformed in this way from the second image picture, the obtained image picture containing the target object can more accurately reflect the distribution of the target object.
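The subtraction step can be sketched as follows. This is a hedged illustration assuming both pictures are equal-size grayscale arrays after registration; the pixel values are hypothetical.

```python
import numpy as np

def subtract_pictures(transformed_first, second):
    """Digital-subtraction step: absolute difference between the registered
    picture without the development mark and the picture with it, so that
    the common background cancels and only the target object remains."""
    a = np.asarray(transformed_first, dtype=np.int16)  # widen to avoid uint8 wrap-around
    b = np.asarray(second, dtype=np.int16)
    diff = np.abs(a - b)
    return diff.astype(np.uint8)

# Hypothetical 4x4 pictures: identical background, one "vessel" pixel
# darkened by the contrast agent in the second picture.
first = np.full((4, 4), 100, dtype=np.uint8)
second = first.copy()
second[1, 2] = 30                        # developed (marked) target object
result = subtract_pictures(first, second)
print(int(result[1, 2]), int(result[0, 0]))  # 70 0
```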
Fig. 6 is a block diagram of an apparatus for medical image processing according to an embodiment of the present disclosure.
An embodiment of the present disclosure provides a device for processing medical image pictures, which may include:
the image determining module 601 determines a first image picture and a second image picture of the same human body part, wherein one image picture of the first image picture and the second image picture is an image picture when the target object in the human body part is not subjected to development marking, and the other image picture of the first image picture and the second image picture is an image picture when the target object in the human body part is subjected to development marking;
the feature extraction module 602 extracts feature regions from the first image picture and the second image picture;
the matching module 603 matches the feature area of the first image picture with the feature area of the second image picture according to a preset feature condition;
the transformation module 604 performs affine transformation on the first image picture according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same.
With this device, the image pictures with and without contrast are matched according to feature regions. Because a feature region can contain more feature information, matching the first image picture and the second image picture based on feature regions yields a more accurate matching result, which enables a precise affine transformation of the first image picture according to the matching result and ensures that the transformed first image picture and the second image picture are approximately consistent in size and in the distribution positions of the image content.
Optionally, extracting a feature region from the first image picture and the second image picture includes:
screening characteristic points meeting preset gray scale conditions from the first image picture and the second image picture;
and determining a characteristic region containing the characteristic points according to the characteristic points.
Optionally, screening feature points meeting a preset gray scale condition from the first image picture and the second image picture includes:
selecting gray extreme points from the first image picture and the second image picture;
and determining characteristic points meeting preset gray scale conditions according to the gray scale extreme points.
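One plausible way to select gray extreme points is a strict 3×3-neighbourhood comparison, sketched below. The patent does not fix a particular operator, so this simple detector is an assumption for illustration only.

```python
import numpy as np

def gray_extreme_points(img):
    """Return (row, col) positions whose gray value is a strict maximum or
    strict minimum over its 3x3 neighbourhood -- candidate feature points."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    points = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2].copy()
            center = patch[1, 1]
            patch[1, 1] = np.nan                       # exclude the center itself
            neighbours = patch[~np.isnan(patch)]       # the 8 surrounding pixels
            if center > neighbours.max() or center < neighbours.min():
                points.append((r, c))
    return points

# Hypothetical picture with a single bright spot.
img = np.zeros((5, 5))
img[2, 2] = 10
print(gray_extreme_points(img))  # [(2, 2)]
```

In practice, such extreme points would then be filtered further (e.g. by contrast) to obtain the feature points that meet the preset gray scale condition.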
Optionally, matching the feature region of the first image picture with the feature region of the second image picture according to a preset feature condition includes:
Matching the gray level change value of the characteristic region of the first image picture and the gray level change value of the characteristic region of the second image picture according to preset characteristic conditions;
and determining the matched characteristic areas in the first image picture and the second image picture according to the matching result.
Optionally, before matching the gray scale variation value of the feature area of the first image picture and the gray scale variation value of the feature area of the second image picture according to a preset feature condition, the method further includes:
for each characteristic point, determining a target direction which takes the characteristic point as a starting point and meets a preset gray level change condition;
matching the gray level change value of the characteristic region of the first image picture and the gray level change value of the characteristic region of the second image picture according to a preset characteristic condition, including:
processing each characteristic region, determining at least one gray characteristic vector of the characteristic region along a target direction corresponding to the characteristic point contained in the characteristic region, and determining at least one gray characteristic vector of the characteristic region along a direction perpendicular to the target direction;
and matching each gray scale feature vector of the feature region in the first image picture with each gray scale feature vector of the feature region of the second image picture according to the similarity of the gray scale feature vectors.
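Matching gray feature vectors by similarity could look like the following sketch. It uses cosine similarity and a greedy nearest match with a hypothetical threshold; the patent does not specify the similarity measure, so both are assumptions.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two gray feature vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_regions(desc_first, desc_second, threshold=0.9):
    """Greedy matching: each feature region of the first picture is paired
    with the most similar region of the second picture, provided the
    similarity exceeds the (hypothetical) threshold."""
    matches = []
    for i, d1 in enumerate(desc_first):
        sims = [cosine_sim(d1, d2) for d2 in desc_second]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            matches.append((i, j, sims[j]))
    return matches

# Hypothetical descriptors (one gray feature vector per region).
first_descs = [[1.0, 0.0, 2.0], [0.0, 3.0, 1.0]]
second_descs = [[0.0, 3.1, 0.9], [1.1, 0.0, 2.0]]
print([(i, j) for i, j, _ in match_regions(first_descs, second_descs)])
# [(0, 1), (1, 0)]
```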
Optionally, the processing each feature area, determining at least one gray feature vector of the feature area along a target direction corresponding to a feature point included in the feature area, and determining at least one gray feature vector of the feature area along a direction perpendicular to the target direction includes:
dividing each characteristic region according to the target direction and the vertical direction of the target direction to obtain a subarea of each characteristic region;
determining, for each subarea, at least one gray feature vector along the target direction of the feature region to which the subarea belongs, and determining at least one gray feature vector along the direction perpendicular to that target direction.
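The subarea division and the per-direction gray feature vectors can be sketched as below, under the simplifying assumption that the target direction is aligned with the image rows (in practice the region would first be rotated into the target direction); the region contents are hypothetical.

```python
import numpy as np

def subregion_gray_vectors(region, n_sub=2):
    """Split a feature region into n_sub x n_sub subareas (rows follow the
    target direction, columns its perpendicular -- an axis-aligned
    simplification) and return, per subarea, a mean gray profile along
    each of the two directions."""
    region = np.asarray(region, dtype=float)
    vectors = []
    for rows in np.array_split(region, n_sub, axis=0):
        for sub in np.array_split(rows, n_sub, axis=1):
            along = sub.mean(axis=0)          # gray feature vector along the target direction
            perpendicular = sub.mean(axis=1)  # gray feature vector along the perpendicular
            vectors.append((along, perpendicular))
    return vectors

region = np.arange(16, dtype=float).reshape(4, 4)  # hypothetical 4x4 feature region
vecs = subregion_gray_vectors(region)
print(len(vecs))   # 4 subareas
print(vecs[0][0])  # [2. 3.] -- column means of the top-left 2x2 subarea
```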
Optionally, performing affine transformation on the first image picture according to the matched feature region in the first image picture and the second image picture includes:
determining transformation parameters according to the position deviation of feature points contained in the matched feature areas between the first image picture and the second image picture;
and transforming the first image picture according to the transformation parameters.
Optionally, determining the transformation parameter according to the position deviation of the feature point included in the matched feature region between the first image picture and the second image picture includes:
And determining an affine matrix according to the distribution positions of the feature points contained in the matched feature areas between the first image picture and the second image picture.
Optionally, the method further comprises:
and subtracting the transformed first image picture from the second image picture to obtain an image picture containing the target object.
An embodiment of the present specification provides an electronic device, including at least one processor and a memory, the memory storing a program configured to be executed by the at least one processor to perform the following steps:
determining a first image picture and a second image picture of the same human body part, wherein one of the first image picture and the second image picture is an image picture when a target object in the human body part is not subjected to development marking, and the other of the first image picture and the second image picture is an image picture when the target object in the human body part is subjected to development marking;
extracting a characteristic region from the first image picture and the second image picture;
matching the characteristic region of the first image picture with the characteristic region of the second image picture according to a preset characteristic condition;
and performing affine transformation on the first image picture according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same.
The present embodiments provide a program for use with an electronic device, the program being executable by a processor to perform the steps of:
determining a first image picture and a second image picture of the same human body part, wherein one of the first image picture and the second image picture is an image picture when a target object in the human body part is not subjected to development marking, and the other of the first image picture and the second image picture is an image picture when the target object in the human body part is subjected to development marking;
extracting a characteristic region from the first image picture and the second image picture;
matching the characteristic region of the first image picture with the characteristic region of the second image picture according to a preset characteristic condition;
and performing affine transformation on the first image picture according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (16)

1. A method for processing medical image pictures, comprising:
determining a first image picture and a second image picture of the same human body part, wherein one of the first image picture and the second image picture is an image picture when a target object in the human body part is not subjected to development marking, and the other of the first image picture and the second image picture is an image picture when the target object in the human body part is subjected to development marking;
Extracting a characteristic region from the first image picture and the second image picture; screening characteristic points meeting preset gray scale conditions from the first image picture and the second image picture; determining a characteristic region containing the characteristic points according to the characteristic points;
matching the characteristic region of the first image picture with the characteristic region of the second image picture according to a preset characteristic condition, including: processing each characteristic region, determining at least one gray characteristic vector of the characteristic region along a target direction corresponding to the characteristic point contained in the characteristic region, and determining at least one gray characteristic vector of the characteristic region along a direction perpendicular to the target direction; matching each gray scale feature vector of a feature region in the first image picture with each gray scale feature vector of a feature region of the second image picture according to the similarity of the gray scale feature vectors, wherein the target direction is determined according to the gray scale change information in the feature region;
and performing affine transformation on the first image picture according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same.
2. The method of claim 1, wherein screening feature points meeting a preset gray scale condition from the first image picture and the second image picture comprises:
selecting gray extreme points from the first image picture and the second image picture;
and determining characteristic points meeting preset gray scale conditions according to the gray scale extreme points.
3. The method as recited in claim 1, further comprising:
and determining the target direction which takes the characteristic points as starting points and meets the preset gray level change condition for each characteristic point.
4. The method of claim 1, wherein the processing each feature region to determine at least one gray feature vector of the feature region along a target direction corresponding to a feature point included in the feature region, and determining at least one gray feature vector of the feature region along a direction perpendicular to the target direction, comprises:
dividing each characteristic region according to the target direction and the vertical direction of the target direction to obtain a subarea of each characteristic region;
determining, for each subarea, at least one gray feature vector along the target direction of the feature region to which the subarea belongs, and determining at least one gray feature vector along the direction perpendicular to that target direction.
5. The method of claim 1, wherein affine transforming the first image picture based on the matching feature regions in the first image picture and the second image picture comprises:
determining transformation parameters according to the position deviation of feature points contained in the matched feature areas between the first image picture and the second image picture;
and transforming the first image picture according to the transformation parameters.
6. The method of claim 5, wherein determining the transformation parameters from the positional deviations of feature points contained in the matched feature region between the first image picture and the second image picture comprises:
and determining an affine matrix according to the distribution positions of the feature points in the matched feature areas between the first image picture and the second image picture.
7. The method of claim 1, wherein the method further comprises:
and subtracting the transformed first image picture from the second image picture to obtain an image picture containing the target object.
8. An apparatus for processing medical image pictures, comprising:
The image determining module is used for determining a first image picture and a second image picture of the same human body part, wherein one image picture of the first image picture and the second image picture is an image picture when a target object in the human body part is not subjected to development marking, and the other image picture of the first image picture and the second image picture is an image picture when the target object in the human body part is subjected to development marking;
the feature extraction module is used for extracting feature areas from the first image picture and the second image picture; screening characteristic points meeting preset gray scale conditions from the first image picture and the second image picture; determining a characteristic region containing the characteristic points according to the characteristic points;
the matching module matches the characteristic region of the first image picture with the characteristic region of the second image picture according to a preset characteristic condition, and comprises the following steps: processing each characteristic region, determining at least one gray characteristic vector of the characteristic region along a target direction corresponding to the characteristic point contained in the characteristic region, and determining at least one gray characteristic vector of the characteristic region along a direction perpendicular to the target direction; matching each gray scale feature vector of a feature region in the first image picture with each gray scale feature vector of a feature region of the second image picture according to the similarity of the gray scale feature vectors, wherein the target direction is determined according to the gray scale change information in the feature region;
and the transformation module performs affine transformation on the first image picture according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same.
9. The apparatus of claim 8, wherein screening feature points from the first image picture and the second image picture that meet a preset gray scale condition comprises:
selecting gray extreme points from the first image picture and the second image picture;
and determining characteristic points meeting preset gray scale conditions according to the gray scale extreme points.
10. The apparatus as recited in claim 8, further comprising:
and determining the target direction which takes the characteristic points as starting points and meets the preset gray level change condition for each characteristic point.
11. The apparatus of claim 8, wherein the processing each feature region to determine at least one gray feature vector of the feature region along a target direction corresponding to a feature point included in the feature region, and determining at least one gray feature vector of the feature region along a direction perpendicular to the target direction, comprises:
Dividing each characteristic region according to the target direction and the vertical direction of the target direction to obtain a subarea of each characteristic region;
determining, for each subarea, at least one gray feature vector along the target direction of the feature region to which the subarea belongs, and determining at least one gray feature vector along the direction perpendicular to that target direction.
12. The apparatus of claim 8, wherein affine transforming the first image picture based on the matched feature regions in the first image picture and the second image picture comprises:
determining transformation parameters according to the position deviation of feature points contained in the matched feature areas between the first image picture and the second image picture;
and transforming the first image picture according to the transformation parameters.
13. The apparatus of claim 12, wherein determining the transformation parameters from the positional deviations of feature points contained in the matched feature region between the first image picture and the second image picture comprises:
and determining an affine matrix according to the distribution positions of the feature points contained in the matched feature areas between the first image picture and the second image picture.
14. The apparatus of claim 8, wherein the apparatus further comprises:
and subtracting the transformed first image picture from the second image picture to obtain an image picture containing the target object.
15. An electronic device comprising at least one processor and a memory, the memory storing a program configured to be executed by the at least one processor to:
determining a first image picture and a second image picture of the same human body part, wherein one of the first image picture and the second image picture is an image picture when a target object in the human body part is not subjected to development marking, and the other of the first image picture and the second image picture is an image picture when the target object in the human body part is subjected to development marking;
extracting a characteristic region from the first image picture and the second image picture; screening characteristic points meeting preset gray scale conditions from the first image picture and the second image picture; determining a characteristic region containing the characteristic points according to the characteristic points;
matching the characteristic region of the first image picture with the characteristic region of the second image picture according to a preset characteristic condition, including: processing each characteristic region, determining at least one gray characteristic vector of the characteristic region along a target direction corresponding to the characteristic point contained in the characteristic region, and determining at least one gray characteristic vector of the characteristic region along a direction perpendicular to the target direction; matching each gray scale feature vector of a feature region in the first image picture with each gray scale feature vector of a feature region of the second image picture according to the similarity of the gray scale feature vectors, wherein the target direction is determined according to the gray scale change information in the feature region;
and performing affine transformation on the first image picture according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same.
16. A computer readable storage medium including a program for use with an electronic device, the program being executable by a processor to perform the steps of:
determining a first image picture and a second image picture of the same human body part, wherein one of the first image picture and the second image picture is an image picture when a target object in the human body part is not subjected to development marking, and the other of the first image picture and the second image picture is an image picture when the target object in the human body part is subjected to development marking;
extracting a characteristic region from the first image picture and the second image picture; screening characteristic points meeting preset gray scale conditions from the first image picture and the second image picture; determining a characteristic region containing the characteristic points according to the characteristic points;
matching the characteristic region of the first image picture with the characteristic region of the second image picture according to a preset characteristic condition, including: processing each characteristic region, determining at least one gray characteristic vector of the characteristic region along a target direction corresponding to the characteristic point contained in the characteristic region, and determining at least one gray characteristic vector of the characteristic region along a direction perpendicular to the target direction; matching each gray scale feature vector of a feature region in the first image picture with each gray scale feature vector of a feature region of the second image picture according to the similarity of the gray scale feature vectors, wherein the target direction is determined according to the gray scale change information in the feature region;
and performing affine transformation on the first image picture according to the matched feature regions between the first image picture and the second image picture, so that the distribution positions of the matched feature regions in the transformed first image picture and the second image picture are the same.
CN201811530621.5A 2018-12-14 2018-12-14 Medical image picture processing method, device and apparatus Active CN109598751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811530621.5A CN109598751B (en) 2018-12-14 2018-12-14 Medical image picture processing method, device and apparatus

Publications (2)

Publication Number Publication Date
CN109598751A CN109598751A (en) 2019-04-09
CN109598751B true CN109598751B (en) 2023-05-23

Family

ID=65962466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811530621.5A Active CN109598751B (en) 2018-12-14 2018-12-14 Medical image picture processing method, device and apparatus

Country Status (1)

Country Link
CN (1) CN109598751B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109712121B (en) * 2018-12-14 2023-05-23 复旦大学附属华山医院 Medical image picture processing method, device and apparatus

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101455576A (en) * 2007-12-12 2009-06-17 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic wide-scene imaging method, device and system
CN103337077A (en) * 2013-07-01 2013-10-02 武汉大学 Registration method for visible light and infrared images based on multi-scale segmentation and SIFT (Scale Invariant Feature Transform)
CN103714547A (en) * 2013-12-30 2014-04-09 北京理工大学 Image registration method combined with edge regions and cross-correlation
CN104778679A (en) * 2014-12-22 2015-07-15 中国科学院遥感与数字地球研究所 Gaofen-1 satellite data-based control point graphic element rapid-matching method
CN106257497A (en) * 2016-07-27 2016-12-28 中测高科(北京)测绘工程技术有限责任公司 The matching process of a kind of image same place and device
CN107967477A (en) * 2017-12-12 2018-04-27 福州大学 A kind of improved SIFT feature joint matching process
CN108241645A (en) * 2016-12-23 2018-07-03 腾讯科技(深圳)有限公司 Image processing method and device
CN109712121A (en) * 2018-12-14 2019-05-03 复旦大学附属华山医院 A kind of method, equipment and the device of the processing of medical image picture

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4435867B2 (en) * 2008-06-02 2010-03-24 パナソニック株式会社 Image processing apparatus, method, computer program, and viewpoint conversion image generation apparatus for generating normal line information
DE102015208929B3 (en) * 2015-05-13 2016-06-09 Friedrich-Alexander-Universität Erlangen-Nürnberg Method for 2D-3D registration, computing device and computer program

Similar Documents

Publication Publication Date Title
US11576578B2 (en) Systems and methods for scanning a patient in an imaging system
US20150117740A1 (en) Method and apparatus for metal artifact elimination in a medical image
US20010048757A1 (en) Method and apparatus for matching positions of images
CN111291736B (en) Image correction method and device and medical equipment
CN112017231B (en) Monocular camera-based human body weight identification method, monocular camera-based human body weight identification device and storage medium
EP2854646A1 (en) Methods and apparatus for estimating the position and orientation of an implant using a mobile device
CN106456084B (en) Ultrasonic imaging apparatus
CN111340749B (en) Image quality detection method, device, equipment and storage medium
US20160335523A1 (en) Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image
CN114520894B (en) Projection area determining method and device, projection equipment and readable storage medium
CN109712121B (en) Medical image picture processing method, device and apparatus
JP2019533232A (en) Pattern detection
CN112464829B (en) Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system
Schweitzer et al. Aspects of 3D surface scanner performance for post-mortem skin documentation in forensic medicine using rigid benchmark objects
CN109598751B (en) Medical image picture processing method, device and apparatus
CN111723836A (en) Image similarity calculation method and device, electronic equipment and storage medium
Lundin et al. Automatic registration of 2D histological sections to 3D microCT volumes: Trabecular bone
JP2016209399A (en) Image processing device and method, and computer program
CN112017148A (en) Method and device for extracting single-joint skeleton contour
CN115984203A (en) Eyeball protrusion measuring method, system, terminal and medium
JP2005270635A (en) Method for processing image and device for processing image
CN112581460B (en) Scanning planning method, device, computer equipment and storage medium
US8340384B2 (en) Method and apparatus for cerebral hemorrhage segmentation
CN114092399A (en) Focus marking method, device, electronic equipment and readable storage medium
CN113936055A (en) Train brake shoe residual thickness measuring method, system, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200813

Address after: Room 615, comprehensive south building, room 9, No. 1699, Zuchongzhi South Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Qianglian Zhichuang (Suzhou) Medical Technology Co.,Ltd.

Address before: Room 2621, Building 2, Sobao Business Center, 16 South Third Ring West Road, Fengtai District, Beijing 100071

Applicant before: UNION STRONG (BEIJING) TECHNOLOGY Co.,Ltd.

GR01 Patent grant