CN112184683A - Ultrasonic image identification method, terminal equipment and storage medium - Google Patents

Ultrasonic image identification method, terminal equipment and storage medium

Info

Publication number
CN112184683A
CN112184683A
Authority
CN
China
Prior art keywords
probability
ultrasound image
map
candidate
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011073633.7A
Other languages
Chinese (zh)
Inventor
林泽慧
杨鑫
高睿
李浩铭
庄加华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Duying Medical Technology Co ltd
Original Assignee
Shenzhen Duying Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Duying Medical Technology Co ltd filed Critical Shenzhen Duying Medical Technology Co ltd
Priority to CN202011073633.7A priority Critical patent/CN112184683A/en
Publication of CN112184683A publication Critical patent/CN112184683A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Abstract

The invention discloses an ultrasound image identification method, a terminal device and a storage medium. The method includes: obtaining a probability mapping feature map corresponding to an ultrasound image to be identified; determining a plurality of candidate feature points corresponding to the ultrasound image based on the probability mapping feature map; and updating the probability mapping feature map based on the candidate feature points, then determining a target region corresponding to the ultrasound image based on the updated probability mapping feature map. By obtaining the probability mapping feature map of the ultrasound image, selecting candidate feature points from it, correcting those points to update the map, and finally segmenting the ovarian follicles in the ultrasound image, the method improves the accuracy of the target region determined from the probability mapping feature map, so that a doctor can better observe and analyze the development of ovarian follicles.

Description

Ultrasonic image identification method, terminal equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an ultrasound image recognition method, a terminal device, and a storage medium.
Background
At present, in clinical practice, doctors often use ultrasound to observe the size and volume of a woman's ovaries during the ovulation cycle and the development of her follicles, in order to predict when mature oocytes will form and thereby improve the success rate of reproduction. However, during statistical analysis, doctors must frequently and repeatedly count follicles by hand and measure quantities such as each follicle's maximum diameter and the diameter perpendicular to it. Moreover, follicle monitoring is continuous and performed in real time, so human error introduced by the differing manual marking and measurement habits of different doctors can affect the final monitoring result. In addition, judging whether an anechoic structure in the image is a follicle at all, and whether it is a single follicle or two adjacent follicles, is a difficult point in manual labeling, especially for physicians with limited clinical experience.
Disclosure of Invention
In view of the shortcomings in the prior art, the technical problem to be solved by the present invention is to provide an ultrasound image identification method, a terminal device and a storage medium.
In order to solve the above problem, a first aspect of the present invention provides an ultrasound image identification method, including:
acquiring a probability mapping feature map corresponding to an ultrasound image to be identified;
determining a plurality of candidate feature points corresponding to the ultrasound image based on the probability mapping feature map;
updating the probability mapping feature map based on the candidate feature points, and determining a target region corresponding to the ultrasound image based on the updated probability mapping feature map, wherein the target region includes an ovary and/or a follicle.
In the ultrasound image identification method, the ultrasound image is a two-dimensional or three-dimensional ultrasound image, and the ultrasound image includes a follicle and/or an ovary.
In the ultrasound image identification method, the acquiring of the probability mapping feature map corresponding to the ultrasound image to be identified specifically includes:
inputting the ultrasound image into a trained segmentation network model, and determining the probability mapping feature map corresponding to the ultrasound image through the segmentation network model.
In the ultrasound image identification method, the determining of a plurality of candidate feature points corresponding to the ultrasound image based on the probability mapping feature map specifically includes:
obtaining the confidence of the object class corresponding to each pixel in the probability mapping feature map;
and determining a plurality of candidate feature points corresponding to the ultrasound image based on the confidence of the object class corresponding to each pixel.
In the ultrasound image identification method, the difference between the confidence of the object class corresponding to each of the plurality of candidate feature points and a preset confidence satisfies a preset condition.
The method for identifying an ultrasound image, wherein the updating the probability mapping feature map based on the candidate feature points specifically includes:
acquiring a feature map corresponding to the ultrasound image, wherein the feature map includes image detail information of the ultrasound image;
determining a feature vector corresponding to each candidate feature point based on the feature map, and determining a probability vector corresponding to each candidate feature point based on the coarse probability mapping feature map;
and updating the probability mapping feature map based on the feature vector and the probability vector corresponding to each candidate feature point.
The method for identifying an ultrasound image, wherein the updating the probability mapping feature map based on the feature vector and the probability vector specifically includes:
determining fusion vectors corresponding to the candidate feature points based on the feature vectors and the probability vectors corresponding to the candidate feature points;
determining a target probability vector corresponding to each candidate feature point based on an iterative subdivision algorithm and a fusion vector corresponding to each candidate feature point;
and updating the probability vector corresponding to each candidate feature point with the target probability vector corresponding to that candidate feature point, so as to update the probability mapping feature map.
The method for identifying the ultrasonic image, wherein the image size of the feature map is equal to the image size of the probability mapping feature map.
A second aspect of the present application provides a terminal device, including a memory, a processor, and an ultrasound image identification program stored in the memory and executable on the processor, wherein the ultrasound image identification program, when executed by the processor, implements the steps of the ultrasound image identification method described in any one of the above.
A third aspect of the present application provides a computer-readable storage medium, wherein the storage medium stores an ultrasound image identification program, and the ultrasound image identification program, when executed by a processor, implements the steps of the ultrasound image identification method described in any one of the above.
Advantageous effects: the present invention provides an ultrasound image identification method, a terminal device and a storage medium. The method includes: obtaining a probability mapping feature map corresponding to an ultrasound image to be identified; determining a plurality of candidate feature points corresponding to the ultrasound image based on the probability mapping feature map; and updating the probability mapping feature map based on the candidate feature points, then determining a target region corresponding to the ultrasound image based on the updated probability mapping feature map. By obtaining the probability mapping feature map of the ultrasound image, determining the candidate feature points from it, and correcting those points to update the map, the accuracy of the target region determined from the probability mapping feature map can be improved, so that a doctor can better observe and analyze the development of ovarian follicles.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the method for identifying an ultrasound image of the present invention;
fig. 2 is a schematic operating environment diagram of a terminal device according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The inventor finds that, at present, in clinical practice, doctors often use ultrasound to observe the size and volume of a woman's ovaries during the ovulation cycle and the development of her follicles, in order to predict when mature oocytes will form and thereby improve the success rate of reproduction. However, during statistical analysis, doctors must frequently and repeatedly count follicles by hand and measure quantities such as each follicle's maximum diameter and the diameter perpendicular to it. Moreover, follicle monitoring is continuous and performed in real time, so human error introduced by the differing manual marking and measurement habits of different doctors can affect the final monitoring result. In addition, judging whether an anechoic structure in the image is a follicle at all, and whether it is a single follicle or two adjacent follicles, is a difficult point in manual labeling, especially for physicians with limited clinical experience.
In order to solve the above problem, in the embodiments of the present application, a probability mapping feature map corresponding to an ultrasound image to be identified is obtained; a plurality of candidate feature points corresponding to the ultrasound image are determined based on the probability mapping feature map; and the probability mapping feature map is updated based on the candidate feature points, after which a target region corresponding to the ultrasound image is determined based on the updated probability mapping feature map. By obtaining the probability mapping feature map of the ultrasound image, determining the candidate feature points from it, and correcting those points to update the map, the accuracy of the target region determined from the probability mapping feature map can be improved, so that a doctor can better observe and analyze the development of ovarian follicles.
The following further describes the content of the application by describing the embodiments with reference to the attached drawings.
The embodiment provides an ultrasound image identification method, which can be executed by an ultrasound image identification apparatus, which can be implemented by software and applied to a terminal device such as an ultrasound device, a smart phone, a tablet computer or a personal digital assistant. Referring to fig. 1, the present embodiment provides a method for identifying an ultrasound image, the method including:
and S10, acquiring a probability mapping characteristic map corresponding to the ultrasonic image to be identified.
Specifically, the ultrasound image may be acquired by an ultrasound device, acquired by an external device and sent to a terminal device running the ultrasound image identification method, or stored locally by the terminal device running the ultrasound image identification method. In one implementation of this embodiment, the ultrasound image may be a two-dimensional/three-dimensional ultrasound image, wherein the ultrasound image includes ovaries and/or follicles. It will be appreciated that the ultrasound image may be a two/three dimensional ultrasound image carrying the ovary and/or follicle.
The probability mapping feature map reflects, for each pixel in the ultrasound image, the probability that the object at that pixel belongs to each object class, where the object class is one of several preset target classes used to label the objects in the ultrasound image. For example, the several target classes include a follicle class and an ovary class, and the object class of the target region is one of the follicle class and the ovary class.
In the probability mapping feature map, the element value at each pixel position in each channel is a probability value representing the probability that the pixel belongs to the candidate class corresponding to that channel, where the candidate class is one class in the set formed by the several target classes plus the background class. The number of channels of the probability mapping feature map is therefore determined by the target classes: it equals the number of target classes plus one for the background. For example, if the target classes are ovary and follicle, the probability mapping feature map has 3 channels; the map is a 3-dimensional array for a two-dimensional ultrasound image and a 4-dimensional array for a three-dimensional image.
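As a hypothetical numerical sketch of the channel layout described above (NumPy stands in for the real segmentation network; the 4×4 map size, random seed, and channel ordering are illustrative assumptions), the per-pixel object class and its confidence can be read off a 3-channel probability map as follows:

```python
import numpy as np

# Hypothetical 3-channel map for a 2-D ultrasound image:
# channel count = number of target classes (ovary, follicle) + background = 3.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 4))            # (channels, H, W) raw scores

# Softmax over the channel axis turns raw scores into per-pixel probabilities,
# so each pixel's values across channels sum to 1.
exp = np.exp(logits - logits.max(axis=0, keepdims=True))
prob_map = exp / exp.sum(axis=0, keepdims=True)

# Each pixel's object class is the channel with the highest probability,
# and that probability serves as the confidence of the class.
class_map = prob_map.argmax(axis=0)            # (H, W) class indices
confidence = prob_map.max(axis=0)              # (H, W) confidence values

assert np.allclose(prob_map.sum(axis=0), 1.0)
```

For a three-dimensional image the same logic applies with a (channels, D, H, W) array, matching the 4-dimensional map mentioned in the text.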
In one implementation manner of this embodiment, the identification method of the ultrasound image may be applied to a segmentation network model, which is used for identifying a target region corresponding to an object in the ultrasound image and an object class, where the object class is included in several target classes. It is understood that the image recognition model is configured with the several object classes and labels objects in the ultrasound image with the several object classes. Accordingly, the probability mapping feature map can be obtained by segmenting the network model, and accordingly, the obtaining of the probability mapping feature map corresponding to the ultrasound image to be identified specifically includes:
inputting the ultrasound image into a trained segmentation network model, and determining the probability mapping feature map corresponding to the ultrasound image through the segmentation network model.
Specifically, the segmentation network model is a trained network model whose input is the ultrasound image and whose output yields the probability mapping feature map corresponding to the ultrasound image. The image size of the probability mapping feature map determined by the segmentation network model may or may not be the same as that of the ultrasound image; when it differs, the probability mapping feature map may be upsampled (for example, by linear interpolation), and the upsampled map is used as the probability mapping feature map corresponding to the ultrasound image. In a specific implementation of this embodiment, the image size of the probability mapping feature map determined by the segmentation network model is half the image size of the ultrasound image; for example, the ultrasound image is 256 × 256 and the probability mapping feature map is 128 × 128.
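The upsampling step above can be sketched with a minimal bilinear interpolation in NumPy. This is an illustrative stand-in, not the patent's implementation: the 4×4 input represents a 128×128 map and the scale factor of 2 represents the 128 → 256 enlargement.

```python
import numpy as np

def upsample_bilinear(img, scale):
    """Bilinearly upsample a 2-D array by an integer scale factor."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * scale)      # target-row source coordinates
    xs = np.linspace(0, w - 1, w * scale)      # target-col source coordinates
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                     # vertical interpolation weights
    wx = (xs - x0)[None, :]                     # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

coarse = np.arange(16, dtype=float).reshape(4, 4)  # stands in for a 128x128 map
fine = upsample_bilinear(coarse, 2)                # stands in for 256x256
assert fine.shape == (8, 8)
```

In practice each channel of the probability mapping feature map would be upsampled the same way so that the map matches the ultrasound image size.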
S20: determining a plurality of candidate feature points corresponding to the ultrasound image based on the probability mapping feature map.
Specifically, each of the candidate feature points corresponds to a pixel position in the probability mapping feature map, and different candidate feature points correspond to different pixel positions. For example, the candidate feature points include candidate feature point a and candidate feature point b, where the pixel position corresponding to candidate feature point a is (100, 100, 50) and the pixel position corresponding to candidate feature point b is (200, 200, 100). In addition, each candidate feature point is a probability vector comprising a plurality of elements; the number of elements in each probability vector equals the number of channels of the probability mapping feature map, the elements correspond one-to-one to the probability values at the point's pixel position across the channels, and each element equals its corresponding probability value. For example, for three-dimensional data, the candidate feature points include candidate feature point a at pixel position (100, 100, 50); the probability mapping feature map has 3 channels, and at pixel position (100, 100, 50) the probability value of channel 0 is 0.1, that of channel 1 is 0.2, and that of channel 2 is 0.7, so the probability vector corresponding to candidate feature point a is (0.1, 0.2, 0.7).
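The worked example above amounts to reading one channel-wise slice of the map at a pixel position. A small sketch (the 8×8 two-dimensional map and the position (5, 2) are illustrative assumptions; the same readout works for three-dimensional positions like (100, 100, 50)):

```python
import numpy as np

# Hypothetical 3-channel probability map for a 2-D image: (channels, H, W).
prob_map = np.zeros((3, 8, 8))
prob_map[:, 5, 2] = [0.1, 0.2, 0.7]   # probabilities at pixel (row=5, col=2)

def probability_vector(pmap, pos):
    """Read the channel-wise probability vector at one pixel position."""
    return pmap[(slice(None),) + tuple(pos)]

vec = probability_vector(prob_map, (5, 2))
assert vec.tolist() == [0.1, 0.2, 0.7]
assert len(vec) == prob_map.shape[0]   # vector length = channel count
```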
Further, after the probability vector corresponding to each candidate feature point is determined, the object class corresponding to the candidate feature point and the confidence of that object class may be determined based on the probability vector. The confidence reflects how credible the object class assigned to the candidate feature point is: a higher confidence indicates a more credible object class, and a lower confidence a less credible one. In one implementation of this embodiment, the confidence may be any value between 0 and 1, inclusive.
In an implementation manner of this embodiment, the determining of a plurality of candidate feature points corresponding to the ultrasound image based on the probability mapping feature map specifically includes:
obtaining the confidence of the object class corresponding to each pixel in the probability mapping feature map;
and determining a plurality of candidate feature points corresponding to the ultrasound image based on the confidence of the object class corresponding to each pixel.
In particular, the object class is one of the several target classes; for example, if the target classes include ovary and follicle, the object class may be either ovary or follicle. The confidence is the credibility of the pixel belonging to that object class. After the confidence of the object class corresponding to each pixel is obtained, a plurality of candidate feature points can be selected from the probability mapping feature map based on the confidence. In the ultrasound image, the boundary between the ovary and a follicle, or between two adjacent follicles, may overlap, which introduces errors into the extent of the target region. Therefore, when selecting the candidate feature points, they may be selected from the boundary points of the target region; pixels within the target region whose confidence satisfies a preset condition may be selected as candidate feature points; or pixels whose confidence satisfies the preset condition may be selected from among the boundary points of the target region.
In an implementation manner of this embodiment, the difference between the confidence of the object class corresponding to each of the plurality of candidate feature points and a preset confidence satisfies a preset condition, where the preset confidence is a preset criterion used for selecting candidate feature points, for example 0.5 or 0.6, and the preset condition may be that the difference between the confidence and the preset confidence is smaller than a preset difference, for example 0.05. Of course, in practical applications, the candidate feature points may also be obtained in other ways: for example, the boundary pixels of the target region are obtained first, and the candidate feature points are randomly selected from them such that the number of boundary points between any two adjacent candidate feature points satisfies a preset requirement. This improves the uniformity of the candidate feature points and thereby the accuracy of the target region determined from the corrected probability mapping feature map. The preset requirement may be that the number of boundary points between any two adjacent candidate feature points is smaller than a preset number, for example 3 or 4.
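A hedged sketch of the confidence-based selection rule described above (the preset confidence 0.5 and the difference threshold 0.05 come from the examples in the text; the 16×16 size and random confidences are illustrative assumptions):

```python
import numpy as np

# Hypothetical per-pixel confidence map; in the method this would come from
# the probability mapping feature map produced by the segmentation network.
rng = np.random.default_rng(1)
confidence = rng.uniform(size=(16, 16))

# Select pixels whose confidence is close to the preset value, i.e. the
# uncertain pixels that tend to lie on ambiguous follicle/ovary boundaries.
preset, max_diff = 0.5, 0.05
mask = np.abs(confidence - preset) < max_diff
candidate_points = np.argwhere(mask)          # (N, 2) pixel positions

# Every selected point satisfies the preset condition.
assert np.all(np.abs(confidence[mask] - preset) < max_diff)
```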
S30: updating the probability mapping feature map based on the candidate feature points, and determining a target region corresponding to the ultrasound image based on the updated probability mapping feature map, wherein the target region includes an ovary and/or a follicle.
Specifically, updating the probability mapping feature map based on the candidate feature points means updating the object class corresponding to each candidate feature point and the confidence of that object class, and replacing the candidate feature points in the probability mapping feature map with the updated candidate feature points. Therefore, when the probability mapping feature map is updated based on the candidate feature points, each candidate feature point needs to be updated. Correspondingly, the updating of the probability mapping feature map based on the candidate feature points specifically includes:
acquiring a feature map corresponding to the ultrasound image;
determining a feature vector corresponding to each candidate feature point based on the feature map, and determining a probability vector corresponding to each candidate feature point based on the coarse probability mapping feature map;
and updating the probability mapping feature map based on the feature vector and the probability vector corresponding to each candidate feature point.
In particular, the feature map comprises image detail information of the ultrasound image, which feature map can be determined by segmenting a network model. That is, after the ultrasound image is input into the segmentation network model, the segmentation network model can determine a feature map corresponding to the ultrasound image. In an implementation manner of this embodiment, the feature vector may be composed of features extracted from one of the feature maps, or may be obtained by connecting a plurality of pyramid features of a pyramid network. In a specific implementation manner of this embodiment, the network layer corresponding to the feature map is adjacent to the network layer corresponding to the probability mapping feature map, and the network layer corresponding to the feature map is located before the network layer corresponding to the probability mapping feature map, so that the feature map learns the image detail information.
The feature vector may be constructed based on values of elements in each channel of the feature map at pixel locations in the feature map. The image size of the feature map may or may not be the same as the image size of the ultrasound image, for example, the image size of the feature map is half the image size of the ultrasound image. In addition, when the image size of the feature map is different from the image size of the ultrasound image, the image size of the feature map may be made the same as the image size of the ultrasound image by up-sampling (e.g., linear interpolation, etc.) the feature map. Therefore, the image size of the feature map is the same as that of the probability mapping feature map, so that for each candidate feature point, a feature vector exists in the feature map, and the pixel position corresponding to the feature vector is the same as the pixel position corresponding to the candidate feature point. For example, the candidate feature points of the three-dimensional data include a candidate feature point a, and the pixel position corresponding to the candidate feature point a is (100, 100, 50), then there is a feature vector 1 in the feature map, and the pixel position corresponding to the feature vector 1 is (100, 100, 50).
In one implementation of the embodiment, in order to increase the attention paid to the object in the ultrasound image, attention weights may be configured along the two dimensions of space and channel when the feature map is acquired, and the features may be adjusted by multiplying the attention weights with the feature map. When the method for identifying an ultrasound image provided by this embodiment is applied to a segmentation network model, the segmentation network model may be configured with an attention mechanism module embedded in a basic convolution module of the model, so that attention to the object in the ultrasound image can be realized without increasing the complexity of the model. In addition, the attention mechanism module may comprise a channel attention unit and a spatial attention unit, which may be arranged in parallel or combined in series. In a specific implementation of this embodiment, the channel attention unit and the spatial attention unit are connected in series, with the channel attention unit preceding the spatial attention unit.
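A minimal numpy sketch of the serial arrangement (channel unit first, then spatial unit) is shown below. It is a deliberate simplification: the channel weights come from global average pooling and the spatial weights from the channel-wise mean, each squashed by a sigmoid, whereas a practical module (e.g. CBAM-style) wraps learned layers around these statistics.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_then_spatial_attention(x):
    """Serial attention over a (C, H, W) feature map, channel unit first."""
    c_w = sigmoid(x.mean(axis=(1, 2)))   # (C,) channel attention weights
    x = x * c_w[:, None, None]           # reweight channels
    s_w = sigmoid(x.mean(axis=0))        # (H, W) spatial attention weights
    return x * s_w[None, :, :]           # reweight positions

x = np.random.rand(8, 16, 16)            # toy non-negative feature map
y = channel_then_spatial_attention(x)
```

Because each sigmoid weight lies in (0, 1), every activation is scaled down, never amplified, in this toy version.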
In an implementation manner of this embodiment, the updating the probability mapping feature map based on the feature vector and the probability vector specifically includes:
determining fusion vectors corresponding to the candidate feature points based on the feature vectors and the probability vectors corresponding to the candidate feature points;
determining a target probability vector corresponding to each candidate feature point based on an iterative subdivision algorithm and a fusion vector corresponding to each candidate feature point;
and updating the probability vectors corresponding to the candidate characteristic points by adopting the target probability vectors corresponding to the candidate characteristic points so as to update the probability mapping characteristic map.
Specifically, the fusion vector is determined based on the feature vector and the probability vector; for example, the feature vector and the probability vector may be spliced (concatenated) to obtain the fusion vector. The fusion vector thus includes the shape information of the object (e.g., ovary, follicle, etc.) carried by the feature vector and the context information (e.g., the shape, texture, position and size of the ovarian follicle) provided by the probability vector. By learning from such a feature-rich representation at both coarse and fine granularity, and adaptively re-identifying the candidate feature points, the boundary prediction of the ultrasound image can be improved.
The iterative subdivision algorithm may be executed as follows: for the fusion vector of each selected candidate feature point, the label of the candidate feature point is re-predicted through a classification network. The classification network implements the iterative subdivision algorithm and may comprise a multi-layer perceptron. Of course, in practical applications, other methods may also be used to re-predict the labels of the candidate feature points, for example, a Support Vector Machine (SVM) or naive Bayes classification. In this embodiment, the iterative subdivision algorithm is a multi-layer perceptron whose weights are shared across, and learned from, the features of all candidate feature points (all regions).
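The splicing-and-re-prediction steps above can be sketched as follows. The MLP weights here are random stand-ins for trained parameters, and the layer sizes and three-class setup (e.g. background / ovary / follicle) are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_repredict(fused, w1, b1, w2, b2):
    """Tiny shared-weight MLP that re-predicts class probabilities
    for one fused vector (feature vector spliced with probability vector)."""
    h = np.maximum(fused @ w1 + b1, 0.0)  # hidden ReLU layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over target classes

C_FEAT, N_CLASS = 16, 3                   # assumed sizes for illustration
w1 = rng.normal(size=(C_FEAT + N_CLASS, 32)); b1 = np.zeros(32)
w2 = rng.normal(size=(32, N_CLASS));          b2 = np.zeros(N_CLASS)

feat = rng.normal(size=C_FEAT)            # feature vector from the feature map
prob = np.array([0.45, 0.40, 0.15])       # coarse probability vector
fused = np.concatenate([feat, prob])      # fusion by splicing
target = mlp_repredict(fused, w1, b1, w2, b2)  # written back into the map
```

The resulting target probability vector replaces the coarse probability vector at that candidate point, which is exactly the update described in the three steps above.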
In an implementation manner of this embodiment, when the ultrasound image identification method is applied to a segmentation network model, the segmentation network model may be configured with a feature extraction module for extracting the feature map of the ultrasound image. It can be understood that the input of the segmentation network model is the ultrasound image; the model determines the feature map and the probability mapping feature map corresponding to the ultrasound image, selects a plurality of candidate feature points in the probability mapping feature map, and updates those candidate feature points based on the feature map so as to update the probability mapping feature map. In addition, the attention mechanism module is configured within the feature extraction module, enabling the feature extraction module to apply channel attention and spatial attention to the extracted feature map. In an implementation manner of this embodiment, the feature extraction module uses a two-dimensional segmentation convolutional neural network for two-dimensional images and a three-dimensional segmentation convolutional neural network for three-dimensional images, so that it can capture the temporal and spatial feature information of two-dimensional/three-dimensional data; it also uses skip connection layers and dilated (atrous) convolution, so that more image characterization features can be retained.
The training process of the segmentation network model can be as follows:
acquiring a training sample set, wherein the training sample set comprises a plurality of training ultrasonic images;
and training a preset network model based on the ultrasonic image in the training sample set to obtain the segmentation network model.
Specifically, the segmentation network model is used to execute the ultrasound image identification method according to the above embodiment, and the model structure of the preset network model is the same as the model structure of the segmentation network model. The model parameters of the preset network model are different from those of the segmentation network model, the model parameters of the preset network model are initial model parameters, and the model parameters of the segmentation network model are model parameters obtained through training of training ultrasonic images in the training sample set.
Each ultrasound image in the training sample set is a two-dimensional/three-dimensional ultrasound image comprising an ovary and/or a follicle; optionally, the ultrasound image comprises both an ovary and a follicle. In addition, each ultrasound image is labeled, where the labeled ultrasound image includes annotation information comprising the shape, size and position of the ovary and of each follicle in the training ultrasound image. In practical applications, the shape, size and position of the ovary and of each follicle can be segmented and labeled from different planar angles with manual labeling software, where the ovary and the follicles are labeled with different segmentation maps (for example, a different color for each part); adding the segmentation map of the background, a total of 3 segmentation maps are obtained.
After the training sample set is obtained, the training ultrasound images in the training sample set can be preprocessed so that the segmentation network model can learn the characteristics of the segmentation target. The preprocessing may include cropping, resampling, dimension adjustment, normalization, and the like. Normalization here removes the mean of the data in the training ultrasound image to center it; centered data better conforms to the expected data distribution, which increases the generalization ability of the model and visually weakens the background of the ultrasound image. The preprocessing methods described above are only examples; the preprocessing used in the present invention is not limited to them.
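The centering step described above can be sketched in numpy; dividing by the standard deviation as well (zero-mean, unit-variance standardization) is a common companion step and is included here as an assumption:

```python
import numpy as np

def standardize(img):
    """Remove the mean of one training image to center it; the division
    by the standard deviation is an additional, conventional step."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)  # epsilon avoids /0

img = np.array([[0.0, 50.0], [100.0, 150.0]])  # toy intensity patch
z = standardize(img)
```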
In one implementation of the embodiment, it is difficult to acquire large-scale image labels because annotating ultrasound images is laborious, yet large-scale data benefits the training of the network model: it improves the network's feature learning of the target, significantly improves model prediction accuracy, and helps avoid overfitting. Therefore, after the training sample set is acquired, augmentation processing can be performed on it, which may include random scaling, random rotation, random mirroring and the like on a cross section. This improves model prediction accuracy, prevents the model from overfitting during training, improves edge recognition of the ovarian follicles, and distinguishes effusion from follicles more effectively.
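A minimal sketch of the cross-section augmentations follows; random scaling is omitted to keep the example dependency-free, and restricting rotations to multiples of 90 degrees is a simplifying assumption.

```python
import numpy as np

def augment(img, rng):
    """Random mirror flip and 90-degree rotation of one cross section."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)             # random mirror
    img = np.rot90(img, k=int(rng.integers(0, 4)))  # random rotation
    return img

rng = np.random.default_rng(42)
img = np.arange(16.0).reshape(4, 4)
aug = augment(img, rng)
```

Flips and right-angle rotations only rearrange pixels, so the intensity histogram of the image is preserved exactly.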
In an implementation manner of this embodiment, after the training ultrasound image is input to the preset network model, a prediction area corresponding to the training ultrasound image is determined, and the preset network model is trained based on the prediction area and an annotation area corresponding to the training ultrasound image. The process of determining the prediction region is the same as the process of determining the target region of the ultrasound image in the above embodiment, for example, the probability mapping feature map of the training ultrasound image is determined first, a plurality of candidate feature points are determined based on the probability mapping feature map, then the probability mapping feature map is updated based on the candidate feature points, and the prediction region is determined based on the updated probability mapping feature map. The selection process of the preset network model for selecting the candidate feature points is different from the selection process of the segmentation network model for selecting the candidate feature points.
In an implementation manner of this embodiment, the selection process by which the preset network model selects a plurality of candidate feature points is: obtain the candidate probability value of each target category for each pixel point in the probability mapping feature map; then determine a plurality of candidate feature points corresponding to the ultrasound image based on these candidate probability values, where, for each candidate feature point, there exist at least two candidate probability values whose difference satisfies a preset condition.
In another implementation manner of this embodiment, the preset network model selects the plurality of candidate feature points by randomly sampling kN candidate points (where k > 1, for example k = 3) in the probability mapping feature map and selecting βN of them (β ∈ [0, 1]; β = 0.7 in this patent's setting) as candidate feature points, while the remaining (1 − β)N candidate points are drawn by uniform sampling. Here, N is a positive integer whose value can be determined according to actual requirements.
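The training-time sampling strategy can be sketched as follows. Interpreting the βN selected points as the most ambiguous ones (smallest gap between the two largest class probabilities, echoing the difference condition of the previous paragraph) is an assumption, since the text does not fix the selection criterion explicitly.

```python
import numpy as np

def sample_candidate_points(prob_map, N, k=3, beta=0.7, rng=None):
    """Draw k*N random positions, keep the beta*N most ambiguous ones,
    and fill the remaining (1-beta)*N uniformly at random.

    prob_map: (num_classes, H, W) probability mapping feature map.
    Returns flat pixel indices into H*W (the uniform part may, in this
    toy version, overlap the hard part).
    """
    rng = rng if rng is not None else np.random.default_rng()
    _, H, W = prob_map.shape
    flat = prob_map.reshape(prob_map.shape[0], -1)
    cand = rng.choice(H * W, size=k * N, replace=False)  # k*N random points
    top2 = np.sort(flat[:, cand], axis=0)[-2:]           # two largest probs
    gap = top2[1] - top2[0]                              # small gap = ambiguous
    n_hard = int(beta * N)
    hard = cand[np.argsort(gap)[:n_hard]]                # beta*N hardest points
    rest = rng.choice(H * W, size=N - n_hard, replace=False)
    return np.concatenate([hard, rest])

probs = np.random.default_rng(1).dirichlet(np.ones(3), size=64).T.reshape(3, 8, 8)
pts = sample_candidate_points(probs, N=10, rng=np.random.default_rng(2))
```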
In this embodiment, when the object region is determined, a plurality of candidate feature points are obtained and re-predicted by the iterative subdivision algorithm based on the feature map and the probability mapping feature map, which drives the segmentation network to identify boundary points better. In addition, the preset network model adds contrastive learning to the training process; to improve the confidence of the selected candidate feature points, a point-to-point contrastive loss is adopted, established by maximizing the divergence between feature vectors of different categories and minimizing the divergence between feature vectors that belong to the same category but have different confidences. The network involves the losses of several different tasks; with adaptive learning, it automatically adjusts the loss-function weight of each task, balancing the feature learning rates and optimization strengths of the different tasks. In addition, fine-grained analysis and an attention mechanism module are added to the preset network model, which removes interfering characteristics, increases the network's attention to effective features, learns more discriminative features, and improves the network's re-prediction of the candidate feature points.
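The patent does not give the exact formulation of its point-to-point contrastive loss; the margin-based pairwise loss below only illustrates the pull-together / push-apart idea it describes, and the margin value is an arbitrary choice.

```python
import numpy as np

def point_contrastive_loss(feats, labels, margin=1.0):
    """Toy pairwise contrastive loss: same-class feature vectors are
    pulled together, different classes pushed at least `margin` apart."""
    loss, pairs = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = float(np.linalg.norm(feats[i] - feats[j]))
            if labels[i] == labels[j]:
                loss += d ** 2                      # minimize intra-class divergence
            else:
                loss += max(0.0, margin - d) ** 2   # maximize inter-class divergence
            pairs += 1
    return loss / pairs

feats = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
labels = [0, 0, 1]
l = point_contrastive_loss(feats, labels)
```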
Based on this, after a two-dimensional/three-dimensional ultrasound image containing ovarian follicles is input into the trained segmentation network, the network can distinguish the common background from the ovarian follicles and perform a preliminary segmentation of the shape contours of the ovaries and follicles in the ultrasound image data by learning their effective characteristics (effective characteristics are features that distinguish the ovarian follicles from the image background, such as the shape, position and semantic information of the follicles). After the shape-contour information of the ovary or follicles has been segmented, the segmentation network model obtains the plurality of candidate feature points of this embodiment and updates the probability mapping feature map based on them, which improves the accuracy of the target area determined by the model.
In an implementation of this embodiment, reinforcement learning is added to the training process of the preset network model, which deeply supervises the feature learning of the intermediate feature layers; adaptive learning is added at the same time, so that the weights of the loss functions of the different tasks are adjusted automatically, taking into account the uncertainty of the covariance between the tasks. Meanwhile, since the multi-layer perceptron predicts a segmentation label for each point, it is trained with a task-specific segmentation loss. In one implementation of the embodiment, the cross-entropy losses of the multi-layer perceptron over all selected fuzzy points are summed, and this sum is defined as the fuzzy-point loss function. Thus, the segmentation network loss function consists of four parts: the coarse segmentation loss function L_S, the point re-prediction loss function L_P, the deep supervision loss function L_D, and the contrastive loss function L_c. The total loss function is calculated as:
Loss = w1 × L_S + w2 × L_P + w3 × L_D + w4 × L_c
where w1, w2, w3 and w4 are weights, for example 0.4, 0.3, 0.01 and 0.3, respectively.
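The total loss is then a plain weighted sum; the sketch below uses the example weights from the text, though in practice the network adjusts them adaptively per task.

```python
def total_loss(l_s, l_p, l_d, l_c, w=(0.4, 0.3, 0.01, 0.3)):
    """Weighted sum of coarse-segmentation, point re-prediction,
    deep-supervision and contrastive losses (example weights)."""
    return w[0] * l_s + w[1] * l_p + w[2] * l_d + w[3] * l_c

loss = total_loss(1.0, 0.5, 0.2, 0.8)  # toy partial-loss values
```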
In addition, after the target area is determined, post-processing may be performed on it, including erosion, removal of the smallest connected components, dilation, and the like. In an ultrasound image the ovary may be in close contact with a follicle, the boundary may be blurred, and the shapes of individual follicles may be irregular due to squeezing, making identification difficult. Therefore, to better handle the problem of adherent follicles, this example applies erosion, removal of the smallest connected components, and dilation to the ultrasound image, improving the accuracy of edge identification. The post-processing of ovarian-follicle ultrasound image data in the present invention is not limited to the forms described above.
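The erosion and dilation part of this post-processing can be sketched in pure numpy with a 3x3 structuring element; removal of the smallest connected components is omitted for brevity (in practice a connected-component labeling routine would be used). Note that opening alone already removes one-pixel speckle.

```python
import numpy as np

def _shifts(p, H, W):
    """All nine 3x3-neighbourhood shifts of a padded mask."""
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yield p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]

def binary_open(mask):
    """Erosion followed by dilation with a 3x3 structuring element."""
    H, W = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    eroded = np.logical_and.reduce(list(_shifts(p, H, W)))   # erosion
    p = np.pad(eroded, 1, constant_values=False)
    return np.logical_or.reduce(list(_shifts(p, H, W)))      # dilation

m = np.zeros((7, 7), dtype=bool)
m[1:4, 1:4] = True   # a solid 3x3 follicle blob
m[5, 5] = True       # a one-pixel speckle artefact
opened = binary_open(m)  # blob survives, speckle is removed
```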
In an implementation manner of this embodiment, after the target area is obtained, the target area is reconstructed based on the three-dimensional curved surface, and the target area is rendered.
Specifically, in order to obtain a better visualization effect and enhance the contrast between the ovarian follicles and the background, for two-dimensional data the segmentation result is rendered based on contour information, while for three-dimensional data the ovarian follicles are reconstructed from the segmentation result as a three-dimensional curved surface and the result is rendered; for example, the segmentation network produces clear boundary, shape and size predictions for ovaries and follicles at different scales, so that doctors can better observe and analyze the development of the ovarian follicles. The reconstruction and rendering of the ovarian-follicle segmentation results of ultrasound image data in the present invention is not limited to the representations above.
The invention automatically segments the shapes of the follicles and ovaries using a deep learning algorithm; on this basis, it detects fuzzy points on the ovarian-follicle boundary using a correction algorithm (iterative subdivision algorithm, fine-grained analysis, attention mechanism module, reinforcement learning, contrastive learning, etc.), performs detailed prediction of the fuzzy boundary, removes abnormal values from the detection result, and further improves the accuracy and smoothness of follicle and ovary edge detection.
According to the invention, the input two-dimensional/three-dimensional ultrasound image data of ovarian follicles is automatically segmented, a correction algorithm is added to improve edge recognition of the follicles, and the result is reconstructed, making it convenient for doctors to observe and analyze the development of the ovarian follicles. The invention can support further clinical tasks, such as parameter measurement, acquisition of the optimal section, prediction of follicle maturation time, and counting of follicles. Compared with traditional machine learning or traditional image processing methods, the deep-learning automatic segmentation model for ovarian follicle ultrasound data attends to the spatial and temporal information of the data, so its segmentation is more accurate and its generalization ability is stronger. The correction algorithm and rendering method added in the invention further improve the edge identification of ovarian follicles, better address the problem of adherent follicles, and facilitate observation and analysis by doctors.
Furthermore, the algorithm is not limited to re-prediction based on fuzzy points, fine-grained analysis, the attention mechanism module, contrastive learning and reinforcement learning; alternative schemes include adversarial learning, iterative refinement algorithms, and the like.
Further, as shown in fig. 2, based on the above method for identifying an ultrasound image, the present invention also provides a terminal device, which includes a processor 10, a memory 20 and a display 30. Fig. 2 shows only some of the components of the terminal device, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. In other embodiments, the memory 20 may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal device. Further, the memory 20 may also include both an internal storage unit and an external storage device of the terminal device. The memory 20 is used for storing application software installed in the terminal device and various data, such as program codes of the installed terminal device. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a two-dimensional/three-dimensional image processing program 40, and the two-dimensional/three-dimensional image processing program 40 can be executed by the processor 10 to implement the method for identifying an ultrasound image in the present application.
The processor 10 may be a Central Processing Unit (CPU), microprocessor or other data Processing chip in some embodiments, and is used for executing program codes stored in the memory 20 or Processing data, such as performing the identification method of the ultrasound image.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like in some embodiments. The display 30 is used for displaying information at the terminal device and for displaying a visual user interface. The components 10 to 30 of the terminal device communicate with each other via a system bus.
In one embodiment, when the processor 10 executes the two-dimensional/three-dimensional image processing program 40 in the memory 20, the steps of the ultrasound image identification method described in the above embodiments are implemented.
The present invention also provides a computer-readable storage medium, wherein the storage medium stores a two-dimensional/three-dimensional image processing program which, when executed by a processor, implements the steps of the method for identifying an ultrasound image described above.
In summary, the present embodiment provides an ultrasound image identification method, a terminal device, and a storage medium, where the method obtains a probability mapping feature map corresponding to an ultrasound image to be identified; determining and acquiring a plurality of candidate feature points corresponding to the ultrasonic image based on the probability mapping feature map; and updating the probability mapping characteristic map based on the candidate characteristic points, and determining a target area corresponding to the ultrasonic image based on the updated probability mapping characteristic map. According to the method, the probability mapping characteristic map of the ultrasonic image is obtained, the candidate characteristic points are determined based on the probability mapping characteristic map, and the candidate characteristic points are corrected to update the probability mapping characteristic map, so that the accuracy of the target area determined based on the probability mapping characteristic map can be improved, and a doctor can better observe and analyze the development condition of ovarian follicles.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware (such as a processor, a controller, etc.), and the program may be stored in a computer readable storage medium, and when executed, the program may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for identifying an ultrasound image, the method comprising:
acquiring a probability mapping characteristic map corresponding to an ultrasonic image to be identified;
determining and acquiring a plurality of candidate feature points corresponding to the ultrasonic image based on the probability mapping feature map;
updating the probability mapping feature map based on the candidate feature points, and determining a target region corresponding to the ultrasound image based on the updated probability mapping feature map, wherein the target region includes an ovary and/or a follicle.
2. The method for identifying an ultrasound image according to claim 1, wherein the ultrasound image is a two-dimensional/three-dimensional ultrasound image, and the ultrasound image includes follicles and/or ovaries.
3. The method for identifying an ultrasound image according to claim 1, wherein the obtaining of the probability mapping feature map corresponding to the ultrasound image to be identified specifically includes:
inputting the ultrasonic image into a trained segmentation network model, and determining a probability mapping characteristic map corresponding to the ultrasonic image through the segmentation network model.
4. The method for identifying an ultrasound image according to claim 1, wherein the determining to obtain a plurality of candidate feature points corresponding to the ultrasound image based on the probability mapping feature map specifically includes:
obtaining the confidence of the object type corresponding to each pixel point in the probability mapping characteristic diagram;
and determining a plurality of candidate feature points corresponding to the ultrasonic image based on the confidence of the object type corresponding to each pixel point.
5. The method for identifying an ultrasound image according to claim 4, wherein a difference between the confidence of the object class corresponding to each of the plurality of candidate feature points and the preset confidence satisfies a preset condition.
6. The method according to claim 1, wherein the updating the probability mapping feature map based on the candidate feature points specifically comprises:
acquiring a characteristic map corresponding to the ultrasonic image, wherein the characteristic map comprises image detail information of the ultrasonic image;
determining feature vectors corresponding to the candidate feature points based on the feature map, and determining probability vectors corresponding to the candidate feature points based on the rough probability feature map;
and updating the probability mapping characteristic graph based on the characteristic vector and the probability vector corresponding to each candidate characteristic point.
7. The method for identifying an ultrasound image according to claim 6, wherein the updating the probability map feature map based on the feature vector and the probability vector specifically includes:
determining fusion vectors corresponding to the candidate feature points based on the feature vectors and the probability vectors corresponding to the candidate feature points;
determining a target probability vector corresponding to each candidate feature point based on an iterative subdivision algorithm and a fusion vector corresponding to each candidate feature point;
and updating the probability vectors corresponding to the candidate characteristic points by adopting the target probability vectors corresponding to the candidate characteristic points so as to update the probability mapping characteristic map.
8. The method of claim 6, wherein the image size of the feature map is equal to the image size of the probability map feature map.
9. A terminal device, characterized in that the terminal device comprises: memory, a processor and an ultrasound image identification program stored on the memory and executable on the processor, the ultrasound image identification program, when executed by the processor, implementing the steps of the method of ultrasound image identification according to any of claims 1-8.
10. A computer-readable medium, in which the storage medium stores an ultrasound image identification program, which when executed by a processor implements the steps of the method for identifying an ultrasound image according to any one of claims 1 to 8.
CN202011073633.7A 2020-10-09 2020-10-09 Ultrasonic image identification method, terminal equipment and storage medium Pending CN112184683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011073633.7A CN112184683A (en) 2020-10-09 2020-10-09 Ultrasonic image identification method, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112184683A true CN112184683A (en) 2021-01-05

Family

ID=73948643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011073633.7A Pending CN112184683A (en) 2020-10-09 2020-10-09 Ultrasonic image identification method, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112184683A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784782A (en) * 2021-01-28 2021-05-11 上海理工大学 Three-dimensional object identification method based on multi-view double-attention network
WO2023077816A1 (en) * 2021-11-03 2023-05-11 中国华能集团清洁能源技术研究院有限公司 Boundary-optimized remote sensing image semantic segmentation method and apparatus, and device and medium


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102438529A (en) * 2008-12-22 2012-05-02 美的派特恩公司 Method and system of automated detection of lesions in medical images
CN103854028A (en) * 2008-12-22 2014-06-11 赛利恩影像股份有限公司 Method and system of automated detection of lesions in medical images
CN110753517A (en) * 2017-05-11 2020-02-04 韦拉索恩股份有限公司 Ultrasound scanning based on probability mapping
CN109308488A (en) * 2018-08-30 2019-02-05 深圳大学 Breast ultrasound image processing apparatus, method, computer equipment and storage medium
US20200205785A1 (en) * 2018-12-27 2020-07-02 Samsung Medison Co., Ltd. Ultrasound diagnosis apparatus and method of operating the same
CN110322399A (en) * 2019-07-05 2019-10-11 深圳开立生物医疗科技股份有限公司 A kind of ultrasound image method of adjustment, system, equipment and computer storage medium
CN110570350A (en) * 2019-09-11 2019-12-13 深圳开立生物医疗科技股份有限公司 two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium


Similar Documents

Publication Publication Date Title
CN111369576B (en) Training method of image segmentation model, image segmentation method, device and equipment
JP7026826B2 (en) Image processing methods, electronic devices and storage media
US20140314299A1 (en) System and Method for Multiplexed Biomarker Quantitation Using Single Cell Segmentation on Sequentially Stained Tissue
CN109389129A (en) Image processing method, electronic device, and storage medium
CN112164082A (en) Method for segmenting multi-modal MR brain image based on 3D convolutional neural network
CN113763314A (en) System and method for image segmentation and classification using depth-reduced convolutional neural networks
CN110889437B (en) Image processing method and device, electronic equipment and storage medium
CN112184683A (en) Ultrasonic image identification method, terminal equipment and storage medium
JP2020166809A (en) System, apparatus, and learning method for training models
CN113936011A (en) CT image lung lobe image segmentation system based on attention mechanism
Deshpande et al. Improved Otsu and Kapur approach for white blood cells segmentation based on LebTLBO optimization for the detection of Leukemia.
CN114494215A (en) Transformer-based thyroid nodule detection method
CN114758137A (en) Ultrasonic image segmentation method and device and computer readable storage medium
Tran et al. Fully convolutional neural network with attention gate and fuzzy active contour model for skin lesion segmentation
CN111199228A (en) License plate positioning method and device
CN116721289A (en) Cervical OCT image classification method and system based on self-supervision cluster contrast learning
CN116433704A (en) Cell nucleus segmentation method based on central point and related equipment
CN112801238B (en) Image classification method and device, electronic equipment and storage medium
CN112862786B (en) CTA image data processing method, device and storage medium
US20220222816A1 (en) Medical image analysis system and method for identification of lesions
CN114004795A (en) Breast nodule segmentation method and related device
Liu et al. Automatic Lung Parenchyma Segmentation of CT Images Based on Matrix Grey Incidence.
Tarando et al. Cascade of convolutional neural networks for lung texture classification: overcoming ontological overlapping
CN112862785A (en) CTA image data identification method, device and storage medium
Luisa Durán et al. A perceptual similarity method by pairwise comparison in a medical image case

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination