CN116433976A - Image processing method, device, equipment and storage medium

Info

Publication number: CN116433976A
Application number: CN202310403275.9A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: images, image, segmented, target object, brain
Inventors: 彭芸 (Peng Yun), 李焕杰 (Li Huanjie), 胡迪 (Hu Di), 樊鑫 (Fan Xin)
Current assignee: Dalian University of Technology; Beijing Children's Hospital
Original assignee: Dalian University of Technology; Beijing Children's Hospital
Application filed by Dalian University of Technology and Beijing Children's Hospital
Priority application: CN202310403275.9A

Classifications

    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06N 3/0464: Computing arrangements based on biological models; neural networks; convolutional networks [CNN, ConvNet]
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/0014: Image analysis; biomedical image inspection using an image reference approach
    • G06T 7/33: Image analysis; determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/40: Extraction of image or video features
    • G06V 10/54: Extraction of image or video features relating to texture
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 2207/30016: Indexing scheme for image analysis or image enhancement; subject of image; biomedical image processing; brain

Abstract

An embodiment of the invention provides an image processing method, apparatus, device, and storage medium. The method includes: acquiring a plurality of images containing a target object and registering them to obtain a plurality of registered images; segmenting the target object in the registered images according to a first template image corresponding to a target reference object to obtain a plurality of segmented images, each of which includes a plurality of segmented regions; extracting features from each segmented region in the plurality of segmented images to obtain a plurality of image features for each segmented region; fusing the image features that correspond to the same segmented region across the segmented images to obtain a plurality of fused image features for each segmented region; and determining a recognition result for the target object from the fused image features. This improves the quality and effect of image processing.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
The 21st century is an era of information. Images, as the visual basis of human perception of the world, are an important means by which humans acquire, express, and transmit information. Digital image processing, that is, the technology of processing, analyzing, and understanding images by computer to achieve a desired result, is mainly applied in fields such as face recognition, image reconstruction, machine vision, and medical imaging.
During image processing, the feature extraction of an image directly affects the processing result. Existing image feature extraction techniques place high requirements on the resolution and signal-to-noise ratio of the image to be processed. In practical applications, however, images with sufficiently high signal-to-noise ratio and resolution cannot be obtained directly in many scenarios, which degrades the processing result.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, apparatus, device, and storage medium for improving the quality and effect of image processing, so that the recognition result of a target object is more accurate.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a plurality of images containing a target object;
registering the plurality of images to obtain a plurality of registered images;
segmenting the target object in the registered images according to a first template image corresponding to a target reference object to obtain a plurality of segmented images, wherein the target reference object is of the same type as the target object but different in shape, the first template image includes a segmentation result of the target reference object, and each segmented image includes a plurality of segmented regions;
extracting features from each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region in the plurality of segmented images;
fusing the plurality of image features corresponding to the same segmented region in the plurality of segmented images to obtain a plurality of fused image features corresponding to each segmented region; and
determining a recognition result corresponding to the target object according to the plurality of fused image features.
In a second aspect, an embodiment of the present invention provides an image processing apparatus including:
an acquisition module, configured to acquire a plurality of images containing a target object;
a registration module, configured to register the plurality of images to obtain a plurality of registered images;
a segmentation module, configured to segment the target object in the registered images according to a first template image corresponding to a target reference object to obtain a plurality of segmented images, wherein the target reference object is of the same type as the target object but different in shape, the first template image includes a segmentation result of the target reference object, and each segmented image includes a plurality of segmented regions;
a feature extraction module, configured to extract features from each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region;
a fusion module, configured to fuse the plurality of image features corresponding to the same segmented region in the plurality of segmented images to obtain a plurality of fused image features corresponding to each segmented region; and
a determining module, configured to determine a recognition result corresponding to the target object according to the plurality of fused image features.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory is configured to store one or more computer instructions, and the one or more computer instructions implement the image processing method in the first aspect when executed by the processor. The electronic device may also include a communication interface for communicating with other devices or communication networks.
In a fourth aspect, embodiments of the present invention provide a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to at least implement the image processing method according to the first aspect.
According to the image processing method provided by the embodiment of the invention, a plurality of images containing the target object are first acquired and then registered to obtain a plurality of registered images, so that the target object has the same spatial position in all of the registered images. The registered images are then segmented directly according to the segmentation result of the target reference object in the first template image to obtain a plurality of segmented images, where the target reference object is of the same type as the target object but different in shape, and each segmented image includes a plurality of segmented regions. Features are then extracted from each segmented region in the plurality of segmented images to obtain a plurality of image features for each segmented region, and the image features of the same segmented region across the segmented images are fused to obtain a plurality of fused image features. Finally, the recognition result corresponding to the target object is determined according to the plurality of fused image features.
In the above scheme, features are extracted from each of the segmented regions in the plurality of segmented images containing the target object, the image features of the same segmented region across the segmented images are fused to obtain the fused image features, and the recognition result corresponding to the target object is determined from the fused image features. Processing a plurality of images containing the target object effectively avoids the influence that the quality of any single image would have on feature extraction, and extracting features per segmented region makes the extracted image features more accurate. This ensures the quality and effect of the image processing, makes the recognition result of the target object more accurate, reduces the difficulty of processing the images, and allows the image processing method to be widely applied to various application scenarios.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention, and other drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of performing registration processing on a plurality of images to obtain a plurality of registered images according to an embodiment of the present invention;
fig. 3 is a flowchart of extracting features from each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region, according to an embodiment of the present invention;
fig. 4 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an image processing method applied in a medical scene according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device corresponding to the image processing apparatus provided in the embodiment shown in fig. 6.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "plurality" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to identifying," depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is identified" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is identified" or "in response to identifying (the stated condition or event)," depending on the context.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a product or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in a product or system comprising that element.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. In the case where there is no conflict between the embodiments, the following embodiments and features in the embodiments may be combined with each other. In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
Image processing technology processes image information by computer so as to satisfy human visual psychology and the needs of practical applications. Image processing mainly includes image digitization, image enhancement and restoration, image data encoding, image segmentation, and image recognition. Image recognition extracts features from an image, classifies the image according to its geometric and texture features, and performs structural analysis of the whole image. An image is typically preprocessed before recognition, including filtering of noise and interference, contrast enhancement, edge enhancement, geometric correction, and the like. Image recognition has an extremely wide range of applications, such as industrial automatic control systems, fingerprint recognition systems, cancer cell recognition in medicine, and functional structure recognition.
However, when extracting features from an image and recognizing the image, existing image recognition techniques place high requirements on the resolution and signal-to-noise ratio of the image to be processed. In practical applications, images with high signal-to-noise ratio and high resolution cannot be obtained directly in many scenarios, so low-quality images are preprocessed and operations such as image recognition are then performed on the processed images. Preprocessing, however, may alter certain image features, thereby affecting the processing quality and effect.
Before describing the image processing method provided by the embodiments of the present invention in detail, application scenarios of image processing are briefly introduced:
In practice, image processing is widely applied, for example, in the fields of face recognition, image reconstruction, and medical imaging. Taking face recognition as an example, an image containing a face is acquired, and the face can be regarded as the target object. The image processing apparatus may then process the acquired image containing the target object to obtain the recognition result corresponding to the target object.
Taking medical images as another example, a medical image may depict various functional structures, such as a human brain structure, liver structure, or lung structure, each of which can be regarded as a target object. The image processing apparatus may then process the acquired image containing the target object to obtain the recognition result corresponding to the target object.
Based on the above description, fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. The image processing method provided by the embodiment of the invention can be executed by an image processing apparatus, which may specifically be an electronic device with data processing capability, such as a server. As shown in fig. 1, the method includes the following steps:
s101, acquiring a plurality of images containing a target object.
The images may be face images, and may also include magnetic resonance imaging (MRI) of the human body, computed tomography (CT) of any part of the human body, and the like. To ensure image quality and reduce the influence of any single image's quality on the recognition result of the target object, a plurality of images of the target object may be acquired, for example within a short period of time.
Because the target object in an image may be blurred by factors such as the scanning angle during acquisition, a plurality of images of the same target object can be acquired directly so that the image containing the target object can be processed better. Optionally, the plurality of images containing the target object may be two-dimensional images; if the target object is a three-dimensional object, the plurality of images may also be the multiple image layers corresponding to the target object, so that the complete three-dimensional target object is described by the multiple layers.
When processing the images, a plurality of images containing the target object input by a user may be acquired directly, or the images may be obtained through communication connections with one or more data acquisition devices, so that the images containing the target object collected by the data acquisition devices are received over those connections. The data acquisition device may be, for example, a camera or a superconducting magnetic resonance scanner.
S102, registering the plurality of images to obtain a plurality of registered images.
In practice, when images of the target object are acquired, the position of the target object may deviate across the acquired images. Therefore, to facilitate subsequent processing, after the plurality of images containing the target object are acquired, they are registered to obtain a plurality of registered images in which the target object occupies the same spatial position.
Image registration refers to selecting one image from the plurality of images as a reference image (i.e., a first image) and aligning the other images to it. In practical applications, the first image may be, for example, the earliest acquired of the plurality of images, although it may be any one of them. Assuming that three images, image F1, image F2, and image F3, are acquired in sequence and image F1 is taken as the first image, image F2 and image F3 need to be aligned with image F1 respectively to obtain registered images F2' and F3'.
In an alternative embodiment, an image corresponding to the target reference object may be acquired, and the images containing the target object are then registered against it, so that the registered images are spatially consistent with the target reference object and the target object in the registered images can later be segmented directly based on the segmentation result of the target reference object. The target reference object is of the same type as the target object but differs in shape; that is, the two are the same kind of object with similar but not identical shapes, and a reference object whose shape is close to that of the target object may be selected as the target reference object. For example, the target object may be the brain structure of a 6-month-old infant, and the target reference object may be a standard brain structure of a 24-month-old infant: both are infant brain structures, i.e., the same type, but the brain shapes differ between the two ages.
A template image database may include two-dimensional brain template images corresponding to different ages, and any brain template may be selected as the image corresponding to the target reference object. Alternatively, a brain template image of similar shape may be selected from the template image database according to the shape of the target object and used as the image corresponding to the target reference object.
In addition, registering the plurality of images specifically means spatially transforming them so that the target object in each transformed registered image occupies the same spatial position as in the other images, i.e., any two registered images are spatially consistent. Conventional registration methods based on gray-level statistics or on image features may be used, and registration based on deep learning algorithms may also be used to register the plurality of images.
S103, segmenting the target object in the registered images according to a first template image corresponding to the target reference object to obtain a plurality of segmented images, wherein the target reference object is of the same type as the target object but different in shape, the first template image includes a segmentation result of the target reference object, and each segmented image includes a plurality of segmented regions.
To improve the accuracy of the segmentation, when the plurality of registered images are segmented, the first template image corresponding to the target reference object can be acquired, and the target object in the registered images is segmented based on the segmentation result of the target reference object in the first template image to obtain the plurality of segmented images. Each segmented image includes a plurality of segmented regions, which may be the regions that deserve particular attention when recognizing the target object, or several important regions of the target object. In practical applications, the number of segmented regions can be determined according to the characteristics of the target object to be processed. For example, for infant brain structures, the important brain areas may be divided into 20 regions; when the infant brain structure in the registered images is segmented, 20 segmented regions are obtained, i.e., the resulting segmented image contains 20 brain areas.
Optionally, the first template image may be a three-dimensional image. In practice the target object may be three-dimensional, so a three-dimensional template image of the target reference object is acquired, and segmenting the registered images based on the segmentation result of the target reference object in the three-dimensional template image yields a more accurate result. For example, if the target object is the brain structure of a 6-month-old infant, the target reference object may be the brain structure of a 24-month-old infant, the first template image may be a three-dimensional brain segmentation map of a 24-month-old infant, and the registered images containing the target object are segmented based on this three-dimensional brain segmentation map to obtain the brain segmentation map corresponding to the target object.
As for acquiring the first template image, the image processing apparatus may optionally select an image at random from an established template image database as the template image, or select an image of similar shape from the database according to the shape of the target object and use it as the first template image. In the above example, the template image database may include brain segmentation maps corresponding to different ages, and any one of them could be selected as the first template image. However, since infant brain images differ in shape across months of age, in order to ensure segmentation accuracy, an infant brain segmentation map close in age to the target object can be selected from the template image database as the first template image.
S104, extracting features from each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region.
To improve the accuracy of feature extraction and to recognize the target object accurately from the extracted features, feature extraction is performed with each segmented region of each segmented image as the unit: a plurality of image features are extracted within each segmented region, so that the important regions of the target object are characterized separately and the extracted features of each region can be analyzed in a targeted manner, which improves the quality of the image processing. The image features may include intensity features and texture features, and the number of extracted features may be set according to actual requirements, which is not limited here.
For example, if the target object is the brain structure of a 6-month-old infant, the plurality of images containing the target object may be magnetic resonance images of that brain structure, each consisting of multiple layers of two-dimensional images; in other words, the plurality of images here may be multi-layer two-dimensional images. Each two-dimensional image is segmented into a plurality of brain areas, features are extracted per brain area, and 47 image features can be extracted from each brain area.
Feature extraction from the segmented images can be implemented in various ways. For example, a convolutional neural network may be used to extract image features from the segmented images, or the intensity features and texture features of the segmented images may be computed directly to obtain the plurality of image features corresponding to each segmented region.
S105, fusing the plurality of image features corresponding to the same segmented region in the plurality of segmented images to obtain a plurality of fused image features corresponding to each segmented region.
To ensure the processing quality and effect and to make the recognition result of the target object more accurate, after the image features of each segmented region in each segmented image are obtained, the image features corresponding to the same segmented region across the segmented images are fused to obtain the fused image features of each region.
Specifically, suppose there are 18 segmented images, each containing 5 segmented regions (segmented region 1 through segmented region 5), and 3 image features (image feature A, image feature B, and image feature C) are extracted per region. During feature fusion, the image features A of segmented region 1 in the 18 segmented images are first fused to obtain the fused image feature A of segmented region 1; the image features B of segmented region 1 in the 18 segmented images are then fused to obtain the fused image feature B of segmented region 1; and the image features C of segmented region 1 are fused likewise to obtain the fused image feature C of segmented region 1. By analogy, 3 fused image features are obtained for each of segmented regions 2, 3, 4, and 5.
The fused image features integrate the features of all the segmented images. By averaging the image features, related information is mutually supplemented and noise and redundancy are removed, so that the fused image features enhance the relevant characteristics of the images and better express the information of the target object.
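By way of illustration, the averaging-based fusion described above amounts to a mean over the image axis of a feature array. The following minimal Python sketch (array shapes and values are illustrative assumptions, not part of the disclosure) shows the operation for the example of 18 segmented images, 5 segmented regions, and 3 features per region:

    import numpy as np

    # Illustrative shapes: 18 segmented images, 5 segmented regions per
    # image, 3 image features extracted per region (stand-in values).
    n_images, n_regions, n_features = 18, 5, 3
    features = np.random.rand(n_images, n_regions, n_features)

    # Fuse by averaging the same feature of the same segmented region
    # across all images: one fused feature vector per region.
    fused = features.mean(axis=0)   # shape (5, 3)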
S106, determining the recognition result corresponding to the target object according to the plurality of fused image features.
Finally, the image processing apparatus can determine the recognition result corresponding to the target object from the plurality of fused image features. The recognition result may be specific information about the target object, for example the identity of a face, specific information about a commodity, or the brain age corresponding to a brain structure. Continuing the example above, after feature fusion yields 3 fused image features for each of the 5 segmented regions, the brain age corresponding to the target object is determined according to the resulting 15 fused image features.
Optionally, the plurality of fused image features may be input into a pre-trained image recognition model to obtain the recognition result corresponding to the target object. The image recognition model is trained to determine the recognition result of a target object based on a plurality of image features of the image. It can be generated by training a neural network: the neural network is trained with preset recognition results of target objects and the corresponding image features, yielding the image recognition model. Once the model has been established, it can be used to analyze the image features of a target object and thereby obtain the corresponding recognition result.
Analyzing the fused image features of the target object with the trained image recognition model to obtain the recognition result effectively ensures the accuracy and reliability of the recognition result and further improves the stability and reliability of the method.
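The embodiments describe the model only as a trained neural network. As one assumed realization, a small neural-network regressor could map the fused feature vector of a subject to a brain-age estimate; the following scikit-learn sketch is illustrative only (data shapes, labels, and hyperparameters are invented for the example):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Stand-in training set: each sample is the flattened fused feature
    # matrix of one subject (5 regions x 3 features = 15 values); the
    # label is the known recognition result (e.g. brain age in months).
    X_train = np.random.rand(100, 15)
    y_train = np.random.uniform(1, 36, size=100)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    )
    model.fit(X_train, y_train)

    # Inference on the fused features of one new subject.
    x_new = np.random.rand(1, 15)
    predicted_age = model.predict(x_new)[0]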
In the embodiment of the invention, features are extracted from each segmented region in the plurality of segmented images containing the target object, the image features of the same segmented region across the segmented images are fused to obtain the fused image features, and the recognition result corresponding to the target object is determined from them. Processing a plurality of images containing the target object effectively avoids the influence of any single image's quality on feature extraction, ensures the quality and effect of the image processing, reduces the difficulty of processing the images, and allows the method to be applied widely. Moreover, extracting features per segmented region makes the extraction targeted and the extracted features more accurate, and fusing the feature information of all images before determining the recognition result makes the recognition result of the target object more accurate.
Optionally, in practice it is often necessary to recognize the target objects of different types of users, where the user types may include gender, identity, age, and the like. Specifically, in a medical scenario, the user types may include middle-aged men, young women, and so on, and the target object may be a body part such as the brain, lungs, or liver. Because the same target object differs greatly across user types, historical images of a given target object from users of the same type can be used to obtain the target reference object for that user type. When a new image from a user of that type is acquired, the target reference object can serve as the standard for registering and segmenting the image of that user's target object, which improves the quality and effect of the image processing and further improves the accuracy of the recognition result.
For example, the user type may be infants, and the corresponding target object may be the brain structure. Since infants are in a stage of brain development, infant brain structures differ considerably in shape across months of age, and even among infants of the same month of age each individual's brain structure shows certain individual differences. Therefore, average brain structures for the different months of age, i.e., target reference objects, can be generated from historical infant brain structure images of those months and used as the standards. When a new infant brain structure image of a certain month of age is acquired, the target reference object for that month can be used as the standard for registering and segmenting the brain structure in the image.
The determination of the target reference object used as the standard, and the registration and segmentation of the plurality of to-be-processed images containing the target object based on the image corresponding to the target reference object, may be carried out as provided in the following embodiments.
Fig. 2 is a flowchart of registering a plurality of images to obtain a plurality of registered images. As shown in fig. 2, the method may include the following steps:
S201, acquiring reference images corresponding to a plurality of reference objects, where the reference objects are of the same type as the target object but different in shape.
To improve the accuracy of the subsequent segmentation of each target object, the plurality of images may be registered against a plurality of reference objects similar to the target object. Specifically, the plurality of reference objects is determined according to the type of the target object, and the reference image corresponding to each reference object is acquired. For example, if the target object is the brain structure of a 6-month-old infant, a plurality of brain structure images of 6-month-old infants can be obtained from a database, and those brain structures are determined as the reference objects. Each reference object is of the same type as the target object but differs in shape: owing to individual differences, the brain structures in images acquired from infants of the same month of age still differ to some extent.
S202, computing a target reference object and a second template image corresponding to the target reference object from the reference images.
If an arbitrary reference object were used as the registration standard, different reference objects would yield different registration results. To unify the spatial positions of the target objects to be processed, the same target reference object may be used to register all target objects of the same type. Accordingly, after the reference images are acquired, the reference objects in them may be processed to determine the target reference object and the corresponding second template image, and the registration of the plurality of images is then completed based on the target reference object in the second template image. The second template image may be two-dimensional or three-dimensional; it only needs to match the dimensionality of the acquired images containing the target object.
In a specific application, because the brain structures of different infants of the same month of age differ to some extent, the acquired brain structures can be unified into the same spatial coordinates in order to eliminate individual differences, and the reference images corresponding to the reference objects are then processed to obtain a standard target reference object and its second template image. Each to-be-processed image containing the target object may then be registered into the second template image of the target reference object, so that every feature point of the target object has the same spatial position in every image to be processed.
For example, if the target object is the brain structure of a 6-month-old infant, 25 brain structure images of 6-month-old infants with similar image dimensions and similar brain shapes and sizes are selected from the template database according to the type of the target object, the 25 brain structure images are averaged to obtain an average image, and the average image is determined to be the 6-month-old infant brain template image.
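A possible realization of this template construction, sketched with the ANTsPy bindings of the ANTs toolkit named in step S203 below (file names and the transform type are assumptions): the 25 reference images are brought into one common space and averaged voxel-wise.

    import ants
    import numpy as np

    # Hypothetical file list: 25 reference brain images of 6-month-old infants.
    paths = [f"infant_6mo_{i:02d}.nii.gz" for i in range(25)]
    images = [ants.image_read(p) for p in paths]

    # Deformably register every reference image to the first one so that
    # all 25 share the same spatial coordinates.
    fixed = images[0]
    warped = [fixed]
    for moving in images[1:]:
        reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")
        warped.append(reg["warpedmovout"])

    # Voxel-wise average of the aligned images: the second template image.
    template = fixed.new_image_like(np.mean([w.numpy() for w in warped], axis=0))
    ants.image_write(template, "infant_6mo_template.nii.gz")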
S203, registering the plurality of images according to the position information of the target reference object in the second template image using a preset algorithm, so as to obtain the plurality of registered images.
After the second template image corresponding to the target reference object is acquired, a preset algorithm is used to spatially transform the plurality of images based on the position information of the target reference object in the second template image, so that the target object has the same position in all of the registered images. The spatial transformation may specifically be realized by rotation, translation, and the like. The preset algorithm may be the open-source toolkit Advanced Normalization Tools (ANTs).
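Registering the acquired images into the second template image with ANTs could then look like the following ANTsPy sketch (file names are hypothetical; "SyN" is one of several available transform types and includes an affine initialization):

    import ants

    template = ants.image_read("infant_6mo_template.nii.gz")  # second template image

    registered = []
    for path in ["subject_layer_01.nii.gz", "subject_layer_02.nii.gz"]:
        moving = ants.image_read(path)
        reg = ants.registration(fixed=template, moving=moving,
                                type_of_transform="SyN")
        registered.append(reg["warpedmovout"])  # image aligned to the template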
After the plurality of registered images is obtained, the first template image corresponding to the target reference object is acquired, which includes the segmentation result of the target reference object. The target object in the registered images is then segmented according to this first template image to obtain the plurality of segmented images. Because the registered images were obtained by registering against the position information of the target reference object in the second template image, the target object and the target reference object share the same positional relationship, so the registered images can be segmented directly according to the segmentation result of the target reference object in the first template image, without operations such as feature point extraction and correction. The segmentation of the target object in the registered images is thus completed directly, yielding the plurality of segmented regions, which simplifies the segmentation process and improves the accuracy of the segmentation result.
In the embodiment of the invention, the reference images corresponding to the plurality of reference objects are acquired and processed to obtain the target reference object and its second template image, and the plurality of images is registered according to the position information of the target reference object in the second template image using a preset algorithm to obtain the plurality of registered images. This provides a target reference object that can serve as the registration standard for target objects of the same type, which improves both the efficiency and the accuracy of registration, and allows the registered images to be segmented directly based on the first template image corresponding to the target reference object, simplifying the segmentation process while improving the accuracy of the segmentation result.
In the above embodiments, after the images containing the target object are registered and segmented, feature extraction can be performed per segmented region, taking the several important segmented regions of the target object as units, in order to improve the accuracy of the recognition result. In an alternative embodiment, the image corresponding to each segmented region may be obtained as follows: acquire the first template image, which includes the plurality of segmented regions; determine the binary template image corresponding to each segmented region; and multiply the binary template image of each segmented region with the registered image to be segmented to obtain an image containing only that segmented region. A binary template image (mask) is an image containing only the values 0 and 1; multiplying it with another image yields an image retaining only the part where the mask value is 1. The mask value of the segmented region to be extracted may be set to 1 and that of all other regions to 0 to obtain the binary template image of that region.
Specifically, suppose a segmented image contains 5 segmented regions. To extract features from them, the binary template image corresponding to each of the 5 regions is first determined. Each of these binary template images is then multiplied with the segmented image to obtain, respectively, an image containing only segmented region 1, an image containing only segmented region 2, and so on through segmented region 5. Image features are then extracted from the image containing only segmented region 1 to obtain the image features of segmented region 1; likewise for segmented region 2; and, following this method, the image features of segmented regions 3, 4, and 5 are determined in turn.
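In code, the mask multiplication reduces to an element-wise product; a minimal NumPy sketch, assuming the first template image's segmentation result is available as an integer label map in the same space (shapes and labels are illustrative):

    import numpy as np

    # Assumed inputs: one registered image and the template's label map,
    # with integer labels 1..5 marking the five segmented regions.
    registered_img = np.random.rand(128, 128)
    label_map = np.random.randint(0, 6, size=(128, 128))

    region_images = {}
    for label in range(1, 6):
        mask = (label_map == label).astype(registered_img.dtype)  # binary template image
        region_images[label] = registered_img * mask  # keeps only this region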
When features are extracted from each segmented region, the intensity features and texture features corresponding to that region may be extracted. The intensity features can be described with a histogram; on the basis of the histogram, common statistics such as the maximum, minimum, mean, kurtosis, and skewness can be computed. The texture features are global features that reflect the visual homogeneity of the image and represent the slowly varying or periodic arrangement of the surface structure of an object; they are usually extracted quantitatively with first-order, second-order, and higher-order statistical methods and described qualitatively or quantitatively after discretizing the image intensities. The feature set may further be reduced and screened using methods such as variance-threshold selection, univariate feature selection (select k best), and the least absolute shrinkage and selection operator (LASSO) to obtain more representative features.
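Each of the screening methods named above has a standard scikit-learn counterpart; the sketch below is illustrative only (feature array, target, and parameter values are assumptions):

    import numpy as np
    from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_regression
    from sklearn.linear_model import Lasso

    X = np.random.rand(100, 47)  # stand-in: 47 features per region, 100 samples
    y = np.random.rand(100)      # stand-in target (e.g. brain age)

    # 1) Variance threshold: drop near-constant features.
    X_var = VarianceThreshold(threshold=1e-3).fit_transform(X)

    # 2) Univariate selection: keep the k features most related to the target.
    X_best = SelectKBest(f_regression, k=20).fit_transform(X_var, y)

    # 3) LASSO: coefficients shrunk exactly to zero mark dropped features.
    lasso = Lasso(alpha=0.05).fit(X_best, y)
    selected = np.flatnonzero(lasso.coef_)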
In addition, for the specific extraction process of the plurality of image features corresponding to each segmented region, reference may be made to fig. 3.
Fig. 3 is a flowchart of extracting features from each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region, according to an embodiment of the present invention. As shown in fig. 3, the method specifically includes the following steps:
S301, calculating the first-order statistical features and texture features corresponding to each segmented region in the plurality of segmented images.
The first-order statistical features are feature values computed directly from the pixel gray-level distribution of the segmented image containing the target object. Specifically, a gray-level histogram is drawn from the voxel data in the segmented region and the frequency of each gray level is counted; the first-order statistical features of the segmented region are then calculated from the gray-level histogram. The texture features include features calculated from the gray-level co-occurrence matrix, the gray-level run-length matrix, and the like.
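For instance, the first-order statistics named above can be computed directly from the gray values inside one segmented region (a sketch with stand-in data):

    import numpy as np
    from scipy import stats

    # Gray values of the voxels inside one segmented region (stand-in data).
    region_values = np.random.randint(0, 256, size=5000).astype(float)

    first_order = {
        "max": region_values.max(),
        "min": region_values.min(),
        "mean": region_values.mean(),
        "kurtosis": stats.kurtosis(region_values),
        "skewness": stats.skew(region_values),
    }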
The gray-level co-occurrence matrix (GLCM) is a common way of describing texture by studying the spatial correlation of gray levels, so it is generally used to describe the texture features. After the GLCM of each segmented region is computed, the GLCM features are calculated from it; they may include, for example, GLCM energy, GLCM entropy, homogeneity 1, and homogeneity 2.
The gray-level run-length matrix (GLRLM) is a matrix formed from the lengths of runs of gray values. After the GLRLM of each segmented region is computed, the GLRLM features are calculated from it. Since the segmented images are two-dimensional, only the matrices of the four directions of the xy-plane are computed, for both the GLCM and the GLRLM.
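GLCM features for the four in-plane directions can be computed, for example, with scikit-image (parameters are illustrative; the GLRLM has no scikit-image counterpart and would need a dedicated radiomics library):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    region = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in region

    # Co-occurrence matrices for the four directions of the xy-plane
    # (0, 45, 90, 135 degrees) at a pixel distance of 1.
    glcm = graycomatrix(region, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)

    # GLCM features, averaged over the four directions.
    energy = graycoprops(glcm, "energy").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()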
S302, determining a plurality of radiomics features corresponding to each segmented region in the plurality of segmented images based on the first-order statistical features and the texture features.
Finally, the radiomics features of each segmented region in the plurality of segmented images are determined from the first-order statistical features and the texture features (the features calculated from the gray-level co-occurrence matrix and the gray-level run-length matrix). For example, in an alternative embodiment, the segmented images are segmented images of a brain structure, each brain structure image is divided into 20 brain areas, and for each brain area 47 radiomics features are obtained by calculating the first-order statistical and texture features, comprising 14 intensity features and 33 texture features (22 GLCM features and 11 GLRLM features).
In this embodiment, the first-order statistical features and texture features of each segmented region in the plurality of segmented images are calculated first, and the radiomics features of each region are then determined from them, so that the plurality of image features in each segmented region of the target object can be extracted in a targeted and more accurate manner.
In the above embodiment, it has been mentioned that the image feature extraction processing is performed on each of the divided regions in the divided image, in practical application, the acquired image of the target object may be blurred due to reasons such as a scanning angle, so that the image feature in a single image may have a certain deviation in the recognition result of the acquired target object, so that in order to improve the accuracy of the recognition result of the target object, after the image features corresponding to each of the divided regions in each of the divided images are acquired, the image features corresponding to each of the divided regions are fused, so as to obtain the fused image features, and thus the acquired image features integrate all the features of each of the divided images, and can mutually supplement related information, remove noise and redundancy, so that the fused image features can enhance the related features of the image, and can better express the related information of the target object.
Therefore, on the basis of the embodiment shown in fig. 3, in order to further improve the accuracy of the recognition result of the target object, after the plurality of image histology features corresponding to the same segmented region in the plurality of segmented images are obtained, a mean value calculation is performed on those features to obtain a plurality of fused image histology features corresponding to each segmented region. Finally, a feature matrix corresponding to the target object is determined based on the plurality of fused image histology features, and the recognition result corresponding to the target object is determined based on the feature matrix.
The feature matrix corresponding to the target object may be determined by stacking the plurality of fused image features corresponding to each segmented region. For example, if the target object corresponds to 20 segmented regions and each segmented region corresponds to 47 fused image features, the 47 fused image features of the 20 segmented regions are stacked to obtain a 20×47 feature matrix.
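A minimal numpy sketch of the mean-value fusion and the stacking step, assuming a hypothetical features_per_slice list holding one (20, 47) array of region features per slice image:

```python
import numpy as np

# Hypothetical input: one (n_regions=20, n_features=47) array per slice image.
features_per_slice = [np.random.rand(20, 47) for _ in range(18)]

# Mean-value fusion across slices yields the fused features per region ...
fused = np.nanmean(np.stack(features_per_slice, axis=0), axis=0)

# ... and the result is already the stacked 20x47 feature matrix.
feature_matrix = fused                                # shape (20, 47)
```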
The image processing method in the above embodiments may be applied to various application scenarios. For example, in a medical scenario, when examining a human brain structure, a 2D nuclear magnetic resonance scanner is generally used to scan the brain structure of an object to be detected, thereby obtaining 2D magnetic resonance image data of that object. The 2D magnetic resonance images include T1W imaging and T2W imaging, and each T1W imaging and each T2W imaging comprises a multi-layer stack of brain structure images. T1W imaging highlights differences in tissue T1 (longitudinal) relaxation, while T2W imaging highlights differences in tissue T2 (transverse) relaxation. Infant brain structures develop rapidly. In the early stage of myelination, T1WI is more valuable for evaluating myelination within the first year (especially within 6 months), because the increased levels of oligodendrocyte cell membrane cholesterol and galactocerebroside during myelination lead to a greater signal elevation in T1WI. In the later stage of myelination, the decrease in free water content in mature white matter results in a greater reduction of the white matter signal in T2WI, so T2WI is more suitable for the later evaluation of myelination. Because the contrast between gray matter and white matter in T1W or T2W imaging alone is not high in practice, and different ages call for attention to different modalities, in consideration of operational uniformity and reduced operational complexity, and in order to better reflect the differences in gray matter and white matter development in brain tissue at each stage, the T1W image data and the T2W image data can be fused, and the combined images can then be processed to obtain the recognition result corresponding to the brain structure.
The method further comprises an image preprocessing procedure before the plurality of images containing the target object are acquired; the detailed processing procedure is shown in fig. 4, which is a flowchart of an image processing method according to the present embodiment. In order to improve the image processing quality and effect, the method further comprises:
S401, acquiring a magnetic resonance image containing a brain structure, wherein the magnetic resonance image comprises T1W imaging and T2W imaging, the T1W imaging comprises a plurality of first images, and the T2W imaging comprises a plurality of second images.
In practical applications, after a nuclear magnetic scan of the brain structure to be detected is performed using a nuclear magnetic scanner, a magnetic resonance image containing the brain structure may be obtained, and the magnetic resonance image includes T1W imaging and T2W imaging. The gray level of the T1W image is mainly determined by the longitudinal relaxation speed of the tissue, while the gray level of the T2W image is mainly determined by the transverse relaxation speed of the tissue. Most existing nuclear magnetic scanners perform 2D nuclear magnetic scanning, which works layer by layer: a certain layer is selectively excited by a radio frequency pulse, and the spatial positioning within the layer is then performed by gradient encoding to achieve imaging. When the nuclear magnetic scanning is performed on a brain structure, the brain structure is divided into multiple layers for scanning, so the acquired T1W imaging comprises multiple layers of images, namely a plurality of first images, and the T2W imaging likewise comprises multiple layers of images, namely a plurality of second images.
Before the magnetic resonance image corresponding to the brain structure is processed, the magnetic resonance image of the brain structure may be received directly from user input or acquired directly from a database. The magnetic resonance image comprises T1W imaging and T2W imaging, the T1W imaging comprises a plurality of first images, and the T2W imaging comprises a plurality of second images.
S402, scalp removing treatment is carried out on the plurality of first images and the plurality of second images, and the plurality of processed first images and the plurality of processed second images are obtained.
The brain structures in the first images and the second images include not only important tissues such as cerebral gray matter and white matter but also the scalp, and the scalp may interfere with the subsequent identification of brain structures. Therefore, after the plurality of first images and the plurality of second images are acquired, scalp removing processing is performed on them using dpabi, obtaining a plurality of processed first images and a plurality of processed second images. dpabi is a toolbox for brain imaging data processing and analysis.
A specific implementation of the scalp removing operation may include the following. First, a gray level histogram of the first/second image is calculated, and a gray threshold for distinguishing brain tissue from non-brain tissue, together with the maximum and minimum image gray values, is determined from the histogram. Then the center of gravity of the brain tissue is roughly estimated, and an initial brain tissue region is obtained from the gray values of brain and non-brain tissue. Finally, an initial brain surface is constructed inside the brain tissue from three-dimensional triangular patches; each triangular patch carries a tangential force and a smoothing force, and driven by these two forces the surface evolves while keeping a certain spacing and smoothness until the brain surface is sufficiently smooth and stable, at which point the segmentation ends and the processed first/second image is obtained.
In an alternative embodiment, the scalp removing operation is performed on one first image using dpabi to obtain a scalp-removed first image, and a binary template image mask of the brain is obtained by setting the non-zero part of that image to 1. The remaining images are then scalp-removed by applying the binary template image mask.
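A minimal sketch of deriving and applying such a binary brain mask with nibabel and numpy; the file names are hypothetical:

```python
import nibabel as nib
import numpy as np

# Hypothetical inputs: one skull-stripped image and one unprocessed image.
stripped = nib.load("T1W_stripped.nii.gz")
raw = nib.load("T2W.nii.gz")

# Binary template image mask: non-zero voxels of the stripped image set to 1.
mask = (stripped.get_fdata() != 0).astype(np.uint8)

# Applying the mask removes the scalp from the other image of the same subject.
masked = raw.get_fdata() * mask
nib.save(nib.Nifti1Image(masked, raw.affine), "T2W_stripped.nii.gz")
```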
S403, processing the processed first images and the processed second images to obtain a plurality of first gray images corresponding to the processed first images and a plurality of second gray images corresponding to the processed second images.
First, position correction is performed on the brain structure: the first image is registered to the second image so that tissues such as gray matter and white matter are positionally aligned between the two images. Tissue segmentation is then performed on the corrected first image and the corrected second image to obtain the gray matter probability maps corresponding to the two images. The gray matter probability maps are binarized to obtain gray matter mask images, and the gray matter mask images are multiplied with the corrected first image and the corrected second image to obtain a first gray matter image corresponding to the first image and a second gray matter image corresponding to the second image.
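A minimal numpy sketch of the binarization-and-multiplication step, assuming gm_prob is a gray matter probability map in [0, 1]; the 0.5 threshold is taken from the worked example later in the text:

```python
import numpy as np

def gray_matter_image(image: np.ndarray, gm_prob: np.ndarray,
                      threshold: float = 0.5) -> np.ndarray:
    """Binarize the gray matter probability map and mask the image with it,
    leaving an image that contains only the gray matter portion."""
    gm_mask = (gm_prob > threshold).astype(image.dtype)
    return image * gm_mask
```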
S404, generating a plurality of combined images including the brain structure based on the plurality of first gray images, the plurality of second gray images, the plurality of first images, and the plurality of second images.
Specifically, the gray values in the first gray matter image and the second gray matter image are extracted and sorted from small to large to obtain the gray median values MG_T1 and MG_T2 respectively. A gray-scaled new image sT2W is then calculated using the formula sT2W = (MG_T1 / MG_T2) × T2W, where MG_T1 is the gray median value corresponding to the first gray matter image, MG_T2 is the gray median value corresponding to the second gray matter image, and T2W is the second image. Finally, the sT2W image and the first image are processed to obtain a combined image. That is, the combined image containing the brain structure may be generated by the formula sT1W/sT2W = (T1W - (MG_T1/MG_T2) × T2W) / (T1W + (MG_T1/MG_T2) × T2W), where sT1W/sT2W denotes the combined image, T1W denotes the first image, and T2W denotes the second image.
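A minimal numpy sketch of the combination formula above; the small epsilon is an assumption (not in the original) that guards against division by zero in background voxels:

```python
import numpy as np

def combined_image(t1w: np.ndarray, t2w: np.ndarray,
                   mg_t1: float, mg_t2: float, eps: float = 1e-8) -> np.ndarray:
    """Fuse T1W and T2W into the sT1W/sT2W combined image."""
    s_t2w = (mg_t1 / mg_t2) * t2w          # gray-scaled T2W image
    return (t1w - s_t2w) / (t1w + s_t2w + eps)

# The gray median values come from the gray-matter-only images, e.g.:
# mg_t1 = np.median(gm_t1w[gm_t1w > 0]); mg_t2 = np.median(gm_t2w[gm_t2w > 0])
```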
In this method embodiment, the T1W images and the T2W images are fused, and the combined images are then processed to obtain the recognition result corresponding to the brain structure. As a result, there is no need to switch back and forth between image types during processing, which reduces operational complexity, and at the same time the combined images better reflect the differences in gray matter and white matter development in brain tissue at each stage, so that the combined images better reflect the characteristics of each tissue structure of the target object.
On the basis of the above embodiments, for ease of understanding, the specific implementation of the image processing method provided above is exemplarily described with reference to fig. 5, taking as an example the processing of an infant brain structure image in a medical scenario to determine the brain age corresponding to the infant brain structure. The images containing the target object may be brain structure images of a 6-month-old infant, the first template image corresponding to the target reference object may be a brain segmentation image of a 24-month-old infant, and the second template image corresponding to the target reference object may be a brain template image of a 24-month-old infant. The images containing the target object are processed using the images corresponding to the target reference object to obtain the brain age corresponding to the target object. The specific image processing process may comprise:
Step 1, acquiring a plurality of combined images of the brain structure to be identified.
Specifically, the method further comprises a data preprocessing process before the plurality of combined images of the brain structure to be identified are acquired. A T1W image and a T2W image of the brain structure to be identified are acquired, each comprising a stack of multi-layer brain structure images. With the T2W image as the reference, the T1W image is aligned to the T2W image using ANTsPy (a biomedical image processing Python library), obtaining a corrected rT1W image. In clinical practice, the T1W image and the T2W image are generally obtained by scanning within one examination, and during registration rigid transformations such as rotation are generally applied to eliminate possible head movements during scanning, so that the important tissues in the brain structures of the two images correspond to each other; either the T2W image or the T1W image may therefore serve as the reference.
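A minimal ANTsPy sketch of this alignment step, with hypothetical file names; the 'Rigid' transform type is an assumption, chosen because the registration only needs to compensate head motion:

```python
import ants

# Hypothetical inputs: the two series acquired in the same examination.
t2w = ants.image_read("T2W.nii.gz")   # reference (fixed) image
t1w = ants.image_read("T1W.nii.gz")   # image to be aligned (moving)

# Rigid registration of T1W onto T2W; reg also returns the transforms,
# which can be reused to bring further images into the same space.
reg = ants.registration(fixed=t2w, moving=t1w, type_of_transform="Rigid")
rt1w = reg["warpedmovout"]            # corrected rT1W image
```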
Then, spm12 is used to perform tissue segmentation on the T2W image and the rT1W image to obtain the corresponding gray matter probability maps, and to perform non-uniformity correction on the T2W image and the rT1W image to obtain a corrected mT2W image and a corrected mrT1W image. spm12 is a toolbox for preprocessing nuclear magnetic data.
The gray matter probability map obtained by segmenting the T2W image is binarized: the part with values greater than 0.5 is set to 1 and the rest is set to 0, yielding a gray matter mask image. The gray matter mask image is then multiplied with the mT2W image and the mrT1W image respectively, obtaining a GM-mT2W image and a GM-mrT1W image, each of which is a new image containing only the gray matter portion. The values in the GM-mT2W image and the GM-mrT1W image are then extracted and sorted from small to large to obtain the median MG_T2 of the GM-mT2W image and the median MG_T1 of the GM-mrT1W image respectively. A gray-scaled new image sT2W is then calculated using the formula sT2W = (MG_T1 / MG_T2) × T2W, where T2W here is the non-uniformity-corrected mT2W image.
Finally, a combined image CI (i.e., an sT1W/T2W image) is calculated based on the mrT1W image and the sT2W image. Specifically, the combined image CI can be calculated using the formula CI = (mrT1W - sT2W) / (mrT1W + sT2W). The combined image CI comprises a plurality of slice images, i.e., the plurality of combined images containing the target object is thereby determined.
Optionally, the method may further include a scalp removing operation. Specifically, a binary template image mask of the brain is applied to the mrT1W image, the mT2W image and the sT1W/T2W image, masking these images to obtain the scalp-removed bmrT1W image, bmT2W image and bsT1W/T2W image.
Step 2, acquiring a brain template image corresponding to the target reference object, and registering the plurality of combined images according to the brain template image by using a preset algorithm to obtain a plurality of registered combined images.
Because infant brain structures develop rapidly and, owing to individual differences, the brain structure of each infant differs in shape, when the combined images containing the target object are processed, the 2D brain template image of a 24-month-old infant is taken as the reference for registering the plurality of images. The 2D brain template image of the 24-month-old infant is an average image calculated from the brain structure images of a number of healthy 24-month-old infants.
Specifically, with a self-made 2D brain template of a 24-month-old infant as the reference, the bmrT1W image is registered onto the brain template using ANTsPy, obtaining a wbmrT1W image and a transformation matrix. The bmT2W image and the bsT1W/T2W image are then registered according to the transformation matrix, obtaining a registered wbmT2W image and a registered wbsT1W/T2W image.
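A minimal ANTsPy sketch of reusing the transform from the template registration for the other images; the file names are hypothetical, and 'Affine' is an assumption chosen because the text speaks of a transformation matrix:

```python
import ants

template = ants.image_read("template_24m.nii.gz")     # hypothetical path
bmrt1w = ants.image_read("bmrT1W.nii.gz")             # hypothetical path

# Register the bmrT1W image onto the 24-month template.
reg = ants.registration(fixed=template, moving=bmrt1w,
                        type_of_transform="Affine")
wbmrt1w = reg["warpedmovout"]

# Reuse the forward transforms so the other images land in the same space.
wbmt2w = ants.apply_transforms(fixed=template,
                               moving=ants.image_read("bmT2W.nii.gz"),
                               transformlist=reg["fwdtransforms"])
```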
Step 3, acquiring a brain segmentation map corresponding to the target reference object, and segmenting the brain structures in the registered multiple combined images according to the brain segmentation map to obtain multiple segmented images, wherein the segmented images comprise multiple target brain areas corresponding to the brain structures.
Specifically, a brain segmentation image of a 24-month-old infant is obtained, and the wbsT1W/T2W image is segmented according to this brain segmentation image to obtain 20 brain regions, for example: left/right corpus callosum, left/right anterior limb of the internal capsule, left/right posterior limb of the internal capsule, left/right basal ganglia, left/right brainstem, left/right frontal lobe, left/right parietal lobe, left/right temporal lobe, left/right occipital lobe, and left/right cerebellar hemisphere. If the wbsT1W/T2W image comprises 18 brain structure images, segmenting the wbsT1W/T2W image is equivalent to segmenting the multi-layer brain structure images at the same time, and the brain structure in each brain structure image is divided into the plurality of brain regions.
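A minimal sketch of extracting one region from the slice stack using an integer label atlas; the label convention (values 1..20) is an assumption:

```python
import numpy as np

def region_voxels(slices: np.ndarray, atlas: np.ndarray, label: int) -> np.ndarray:
    """Collect the voxels of one labeled brain region across all slice images.

    slices: (n_slices, H, W) combined images; atlas: (n_slices, H, W) integer
    label map with values 1..20 for the 20 target brain regions."""
    return slices[atlas == label]

# Example: voxels of a hypothetical region label 3 across 18 slices.
# vox = region_voxels(wbs_t1w_t2w, atlas_labels, label=3)
```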
Step 4, extracting the features of each target brain region in the plurality of segmented images to obtain a plurality of image features corresponding to each target brain region in the plurality of segmented images.
Specifically, with each brain structure image in the 2D wbsT1W/T2W image as a unit, the texture features, including the gray level co-occurrence matrix and the gray level run-length matrix features, corresponding to the plurality of brain regions in each brain structure image are calculated. Taking each single brain region as a whole across the multi-layer brain structure images, the first order statistic (intensity) features corresponding to the plurality of brain regions are calculated. Then, the 47 image histology features corresponding to each of the plurality of brain regions are determined based on the first order statistic features and the texture features.
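Putting the earlier sketches together, a hypothetical per-region extraction loop might look as follows; first_order_features, glcm_features and region_voxels are the illustrative helpers defined above (not named in the text), and slices and atlas are assumed (n_slices, H, W) arrays:

```python
import numpy as np

n_slices, n_regions = 18, 20
features = []                       # one dict of features per region
for label in range(1, n_regions + 1):
    region_feats = {}
    # Intensity features: the region is treated as a whole, pooled over slices.
    region_feats.update(first_order_features(region_voxels(slices, atlas, label)))
    # Texture features: computed slice by slice; a real implementation would
    # crop to the region's bounding box rather than zeroing the background.
    per_slice = [glcm_features(np.where(atlas[i] == label, slices[i], 0))
                 for i in range(n_slices)]
    for key in per_slice[0]:
        region_feats[key] = float(np.mean([d[key] for d in per_slice]))
    features.append(region_feats)
```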
Step 5, carrying out fusion processing on the plurality of image features corresponding to the same target brain region in the plurality of segmented images to obtain a plurality of fused image features corresponding to each target brain region.
Specifically, after feature extraction, the average value of each image feature over the multi-layer brain structure images in the 2D wbsT1W/T2W image is calculated for each brain region, i.e., the average values of the 33 texture image features are obtained, and the average value of each image feature is taken as the fused image feature.
Step 6, determining the recognition result corresponding to the target object according to the plurality of fused image features.
Specifically, the 47 fused image features of the 20 brain regions are concatenated to obtain a 1×940 feature matrix, and the feature matrix is input into a brain age identification model to determine the brain age corresponding to the target object. The brain age identification model may be obtained by learning and training on samples consisting of a plurality of image features corresponding to brain structure images of healthy infants at each stage and the brain ages corresponding to those brain structures.
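A minimal sketch of this final step; the regressor family and the training data are assumptions, since the text does not specify the model behind the brain age identification model, and feature_matrix is the (20, 47) array from the earlier fusion sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: one 940-dim feature vector per healthy infant,
# paired with the known brain age (in months) for that scan.
X_train = np.random.rand(200, 20 * 47)
y_train = np.random.uniform(0, 24, size=200)

model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

# The 20x47 fused feature matrix is flattened into a 1x940 vector and scored.
feature_vec = feature_matrix.reshape(1, -1)
brain_age = model.predict(feature_vec)[0]
```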
In addition, in practical applications, after the brain age corresponding to the target object is determined, whether the infant brain structure is developing normally can be judged based on the brain age. If the development of the infant brain structure is determined to be abnormal, development trajectory diagrams corresponding to each image feature across different infant ages (in months) can be obtained, and from these diagrams the specific brain region whose development is problematic can be identified. The development trajectory diagram corresponding to each image feature is determined from a plurality of image features of infants at each month of age: the feature matrix of each infant is unfolded and stacked with the feature matrices of the other infants to obtain a target feature matrix, and the development trajectory diagram corresponding to each image feature is determined from the target feature matrix.
An image processing apparatus according to one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these image processing apparatuses can each be constructed by configuring commercially available hardware components according to the steps taught in the present solution.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, as shown in fig. 6, the apparatus includes: an acquisition module 11, a registration module 12, a segmentation module 13, a feature extraction module 14, a fusion module 15 and a determination module 16.
An acquisition module 11 is configured to acquire a plurality of images including the target object.
And the registration module 12 is used for carrying out registration processing on the plurality of images so as to obtain a plurality of registration images.
The segmentation module 13 is configured to segment a target object in the multiple registration images according to a first template image corresponding to a target reference object, so as to obtain multiple segmented images, where the target reference object is the same type as the target object and has a different shape, the first template image includes a segmentation result of the target reference object, and the segmented image includes multiple segmentation areas.
The feature extraction module 14 is configured to perform feature extraction on each of the segmented regions in the plurality of segmented images, and obtain a plurality of image features corresponding to each of the segmented regions in the plurality of segmented images.
And the fusion module 15 is configured to perform fusion processing on a plurality of image features corresponding to the same segmentation region in the plurality of segmented images, so as to obtain a plurality of fused image features corresponding to each segmentation region.
And the determining module 16 is configured to determine a recognition result corresponding to the target object according to the plurality of fused image features.
Optionally, the registration module 12 may specifically be configured to: acquiring reference images corresponding to a plurality of reference objects respectively, wherein the reference objects are the same as the target object in type and different in shape; calculating the reference image to obtain a target reference object and a second template image corresponding to the target reference object; and registering the plurality of images according to the position information of the target reference object in the second template image by using a preset algorithm so as to obtain a plurality of registered images.
Alternatively, the feature extraction module 14 may be specifically configured to: respectively calculating first order statistic characteristics and texture characteristics corresponding to each segmented region in a plurality of segmented images; and determining a plurality of image histology features corresponding to each segmented region in the plurality of segmented images based on the first order statistic features and the texture features.
Alternatively, the fusion module 15 may be specifically configured to: and respectively carrying out mean value calculation on a plurality of image histology characteristics corresponding to the same segmentation region in the plurality of segmented images so as to obtain a plurality of fused image histology characteristics corresponding to each segmentation region.
Alternatively, the determining module 16 may specifically be configured to: determining a feature matrix corresponding to the target object based on the plurality of fused image histology features; and determining a recognition result corresponding to the target object based on the feature matrix.
Optionally, the target object is a brain structure, and the device may further include a preprocessing module, specifically may be used to: acquiring a magnetic resonance image containing a brain structure, the magnetic resonance image comprising a T1W image and a T2W image, the T1W image comprising a plurality of first images, the T2W image comprising a plurality of second images; scalp removing treatment is carried out on the plurality of first images and the plurality of second images, and a plurality of processed first images and a plurality of processed second images are obtained; processing the processed first images and the processed second images to obtain a plurality of first gray images corresponding to the processed first images and a plurality of second gray images corresponding to the processed second images; a plurality of combined images including brain structures is generated based on the plurality of first gray matter images, the plurality of second gray matter images, the plurality of first images, and the plurality of second images. The registration module 12 may also be used in particular to: acquiring a brain template image corresponding to a target reference object; registering the plurality of combined images according to the brain template image by using a preset algorithm to obtain a plurality of registered combined images; the segmentation module 13 may in particular also be used for: acquiring a brain segmentation map corresponding to a target reference object; and dividing the brain structures in the registered multiple combined images according to the brain division map to obtain multiple divided images, wherein the divided images comprise multiple target brain areas corresponding to the brain structures.
The apparatus shown in fig. 6 may perform the method of the embodiments shown in fig. 1 to 5, and reference is made to the relevant description of those embodiments for the parts of this embodiment not described in detail. The implementation process and technical effects of this technical solution are described in the embodiments shown in fig. 1 to 5 and are not repeated herein.
The internal functions and structure of the image processing apparatus are described above. In one possible design, the structure of the image processing apparatus may be implemented as an electronic device, as shown in fig. 7, which may include: a processor 21 and a memory 22. The memory 22 stores a program that supports the electronic device in executing the image processing method provided in the embodiments shown in fig. 1 to 5 above, and the processor 21 is configured to execute the program stored in the memory 22.
The program comprises one or more computer instructions which, when executed by the processor 21, are capable of carrying out the steps of:
acquiring a plurality of images containing a target object;
registering the plurality of images to obtain a plurality of registered images;
dividing a target object in the registration images according to a first template image corresponding to the target reference object to obtain a plurality of divided images, wherein the target reference object is the same as the target object in type and different in shape, the first template image comprises a division result of the target reference object, and the divided images comprise a plurality of division areas;
Extracting features of each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region in the plurality of segmented images;
performing fusion processing on a plurality of image features corresponding to the same segmentation region in the plurality of segmented images to obtain a plurality of fused image features corresponding to each segmentation region;
and determining the recognition result corresponding to the target object according to the plurality of fused image features.
Optionally, the processor 21 is further configured to perform all or part of the steps in the embodiments shown in fig. 1 to 5.
The structure of the electronic device may further include a communication interface 23, for the electronic device to communicate with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium storing computer software instructions for the electronic device, which includes a program for executing the image processing method according to the embodiment of the method shown in fig. 1 to 5.
Embodiments of the present invention also provide a computer program product comprising computer program instructions which, when read and executed by a processor, perform the image processing method of the above-described method embodiments shown in fig. 1 to 5.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to image data for processing, stored image data, etc.) related to the present invention are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image processing method, comprising:
acquiring a plurality of images containing a target object;
Registering the plurality of images to obtain a plurality of registered images;
dividing a target object in the registration images according to a first template image corresponding to the target reference object to obtain a plurality of divided images, wherein the target reference object is the same as the target object in type and different in shape, the first template image comprises a division result of the target reference object, and the divided images comprise a plurality of division areas;
extracting features of each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region in the plurality of segmented images;
performing fusion processing on a plurality of image features corresponding to the same segmentation region in the plurality of segmented images to obtain a plurality of fused image features corresponding to each segmentation region;
and determining the recognition result corresponding to the target object according to the plurality of fused image features.
2. The method of claim 1, wherein the registering the plurality of images to obtain a plurality of registered images comprises:
acquiring reference images corresponding to a plurality of reference objects respectively, wherein the reference objects are the same in type and different in shape from the target object;
Calculating the reference image to obtain a target reference object and a second template image corresponding to the target reference object;
and registering the plurality of images according to the position information of the target reference object in the second template image by using a preset algorithm so as to obtain a plurality of registered images.
3. The method of claim 1, wherein the feature extracting each of the plurality of segmented regions in the plurality of segmented images to obtain a plurality of image features corresponding to each of the plurality of segmented regions comprises:
respectively calculating first order statistic characteristics and texture characteristics corresponding to each segmented region in a plurality of segmented images;
and determining a plurality of image histology features corresponding to each segmented region in the plurality of segmented images based on the first order statistic features and the texture features.
4. The method of claim 3, wherein the fusing the plurality of image features corresponding to the same segmented region in the plurality of segmented images to obtain a plurality of fused image features corresponding to each segmented region includes:
and respectively carrying out mean value calculation on a plurality of image histology characteristics corresponding to the same segmentation region in the plurality of segmented images so as to obtain a plurality of fused image histology characteristics corresponding to each segmentation region.
5. The method of claim 4, wherein determining the recognition result corresponding to the target object according to the plurality of fused image features comprises:
determining a feature matrix corresponding to the target object based on the plurality of fused image histology features;
and determining a recognition result corresponding to the target object based on the feature matrix.
6. The method of claim 1, wherein the target object is a brain structure, and wherein prior to the acquiring the plurality of images comprising the target object, the method further comprises:
acquiring a magnetic resonance image containing a brain structure, the magnetic resonance image comprising a T1W image and a T2W image, the T1W image comprising a plurality of first images, the T2W image comprising a plurality of second images;
scalp removing treatment is carried out on the plurality of first images and the plurality of second images, and a plurality of processed first images and a plurality of processed second images are obtained;
processing the processed first images and the processed second images to obtain a plurality of first gray images corresponding to the processed first images and a plurality of second gray images corresponding to the processed second images;
A plurality of combined images including brain structures is generated based on the plurality of first gray matter images, the plurality of second gray matter images, the plurality of first images, and the plurality of second images.
7. The method of claim 6, wherein the registering the plurality of images to obtain a plurality of registered images comprises:
acquiring a brain template image corresponding to a target reference object;
registering the plurality of combined images according to the brain template image by using a preset algorithm to obtain a plurality of registered combined images;
the segmenting the target object in the multiple registration images according to the first template image corresponding to the target reference object to obtain multiple segmented images includes:
acquiring a brain segmentation map corresponding to a target reference object;
and dividing the brain structures in the registered multiple combined images according to the brain division map to obtain multiple divided images, wherein the divided images comprise multiple target brain areas corresponding to the brain structures.
8. An image processing apparatus, comprising:
the acquisition module is used for acquiring a plurality of images containing the target object;
The registration module is used for carrying out registration processing on the plurality of images so as to obtain a plurality of registration images;
the segmentation module is used for segmenting a target object in the registration images according to a first template image corresponding to the target reference object to obtain a plurality of segmented images, the target reference object is the same as the target object in type and different in shape, the first template image comprises a segmentation result of the target reference object, and the segmented images comprise a plurality of segmentation areas;
the feature extraction module is used for extracting features of each segmented region in the plurality of segmented images to obtain a plurality of image features corresponding to each segmented region in the plurality of segmented images;
the fusion module is used for carrying out fusion processing on a plurality of image features corresponding to the same segmentation region in the plurality of segmentation images to obtain a plurality of fused image features corresponding to each segmentation region;
and the determining module is used for determining the identification result corresponding to the target object according to the plurality of fused image features.
9. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the image processing method of any of claims 1 to 7.
10. A non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the image processing method of any of claims 1 to 7.
CN202310403275.9A 2023-04-14 2023-04-14 Image processing method, device, equipment and storage medium Pending CN116433976A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310403275.9A CN116433976A (en) 2023-04-14 2023-04-14 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310403275.9A CN116433976A (en) 2023-04-14 2023-04-14 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116433976A true CN116433976A (en) 2023-07-14

Family

ID=87079362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310403275.9A Pending CN116433976A (en) 2023-04-14 2023-04-14 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116433976A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314908A (en) * 2023-11-29 2023-12-29 四川省烟草公司凉山州公司 Flue-cured tobacco virus tracing method, medium and system
CN117314908B (en) * 2023-11-29 2024-03-01 四川省烟草公司凉山州公司 Flue-cured tobacco virus tracing method, medium and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination