EP1345154A1 - Method for encoding image pixels and image processing method for the qualitative recognition of an object reproduced by one or more pixels - Google Patents

Method for encoding image pixels and image processing method for the qualitative recognition of an object reproduced by one or more pixels

Info

Publication number
EP1345154A1
Authority
EP
European Patent Office
Prior art keywords
image
pixels
pixel
voxels
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02425141A
Other languages
German (de)
English (en)
Inventor
Paolo Massimo Buscema
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bracco Imaging SpA
Semeion
Original Assignee
Bracco Imaging SpA
Semeion
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bracco Imaging SpA, Semeion filed Critical Bracco Imaging SpA
Priority to EP02425141A priority Critical patent/EP1345154A1/fr
Priority to PCT/EP2003/002400 priority patent/WO2003077182A1/fr
Priority to KR10-2004-7014304A priority patent/KR20040102038A/ko
Priority to EP03711951A priority patent/EP1483721A1/fr
Priority to AU2003218712A priority patent/AU2003218712A1/en
Priority to US10/516,879 priority patent/US7672517B2/en
Priority to CNB038056739A priority patent/CN100470560C/zh
Priority to JP2003575324A priority patent/JP4303598B2/ja
Publication of EP1345154A1 publication Critical patent/EP1345154A1/fr
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection

Definitions

  • the invention first relates to a method for encoding pixels of digital or digitized images, aimed at making the information content of each pixel available to automatic image processing systems, particularly designed for image recognition with reference to the objects reproduced therein.
  • the term digital is intended to define an image obtained by imaging apparatuses whose output is an image in digital format, e.g. digital cameras, Nuclear Magnetic Resonance imaging apparatuses, ultrasound imaging apparatuses and other imaging apparatuses.
  • the term digitized images relates to images obtained by substantially analog systems, providing an analog image which is then scanned by means of devices known as scanners, regardless of whether the latter are hardware devices, i.e. devices for reading an image which is physically printed on a medium, or software devices, i.e. devices designed to sample an image provided as a set of signals and to turn it into digital signals.
  • a digital image is composed of a set of image dots, the so-called pixels, which may have different brightness conditions, i.e. different gray scale tones and, in color images, different colors.
  • Each pixel of an image also has a well-defined position whereby the digital image may be represented by a two- or three-dimensional matrix of elements Pi,j, each corresponding to a predetermined pixel of the pixel set that forms the image, the element Pi,j being a variable which assumes the brightness and/or color value associated to the specific pixel.
  • the different pixel-associated brightness values are represented by a gray scale extending from black to white through several different intermediate gray levels, whose number may be user-defined, based on the capabilities of the digitized imaging apparatus and/or of the display device.
  • the discrete image unit element is generally referred to as a voxel and the three-dimensional matrix is composed of elements Vi,j,k.
  • a digital image has a unique equivalent in the form of a data matrix which forms a virtual image and, as such, has a structure that is potentially adapted for image processing by systems or methods which use algorithms, whether provided by software loaded in computers or by dedicated hardware for accomplishing specific functions on the image matrix.
  • each isolated pixel Pi,j or voxel Vi,j,k provides nothing but the simple indication of its brightness value, i.e. the gray scale value corresponding thereto, hence it carries no meaning from which image information may be extracted. It only acts as data for controlling the display device and, as such, it may be, and actually is, handled during the imaging process to adjust the general aspect of the image, e.g. contrast and/or brightness and/or specific colors, as defined by user-selected functions depending on either objective or subjective criteria.
  • the image obtained thereby derives from the relation of each image pixel with the surrounding pixels. Therefore, in order to allow image processing which does more than adjust individual pixels to improve the quality of the displayed image, it is necessary to define the relations between each pixel and the pixels around it. At present, no rule exists to determine such relations, except those defined on the basis of assumptions or presumptive rules based on the specific characteristics of the objects reproduced by the image.
  • the invention is based on the problem of providing a method for encoding image pixels which makes it possible to account for the relations of each pixel with the pixels around it, substantially regardless of the peculiar characteristics of the object specifically reproduced in the image, i.e. a method that can be used to provide an image data set particularly suited to an image processing procedure aimed at recognizing at least some characteristics of the objects represented in the image, as well as their shapes.
  • An additional object is to provide an encoding process as mentioned above, which is simple and requires neither complex processing steps, nor long processing times, and does not cause the hardware required to store the encoded data to be overloaded.
  • the invention achieves the above purposes by providing a method for encoding pixels of digital or digitized images, wherein each pixel of the pixel set which forms the image is uniquely identified with a vector whose components are given by the data of the pixel to be encoded and by the data of at least one or at least some or at least all of the pixels around the pixel to be encoded, which pixels are disposed within a predetermined subset of pixels included in the total set of pixels that form the image.
  • the components of the pixel identifying vector are determined by selecting, as pixels surrounding the pixel to be identified, all the pixels that are directly adjacent to said pixel to be encoded.
  • the components of a pixel identifying vector may be also extended to at least one or at least some or all of the pixels which surround the pixels directly adjacent to the pixel to be encoded.
  • the components of the identification vector corresponding to the pixel to be identified and to the surrounding pixels are arranged in such a manner as to reflect the arrangement of the pixels within the pixel matrix which forms the image, with reference to a predetermined pixel reading sequence, for forming said vector.
  • the components of the identification vector, corresponding to the pixel to be identified and to the surrounding pixels are arranged in such a manner as to correspond to the distance relation of said pixels with one another and with the pixel to be encoded, with reference to a predetermined pixel reading sequence, for forming said vector.
  • the components of the identification vector are arranged in such a manner that the pixel to be identified has a central position which corresponds to the one taken in the image pixel set, obviously as related to the surrounding pixels, which pixel set has been selected for determining the identification vector components.
  • the method includes the generation of an identification vector for each pixel which forms the digital or digitized image.
  • the virtual image composed of a data matrix which corresponds to a set of virtual pixels, i.e. a set of data having the same position as that of real, actually displayed image pixels, is turned into a matrix in which each element has, at a certain pixel location, the identification vector therefor, which in turn has the numerical structure as defined above.
  • since the identification vector includes components given by the data associated with a certain predetermined number of pixels surrounding the pixel to be identified, the latter is defined not only by the numerical value corresponding to its brightness, but also by the numerical values corresponding to the brightness of the surrounding pixels selected to form the identification vector components.
  • thus, the matrix of pixels, i.e. of the brightness data associated with the pixels, is changed into a set of vectors.
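  • by way of illustration only, the following minimal sketch (not part of the patent; the function name and the zero-padding of border pixels are assumptions) shows how such an encoding into 3x3-neighborhood identification vectors could be implemented in Python with NumPy.

```python
import numpy as np

def encode_pixels_2d(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Encode each pixel of a 2-D gray-scale image as an identification
    vector holding its own value plus the values of the surrounding pixels
    inside a (2*radius+1) x (2*radius+1) window (3x3 by default).
    Border pixels are handled here by zero-padding; the patent leaves
    this choice open."""
    padded = np.pad(image, radius, mode="constant", constant_values=0)
    h, w = image.shape
    win = 2 * radius + 1
    vectors = np.empty((h, w, win * win), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            # The window is read in row-major order, so the encoded pixel
            # occupies the central component of its own vector.
            window = padded[i:i + win, j:j + win]
            vectors[i, j, :] = window.ravel()
    return vectors

# Example: a 4x4 gray-scale image encoded into 9-component vectors.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
vecs = encode_pixels_2d(img)
print(vecs.shape)   # (4, 4, 9)
print(vecs[1, 1])   # vector for the pixel at row 1, col 1; the central (5th) component is its own value
```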
  • the pixel identifying vector may be also extended to other components, e.g. the values of the selected pixels and of the pixel to be identified at different instants of time. This is advantageous when, for instance, different digital or digitized images of the same frame, acquired at different instants, are available.
  • the vector will be associated to a succession of different sets of components, comprising the value of the pixel to be identified and the values of the selected pixels around it, each set being determined by an image acquired or referred to the same frame at different acquisition instants.
  • the component sets are ordered within the identification vector in a succession corresponding to the time sequence of capture thereof.
  • the encoding method of the invention, besides allowing each pixel to be identified based on its numerical value and on its relation to a certain number of surrounding pixels, also extends this identification to the time variation of the pixel to be identified and to the time variations of its relations to the selected surrounding pixels. Thanks to the encoding method according to the invention, a numerical description may be provided for each image pixel, even for sequences of images representing moving objects, any change caused by the movement of the object being contained in the identification vector.
  • the method as described above may be easily implemented both for two- and three-dimensional imaging. In the latter case, the number of components of the identification vector considerably increases, in a cubic progression, if all the pixels which form the increasingly distant shells around the pixel to be identified are to be accounted for.
  • the pixel selection pattern around the pixel to be identified and whose data shall form the components of the identifying vector may vary depending on actual needs.
  • the invention also relates to an image processing method, particularly aimed at recognizing objects and/or object shapes, in an image in which pixels are encoded into identification vectors.
  • the method for processing digital or digitized images includes the following steps:
  • the invention is not limited to said algorithm, but may use any type of algorithm for comparing the identification vectors of the image pixels to be processed with the teaching database, such as a discriminating algorithm which performs a straightforward comparison and decides whether the identification vector belongs to one or the other type of object or feature among the various possibilities.
  • the pixel encoding method according to this invention provides highly reliable and accurate results, i.e. higher than is currently expected.
  • the result provided by the expert processing system may be viewed by simply printing or displaying a list.
  • the processing result may be highlighted by associating a color to each type or quality and by representing the solution over the digital or digitized image with each pixel of the digitized image being assigned the color of the corresponding type or quality of the represented object, as determined by the expert processing system.
  • the teaching step, based either on different images of the same frame at different times or on images of different frames or objects whose type or quality is one of the predetermined options, allows the expert processing system, particularly a so-called neural network, to learn what the identification vector should look like for a particular object or a particular quality, across the widest possible variance of this aspect.
  • the recognition of the reproduced objects and/or qualities is independent from the global processing of the image and is performed pixel after pixel, with no reference to what the pixel set represents within the image.
  • the processing system is allowed to recognize more accurately and reliably whether an identification vector, hence a pixel, belongs to a certain type of object or to a certain quality.
  • Pixel-based processing makes it possible to substantially unlink the recognition of a pixel identifying vector as belonging to a certain object type or quality from the imaged subject.
  • the image processing method provides other advantages.
  • a first additional advantage consists in that the list of object types or qualities may be modified at any time, i.e. restricted or extended, without affecting the previous teaching process, by simple integration into the teaching database of the processing system. It is also possible to restrict image processing to recognizing only some of the types or qualities of the imaged object among all the qualities or types of the teaching database, without affecting any further extension thereof.
  • the database including the knowledge acquired by the system may be increased, thereby improving knowledge, expertise, hence reliability and accuracy of the processing system.
  • the same processing system may be used to accomplish different functions.
  • Yet another advantage provided by the recognition method of this invention consists in the possibility of limiting image definition during acquisition, while obtaining identical or even better results as regards the evaluation of the acquired image, thanks to a recognition which is better and more accurate than the human eye allows.
  • This provides an important advantage, since a lower resolution involves a reduced imaging duration, e.g. in Nuclear Magnetic Resonance or ultrasound imaging or other similar means. This not only reduces the costs required for fast imaging and image reconstruction apparatuses, but also has positive implications for the comfort of the patient, who does not have to keep still for very long times.
  • a particular application of the image recognition method of the invention consists in the automatic recognition of tissue types from the diagnostic images acquired by Nuclear Magnetic Resonance imaging, ultrasound imaging, radiography, etc.
  • the method includes the following steps:
  • the result is displayed as color assignments, where predetermined colors are assigned to the different types of object or to the different qualities of the pixels which have been found to belong to an object type and/or quality.
  • the method has no diagnostic function, but generates considerably reliable indications for the physician or for the technical personnel responsible for the evaluation of the acquired image. No direct treatment suggestion is provided, but simply an indication of a type of tissue which is highly likely to be found in the image. The actual and total certainty of the result for final diagnosis requires both the image to be read and interpreted by qualified personnel and other cross-checks to be performed by other diagnostic methods.
  • considerable difficulties may arise in reading a diagnostic image such as a radiographic plate, an ultrasound image or a Nuclear Magnetic Resonance image, particularly when the diseases reproduced in the image have a very small extension.
  • the instrument provided by this invention makes it possible to reliably signal potential pathologic elements, while reducing the risk of misinterpretation and preventing such elements from being misinterpreted or even missed by the physician or qualified personnel.
  • Figure 1 shows the inventive encoding method in a highly simplified manner, in the case of a two-dimensional digital or digitized image, i.e. consisting of a set of pixels (image unit elements).
  • the example shows a pixel of the pixel set, denoted as 5, which is designed to be encoded in such a manner as to make the information available for any type of treatment, particularly for processing.
  • the image may comprise any number of pixels, and the steps of the method, as shown with reference to the pixel 5 of Figure 1, are executed for every pixel of the image.
  • the pixels around the pixel are used to form an identification vector for the pixel 5.
  • the surrounding pixels may be selected according to predetermined rules which may lead to different selections of surrounding pixels, as components of the identification vector, both as regards the number of the surrounding pixels to be selected as components of the identification vector and as regards the location of these surrounding pixels relative to the pixel to be encoded, in this case to the pixel 5.
  • One of the most obvious selections consists in using, as components of the identification vector for the pixel to be encoded, all the pixels directly adjacent to the pixel to be encoded, that is, in the notation referred to pixel 5 of Figure 1, the surrounding pixels 1, 2, 3, 4 and 6, 7, 8, 9.
  • the value represented by each pixel is given by a brightness value of the corresponding pixel, i.e. a gray value in a gray scale extending from white to black through a certain number of intermediate levels, which may have a different number of gray tones, depending on the quality of the digital image with respect to the color resolution of the imaging apparatuses.
  • each pixel may also have one variable for indicating the color to be assigned thereto.
  • Figure 1 shows the structure of the identification vector for a pixel with reference to pixel 5.
  • the vector comprises, in the same pixel indexing sequence within the pixel matrix, all the pixels which constitute the vector components, starting from pixel 1 and ending with pixel 9.
  • the pixel 5 located at the center of the pixel matrix appears to occupy a central place in the sequence of the identification vector components.
  • the identification vector for pixel 5 does not only contain gray-scale brightness, i.e. pixel aspect information about the pixel to be identified, but also brightness information about the pixels around it.
  • This vector structure is based on the acknowledgement that the content of an image is not recognizable based on the aspect of an individual pixel, but based on the relation between the aspect thereof and the aspect of the surrounding pixels.
  • each image dot is not important per se, unless it is evaluated with reference to the aspect of the surrounding dots or areas. Even from the visual point of view, what is shown in an image is recognized on the basis of a relative evaluation between the different areas of the image.
  • the selection of the surrounding pixels to create the identification vector for the pixel to be encoded is not governed by any specific rule.
  • as regards the identification vectors, it is possible to increase the number of surrounding pixels accounted for when generating them, by using, as vector components, at least some or all of the pixels of the pixel rings surrounding the central pixel to be encoded, at increasing distances from it.
  • in this case, however, the number of vector components increases drastically, and so does the load on identification vector processing. If the identification vector for pixel 5 is arranged to comprise, for instance, all the pixels that externally surround the illustrated 3x3 pixel matrix, the number of identification vector components increases from 9 to 25.
  • said additional pixels, at a longer distance from the pixel to be encoded, may be suitably weighted, possibly differently from each other, to attenuate their effect on the identification vector.
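  • the weighting of the more distant pixels is left open by the text; the short sketch below illustrates one possible, purely hypothetical scheme in which the 16 outer-ring components of a 5x5 neighborhood are attenuated by a constant factor.

```python
import numpy as np

def weighted_vector_5x5(window_5x5: np.ndarray, outer_weight: float = 0.5) -> np.ndarray:
    """Build a 25-component identification vector from a 5x5 window,
    attenuating the 16 outer-ring pixels by `outer_weight` while the
    inner 3x3 block (9 components, including the pixel to be encoded)
    keeps its full value.  The weight value is illustrative; the patent
    only states that more distant pixels may be weighted."""
    weights = np.full((5, 5), outer_weight, dtype=float)
    weights[1:4, 1:4] = 1.0          # inner 3x3 block at full weight
    return (window_5x5 * weights).ravel()

window = np.ones((5, 5))
print(weighted_vector_5x5(window).reshape(5, 5))   # outer ring at 0.5, inner 3x3 at 1.0
```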
  • Fig. 2 shows the situation of a three-dimensional image, in which the central pixel 14 is encoded by an identification vector which has, as components, the values of all the pixels directly surrounding it and subtending a 3x3x3 cube, whereby it includes 27 components.
  • the considerations proposed for the two-dimensional embodiment also apply to the three-dimensional embodiment.
  • the number of components increases in a cubic progression, from 27 pixels to 125 pixels, if all the pixels of a 5x5x5 cube are considered in the identification vector.
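  • the following sketch, again illustrative and not taken from the patent, extends the same encoding to the voxels of a three-dimensional image; the component counts it prints (27 and 125) match the figures given above.

```python
import numpy as np

def encode_voxels_3d(volume: np.ndarray, radius: int = 1) -> np.ndarray:
    """Encode each voxel of a 3-D image as the flattened values of the
    (2*radius+1)**3 cube centred on it: 27 components for radius=1
    (3x3x3) and 125 for radius=2 (5x5x5).  Zero-padding at the borders
    is an illustrative choice."""
    win = 2 * radius + 1
    padded = np.pad(volume, radius, mode="constant", constant_values=0)
    d, h, w = volume.shape
    vectors = np.empty((d, h, w, win ** 3), dtype=volume.dtype)
    for k in range(d):
        for i in range(h):
            for j in range(w):
                cube = padded[k:k + win, i:i + win, j:j + win]
                vectors[k, i, j, :] = cube.ravel()
    return vectors

vol = np.random.randint(0, 256, size=(8, 8, 8), dtype=np.uint8)
print(encode_voxels_3d(vol, radius=1).shape[-1])   # 27
print(encode_voxels_3d(vol, radius=2).shape[-1])   # 125
```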
  • the pixel vector encoding method also makes it possible to integrate the behavior through time of the pixel under examination in the identification vector, when a sequence of images of the same frame is available.
  • the sequence of images may be composed, for instance, of frames of a motion picture or of individual images of the same frame as taken at successive instants.
  • An example of imaging of the same frame at successive instants consists in diagnostic ultrasound imaging of contrast agent perfusion.
  • the perfusion of contrast agents pushed by the flows of vascular circulation is imaged by injecting contrast agents in the anatomic part under examination at the instant Tc, and by subsequently imaging the same part at predetermined time intervals. Time variations of the image make it possible to check for the presence of contrast agents after a certain period from the injection instant.
  • These images may provide useful deductions and/or information to check the presence of vascular and/or tumor diseases.
  • the recognition of the reproduced object is not only based on the aspect thereof, but also on the time variation of said aspect. Therefore, the pixel vector encoding process aimed at including, in the identification vector for each pixel, all the data characterizing the quality or type of the object reproduced by an image pixel must especially account for the time variation of the encoded pixel.
  • the identification vector for a predetermined pixel contains a set of 9 components, relating to pixels 1 to 9 in the proper time sequence, for each instant whereat the corresponding image has been captured.
  • the embodiment has been developed with reference to the acquisition of a sequence of ultrasound images of the same anatomic part, performed after the injection of contrast agents.
  • the instant TC whereat contrast agents are injected is denoted by the arrow TC.
  • the encoding vector according to the example of Figure 3 would include, for all six images of the image time sequence, 162 components. If encoding is extended to a 5x5x5 pixel three-dimensional space, the components of the identification vector for the pixel will increase to 750.
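  • a hedged sketch of this time-sequence encoding follows; it assumes, as the component counts above suggest (27 x 6 = 162 and 125 x 6 = 750 for six acquisition instants), that the three-dimensional neighborhoods of Figure 2 are concatenated in acquisition order, and all names in the code are illustrative.

```python
import numpy as np

def encode_voxel_sequence(frames, position, radius=1):
    """Build the identification vector of one voxel across a time
    sequence: for each acquisition instant, the (2*radius+1)**3
    neighborhood values are appended in acquisition order.
    With 6 frames this yields 27*6 = 162 components for radius=1 and
    125*6 = 750 for radius=2, matching the counts in the text under
    the assumption of a three-dimensional neighborhood."""
    win = 2 * radius + 1
    k, i, j = position
    parts = []
    for frame in frames:                 # frames ordered by acquisition time
        padded = np.pad(frame, radius, mode="constant", constant_values=0)
        cube = padded[k:k + win, i:i + win, j:j + win]
        parts.append(cube.ravel())
    return np.concatenate(parts)

frames = [np.random.randint(0, 256, (8, 8, 8), dtype=np.uint8) for _ in range(6)]
print(encode_voxel_sequence(frames, (4, 4, 4), radius=1).shape)   # (162,)
print(encode_voxel_sequence(frames, (4, 4, 4), radius=2).shape)   # (750,)
```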
  • the inventive encoding method may be carried out in a simple and fast manner; it makes it possible to identify the characteristics of an image pixel through its value as well as in relation to its surrounding pixels, and also with reference to the time variations of the pixel to be encoded and of the surrounding pixels.
  • this encoding method only accounts for pixels within a substantially restricted examination field, which is independent of the subject of the image and of the encoding purpose.
  • encoding times for an image or a sequence of images strictly depend on the image size, in terms of number of pixels.
  • the encoding method as disclosed above, which includes the information about the aspect of a pixel with reference to the surrounding pixels and/or the time variations thereof, enables image processing methods which substantially require recognition of the object type or of the quality of what is reproduced by the pixel, and allows the type and/or quality of the object to be recognized automatically by the processing system or software.
  • Figure 4 is a block diagram of a method of processing digital or digitized images, operating on the basis of the previously described encoding method.
  • the processing method includes two steps: teaching the processing system and processing.
  • Processing is performed, per se, by an algorithm which basically executes comparisons between a database, including a certain number of pixel identification vectors associated with the corresponding type of object or quality, and the identification vectors of the pixels of an image to be processed.
  • the comparison algorithm assigns to each pixel encoding vector of the image to be processed, or of a predetermined portion thereof, the most appropriate, most probable or closest object type or quality among those associated with the pixel identification vectors included in the database.
  • the processing algorithm may be a simple discriminating algorithm, for instance an LDA algorithm (Linear Discriminant Analysis; S.R. Searle, 1987, Linear Models for Unbalanced Data, New York, John Wiley & Sons), or a more complex algorithm, e.g. a neural network.
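  • purely as an illustration of the teaching-and-comparison scheme described above, the sketch below uses scikit-learn's LinearDiscriminantAnalysis as the discriminating algorithm; the synthetic data, labels and variable names are assumptions and do not reproduce the patent's experiments.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Teaching: identification vectors of pixels with known tissue type.
# X_train: (n_pixels, 9) vectors from 3x3 neighborhoods,
# y_train: integer labels, e.g. 0 = benign tumor tissue, 1 = malignant (illustrative).
rng = np.random.default_rng(0)
X_train = rng.random((1000, 9))
y_train = rng.integers(0, 2, 1000)

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)            # builds the model from the teaching database

# Processing: assign the most probable type to each new pixel vector.
X_new = rng.random((5, 9))
print(clf.predict(X_new))            # predicted tissue type per pixel
print(clf.predict_proba(X_new))      # per-class probabilities
```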
  • the image processing procedure to be used is a typical application for neural networks, i.e. an application in which a very great number of typically simple operations is required, and which finds no exact numerical solution due to the considerable number of identical processing steps to be performed. In practice, a brute-force execution of the steps for comparing the identification vectors of the image pixels to be processed with the identification vectors of the pixels of the reference database would require so long computing times as to be unacceptable.
  • a number of neural networks might be used, for instance those known as: MetaGen1, MetaGen, MetanetAf, MetaBayes, MetanetBp, MetanetCm (M. Buscema (ed.), 1998, SUM Special Issue on ANNs and Complex Social Systems, Volume 2, New York, Dekker, pp. 439-461; M. Buscema and Semeion Group, 1999, Artificial Neural Networks and Complex Social Systems [in Italian], Volume 1, Rome, Franco Angeli, pp. 394-413; M. Buscema, 2001, Shell to Program Feed Forward and Recurrent Neural Networks and Artificial Organisms, Rome, Semeion Software n. 12, ver. 5.0), TasmSABp, TasmSASn (M. Buscema and Semeion Group, 1999, Artificial Neural Networks and Complex Social Systems [in Italian], Volume 1, Rome, Franco Angeli, pp.
  • the teaching step consists in generating a database of pixel identification vectors which are uniquely associated with the type of object or quality reproduced by the pixels of digital or digitized images encoded as described above and interpreted on the basis of visual inspection performed by qualified personnel.
  • the identification vector of each pixel is associated with the type of object or quality of what is reproduced by the pixel, a list of the object types or qualities of interest having been previously defined, consistent with the typical subjects of the digital or digitized images used for teaching, hence for generating the knowledge database to be provided to the processing algorithm.
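  • the construction of such a teaching database can be sketched as follows, assuming annotated label maps prepared by qualified personnel; the function name and the zero-padding of borders are illustrative choices, not prescribed by the patent.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def build_teaching_database(images, label_maps, radius=1):
    """Pair each pixel's identification vector (3x3 neighborhood by
    default) with the object type or quality assigned to that pixel by
    qualified personnel, supplied here as an integer label map per image.
    Returns (vectors, labels) ready for the chosen processing algorithm."""
    win = 2 * radius + 1
    X, y = [], []
    for image, labels in zip(images, label_maps):
        padded = np.pad(image, radius, mode="constant", constant_values=0)
        windows = sliding_window_view(padded, (win, win))   # shape (h, w, win, win)
        X.append(windows.reshape(-1, win * win))
        y.append(np.asarray(labels).ravel())
    return np.vstack(X), np.concatenate(y)

img = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
lbl = np.zeros((16, 16), dtype=int)
lbl[4:8, 4:8] = 1          # illustrative label, e.g. 1 = tumor tissue
X, y = build_teaching_database([img], [lbl])
print(X.shape, y.shape)    # (256, 9) (256,)
```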
  • the knowledge database for teaching the processing algorithm is provided to, or made accessible to, the processing algorithm, depending on the specific teaching mode of the selected processing algorithm.
  • a digital or digitized image of a subject is encoded with the above described method, in a manner compatible with that used for the images forming the knowledge database, and a list of object types or qualities is defined among those included in the knowledge database of the processing algorithm.
  • the processing algorithm substantially compares the identification vectors of the individual pixels generated by the encoding process with those of the knowledge database, and assigns to each pixel the most probable type or quality of the reproduced object.
  • the different indications of object types or qualities associated to each identification vector for image pixels are then displayed by printing lists and/or by differentially highlighting, e.g. by colors, the pixels of the image to be processed directly on the image.
  • the digitized or digital images may be two-dimensional or three-dimensional with reference to what has been described for the encoding method, or may consist each of a sequence of images of the same frame, as acquired at different instants.
  • Figure 4 shows the two teaching and processing steps. 10 denotes a set of digital or digitized images, both individual and in the form of image sequences; 11 denotes the procedure of encoding each pixel of said images into the corresponding identification vector; 12 denotes the step of uniquely associating the object quality or type reproduced by each pixel with the corresponding identification vector, based on the list of predetermined object types or qualities 13; and 14 denotes the reference or teaching database for the image processing algorithm.
  • a digital or digitized image or a set of such images, such as a sequence of images of the same frame, denoted as 18, is subjected to a step of pixel encoding into identification vectors, denoted as 19, and the identification vectors are provided to the processing algorithm 17, which is also supplied with a list of the types or qualities specifically sought for, included in the list 13 with which the teaching database 14, accessed by the processing algorithm 17, was prepared.
  • the processing algorithm assigns an object type or a quality to each identification vector for the pixels of the image/s 18, and the identification vectors are decoded in 20 into the corresponding pixels, each being given a pixel aspect change uniquely related to the type associated thereto, for instance a color or the like.
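  • the decoding and display step may be sketched, for illustration only, as a mapping from the per-pixel types assigned by the processing algorithm to a color-coded image; the palette below is an arbitrary assumption.

```python
import numpy as np

def colorize_result(labels_2d: np.ndarray, palette: dict) -> np.ndarray:
    """Turn the per-pixel types/qualities assigned by the processing
    algorithm back into an RGB image in which each type is shown with
    its own color (palette values are illustrative)."""
    h, w = labels_2d.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for label, color in palette.items():
        rgb[labels_2d == label] = color
    return rgb

# e.g. 0 = background, 1 = benign tumor tissue, 2 = malignant tumor tissue (illustrative)
palette = {0: (0, 0, 0), 1: (0, 255, 0), 2: (255, 0, 0)}
labels = np.random.randint(0, 3, (16, 16))
overlay = colorize_result(labels, palette)
print(overlay.shape)   # (16, 16, 3)
```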
  • the pixels marked thereby are displayed on the screen, e.g. over the original image/s, and/or a list of the identification vectors for the pixels of the digital image/s to be processed is printed, and/or the image displayed on the screen is printed.
  • the data provided from the algorithm may be used for further processing, based on the recognition of the object qualities or types reproduced by the individual pixels thanks to the processing algorithm.
  • Any further processing or handling of the data provided by the algorithm may be performed by the algorithm itself or by other types of algorithms, depending on the desired functions or handling purposes.
  • the above processing method may be used to simply recognize objects, or qualities or conditions or statuses of the objects reproduced by pixels.
  • This type of processing is advantageously used in the medical field, as an automatic support to reading and interpretation of diagnostic images, particularly radiographic images, ultrasound images, Nuclear Magnetic Resonance images, or the like.
  • the method of the invention may be used to recognize shapes or types of objects in images with the same subject and substantially the same frames, but being shot or acquired with different methods.
  • each of the images of the same subject and showing substantially the same frame may be processed with the processing method of the invention, whereupon the pixels of the different images, having substantially identical positions therein and being associated to the same object type or object quality are shown in overlaid positions, thereby providing an image which contains the details of the same subject, as imaged with the three methods.
  • This may be advantageous to integrate into a single image, details that may only be recognized and reproduced with some of the acquisition or imaging techniques or modes, as well as details that may be only recognized and imaged with other acquisition or imaging techniques.
  • the processing method may be used for image correction, e.g. to accurately correct defocused images.
  • the inventive method may be used to generate a focused image, by identifying the pixels which reproduce unfocused borders and removing or modifying them to obtain the focused image.
  • all data obtained by different imaging techniques, e.g. ultrasound, radiographic and MR imaging, may be integrated into a single image.
  • Figures 5 to 13 show the results of an embodiment of the inventive method as applied to the medical field and to the purpose of supporting the diagnostic activity of the physician.
  • EXAMPLE 1 (Fig. 5)
    SUBJECT: BREAST
    IMAGING METHOD: NUCLEAR MAGNETIC RESONANCE
    PURPOSE: RECOGNITION OF TISSUE TYPES
    TISSUE TYPES: 1. BENIGN TUMOR TISSUE; 2. MALIGNANT TUMOR TISSUE
  • a teaching database for the image processing algorithm is generated to recognize two types of tissues, i.e. benign tumor and malignant tumor in the breast region.
  • a predetermined number of Nuclear Magnetic Resonance images of the breast region of patients diagnosed with a malignant breast tumor and of patients diagnosed with a benign breast tumor are pixel encoded according to the method described above.
  • the identification vectors for the pixels have, as components, all the surrounding pixels in a 3x3 pixel matrix, in which the pixel to be encoded is the central pixel (Fig. 1).
  • the identification vector for each pixel is assigned the type of tissue reproduced by the pixel in the image.
  • the teaching database for the image processing algorithm contains identification vectors of image pixels relating to two tissue types, i.e. malignant tumor tissues of the breast region and benign tumor tissues of the breast region.
  • a sequence of Nuclear Magnetic Resonance images of the breast region of different patients, which were not used for generating the teaching database, is encoded as disclosed above with reference to Figure 1, i.e. according to the same pixel encoding method used for the images from which the teaching database for the processing algorithms was created.
  • An example of these images is shown in Figure 8.
  • the white ring denotes the presence of benign tumor tissue.
  • the identification vectors for the individual pixels are provided to the processing algorithm for the recognition of the tissue type reproduced thereby.
  • the algorithm assigns to the different identification vectors, hence to the corresponding pixels, the type of tissue represented thereby based on the teaching database.
  • the result thereof is displayed by appropriately and differentially coloring the pixels whereto the type of benign or malignant tumor tissue has been assigned.
  • Figure 9 shows a tissue type recognition result example referred to the image of Figure 8, in which the white outlined area had been recognized by visual analysis as representing the benign tumor tissue.
  • the black screened white zone represents the pixels whereto the processing algorithms assigned the type of benign tumor tissue.
  • the white encircled black zones denote the pixels whereto the processing algorithms assigned the type of malignant tumor tissue.
  • Figure 5 shows both in a data table and in a chart the prediction reliability results of tissue type recognition obtained by processing with the different neural networks as listed above.
  • the results obtained therefrom are expressed in terms of correct benign or malignant tumor tissue recognition percentage, of recognition sensitivity and of weighted and arithmetic correct recognition average, as well as absolute errors.
  • the chart only shows the two tissue type recognition percentages and the errors.
  • the method substantially provides support to diagnostic image reading and interpretation, aimed at better location and recognition of specific tissue types represented in the images.
  • the difficulties in reading and interpreting diagnostic images, whether MRI, ultrasound or radiographic, are self-evident from Figure 8.
  • the example 2 is similar to the example 1, an additional tissue type, i.e. normal tissue, being included in the recognition database.
  • the encoded vectors for the pixels of the images are uniquely associated to one of the tissue types represented thereby, i.e. benign tumor tissue, malignant tumor tissue or normal tissue.
  • This additional type makes it possible to rely on a greater number of pixels, and corresponding identification vectors, having a definite meaning.
  • in the first example, pixel identifying vectors not corresponding to benign or malignant tumor tissue, and the corresponding pixels, have no meaning for the processing algorithm, whereas in this second example the processing algorithm can assign them an additional well-defined class or type of tissue.
  • the example 3 is similar to the above examples, but includes five tissue types, i.e.: benign tumor tissue, malignant tumor tissue, normal tissue, muscular tissue and image background.
  • the teaching database is generated as described above with reference to the previous examples and includes pixel identifying vectors, each being uniquely assigned one of the above five types, i.e. the one represented by the respective pixel.
  • the tissue type recognition result per image pixel is shown in Figure 7 for the following processing algorithms: MetanetAf, MetaBayes, MetanetBp, MetanetCm, FF-Bm, FF-Sn, FF-Bp, FF-Cm and LDA.
  • LDA is a discriminating algorithm.
  • the discriminating algorithm also provides unexpectedly good results relative to its usual capabilities, although these results are definitely lower than those obtained by the neural networks.
  • the chart shows the errors for each different algorithm.
  • Fig. 10 shows an example of result visualization by differentiated pixel coloring, depending on the different types recognized therefor, and with reference to the example of Figure 8.
  • the muscular tissue, the background, the normal tissue and the benign tumor tissue are properly recognized.
  • Figures 11, 12 and 13 show an example of Nuclear Magnetic Resonance digital photographs of a breast region including a malignant tumor tissue, as highlighted by a white ring in Figure 11 and by a corresponding partial enlarged view in Fig. 12.
  • Fig. 13 shows the result obtained in terms of recognition of the tissue types reproduced by the image pixels, thanks to the method of the invention, with the help of a neural network as a processing algorithm.
  • the teaching database is the same as in the example 3, including all five tissue types.
  • Fig. 13 shows an example of recognition processing result visualization.
  • the described method is not limited to a specific type of frame. Thanks to the fact that the encoding method according to the invention accounts for the relation between the encoded pixel and the surrounding pixels, the teaching database actually makes it possible to identify and recognize the tissue type reproduced by a pixel of a digital image of the same anatomic region, or possibly of different anatomic regions, regardless of the specific image frame.
  • the teaching database may dynamically grow by the addition of data gathered and confirmed through successive processing procedures.
  • the identification vector-tissue type pairs so formed may be themselves loaded in the teaching database which grows with the use of the method, thereby making the processing algorithm increasingly expert and reducing the indecision or error margin.
  • the recognition processing may be changed as regards the number of different tissue types to be recognized.
  • Image processing aimed at recognizing tissue types or qualities is also possible by a pixel vector encoding method, which accounts for time variations of the pixel reproducing a specific object, i.e. the type according to the encoding example of Figure 3. As disclosed above, this encoding type allows to encode pixels of image sequences.
  • a tissue recognition method may thus be provided for moving subjects, such as the heart, e.g. in ultrasound or similar imaging.
  • a similar application field for the combination of the tissue type recognition processing method for digital or digitized image sequences with the above pixel encoding method, in which the identification vector includes, for each pixel and for each image of the sequence, the values of the pixel to be encoded and of the pixels around it, is the recognition of tissues or of vascular or lymphatic flows, with or without the injection of contrast agents, as well as the recognition and measurement of contrast agent perfusion.
  • the sequence of images acquired with time after the injection of contrast agents is encoded with the method as described with reference to Figure 3.
  • the teaching database for the processing algorithm includes behavior types, e.g. arterial blood flow or lymphatic or venous flow and stationary tissues and/or tissues of vessel walls. Then, the recognition results are displayed e.g. by appropriately coloring the pixels relating to the different types.
  • the same processing unit, i.e. the hardware in which the processing software is loaded, may perform any of the above recognition processing procedures, by simply providing the processing software with the proper teaching database for the images to be processed and, obviously, by encoding the images to be processed.
  • the processing method of the invention does not substantially change.
  • a teaching database shall be generated in which known images with or without artifacts are encoded, by assigning the artifact type to artifact-reproducing pixels and the correct pixel type to correct object reproducing pixels. Once an image or a sequence of images has been recognized, it may be easily corrected by suppressing artifact-related pixels or by assigning to artifact-related pixels the tissue types or qualities which they might have with reference, for example, to surrounding pixels.
  • Defocusing may be corrected in a similar manner.
  • the processing method of the invention may be also advantageously used to generate images composed of individual images of the same subject as obtained by different techniques, e.g. Nuclear Magnetic Resonance imaging, ultrasound imaging and x-ray imaging.
  • the teaching database will contain pixel encoding vectors for all three images obtained with the three different techniques, with tissue types or qualities corresponding to said pixels being uniquely associated to said vectors.
  • image portions are uniquely associated to specific tissue types and said well-defined portions may be displayed in overlaid positions or other combined arrangements within a single image.
  • An additional application of the inventive recognition method, in combination with imaging methods, particularly for diagnostic purposes, such as ultrasound or Nuclear Magnetic Resonance imaging methods, consists in that imaging is performed with less accurate but considerably faster imaging sequences or techniques and that the displayed image is an image processed with the recognition method of this invention.
  • This arrangement may be very useful particularly for ultrasound or Nuclear Magnetic Resonance imaging, which require relatively long imaging times, in certain situations, and provides apparent advantages.
  • the processing method of the invention also provides considerable advantages in the recognition of tissue types, like potentially diseased tissues, for example tumor tissues at very early stages.
  • x-ray mammography, for example, is performed with spatial resolutions of about 7 micron. Therefore, these images, or the data associated with them, have such a resolution that different tissue types may be discriminated at very early growth stages, in groups of a few cells. Nevertheless, the human eye only has a spatial resolution of 100 micron, hence the considerable imaging resolution cannot currently be fully exploited.
  • the method of this invention does not have any spatial resolution limits, except those possibly associated to image digitizing means.
  • the spatial resolution limits of the human eye may be overcome by using appropriate image digitizing or digital sampling means, to potentially reach the spatial resolution available at the imaging stage.
  • the result is a digitized virtual image consisting of a two-dimensional, three-dimensional or multi-dimensional data set, in which the virtual image is composed of image data for image unit dots at a spatial resolution below that of the human eye.
  • the processing method essentially includes the same steps as described above, i.e. generating pixel encoding vectors and quality and type recognition processing, particularly for tissues, as described above.
  • the different object types or object qualities may be highlighted by appropriately changing the aspect of the pixel related thereto, e.g. by a suitable differentiated coloring arrangement.
  • the pixel data matrix may be reconstructed, and said data may be used to control, for instance, a printer and/or a display screen.
  • the printer or display may be controlled in such a manner as to allow the individual pixels to also be displayed at the resolution of the human eye, e.g. by an image variation in which the data of each pixel of the high resolution image, i.e. the image having a resolution below that of the human eye, is used to control a unit group of pixels of the display or printer, whose pixels take the same aspect as the corresponding pixel to be displayed.
  • the image is inflated rather than enlarged, each high resolution pixel being represented by displaying a pixel submatrix which comprises a sufficient number of pixels to generate an image portion having a resolution of the same order of magnitude as the one of the human eye or higher.
  • the 196 pixels of the unit group are controlled to assume the same aspect as assigned to the corresponding high definition pixel, thereby generating an image point which is visible to the human eye.
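  • a minimal sketch of this "inflation" step, assuming a 14 x 14 unit group (196 display pixels per high-resolution pixel) and NumPy's Kronecker product, is given below for illustration.

```python
import numpy as np

def inflate(image: np.ndarray, factor: int = 14) -> np.ndarray:
    """Replicate every high-resolution pixel into a factor x factor unit
    group of identical display pixels (14 x 14 = 196, as in the example
    above), so that a pixel below the resolution of the human eye becomes
    a visible image point."""
    return np.kron(image, np.ones((factor, factor), dtype=image.dtype))

tiny = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(inflate(tiny).shape)   # (28, 28): each pixel becomes a 14x14 block
```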
  • the above displaying steps allow to generate unit groups of high definition pixels which may also have a greater or smaller number of pixels, substantially corresponding to a greater or smaller enlargement of the individual high resolution pixels.
  • the enlargement factor may also be preset by the user, possibly by allowing an image portion to be delimited or defined, to which the enlargement displaying step is applied, and by allowing said image portion to be modified for successive enlargement steps with different enlargement and resolution factors.
  • multiple application fields may be provided, with particular reference to diagnostic image processing and to healthy or normal tissue or diseased tissue recognition, especially for benign and malignant tumor tissues.
  • the improvement as described above allows to analyze the tissue type and to obtain indications regarding the presence of benign or malignant tumor tissues at very early stages which, at a resolution of 7 micron, are composed of a very small number of cells.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
EP02425141A 2002-03-11 2002-03-11 Méthode de codage de points d'image et méthode de traitement d'image destinée à la reconnaissance qualitative d'un objet reproduit à l'aide d'un ou de plusieurs pixels Withdrawn EP1345154A1 (fr)

Priority Applications (8)

Application Number Priority Date Filing Date Title
EP02425141A EP1345154A1 (fr) 2002-03-11 2002-03-11 Méthode de codage de points d'image et méthode de traitement d'image destinée à la reconnaissance qualitative d'un objet reproduit à l'aide d'un ou de plusieurs pixels
PCT/EP2003/002400 WO2003077182A1 (fr) 2002-03-11 2003-03-10 Procede servant a coder des pixels d'images, procede servant a traiter des images et procede servant a traiter des images dans le but de la reconnaissance qualitative de l'objet reproduit par un ou plusieurs pixels
KR10-2004-7014304A KR20040102038A (ko) 2002-03-11 2003-03-10 이미지 픽셀 인코딩 방법, 이미지 처리 방법 및 하나이상의 이미지 픽셀들에 의해 재생된 객체의 특성 인식을위한 이미지 처리 방법
EP03711951A EP1483721A1 (fr) 2002-03-11 2003-03-10 Procede servant a coder des pixels d'images, procede servant a traiter des images et procede servant a traiter des images dans le but de la reconnaissance qualitative de l'objet reproduit par un ou plusieurs pixels
AU2003218712A AU2003218712A1 (en) 2002-03-11 2003-03-10 A method for encoding image pixels, a method for processing images and a method for processing images aimed at qualitative recognition of the object reproduced by one or more image pixels
US10/516,879 US7672517B2 (en) 2002-03-11 2003-03-10 Method for encoding image pixels a method for processing images and a method for processing images aimed at qualitative recognition of the object reproduced by one or more image pixels
CNB038056739A CN100470560C (zh) 2002-03-11 2003-03-10 图像像素编码方法和图像处理方法
JP2003575324A JP4303598B2 (ja) 2002-03-11 2003-03-10 画素の符号化法、画像処理法と、1以上の画素により再現されるオブジェクトの定性的な認識を目的とする画像処理法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP02425141A EP1345154A1 (fr) 2002-03-11 2002-03-11 Méthode de codage de points d'image et méthode de traitement d'image destinée à la reconnaissance qualitative d'un objet reproduit à l'aide d'un ou de plusieurs pixels

Publications (1)

Publication Number Publication Date
EP1345154A1 true EP1345154A1 (fr) 2003-09-17

Family

ID=27763487

Family Applications (2)

Application Number Title Priority Date Filing Date
EP02425141A Withdrawn EP1345154A1 (fr) 2002-03-11 2002-03-11 Méthode de codage de points d'image et méthode de traitement d'image destinée à la reconnaissance qualitative d'un objet reproduit à l'aide d'un ou de plusieurs pixels
EP03711951A Withdrawn EP1483721A1 (fr) 2002-03-11 2003-03-10 Procede servant a coder des pixels d'images, procede servant a traiter des images et procede servant a traiter des images dans le but de la reconnaissance qualitative de l'objet reproduit par un ou plusieurs pixels

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP03711951A Withdrawn EP1483721A1 (fr) 2002-03-11 2003-03-10 Procede servant a coder des pixels d'images, procede servant a traiter des images et procede servant a traiter des images dans le but de la reconnaissance qualitative de l'objet reproduit par un ou plusieurs pixels

Country Status (7)

Country Link
US (1) US7672517B2 (fr)
EP (2) EP1345154A1 (fr)
JP (1) JP4303598B2 (fr)
KR (1) KR20040102038A (fr)
CN (1) CN100470560C (fr)
AU (1) AU2003218712A1 (fr)
WO (1) WO2003077182A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006120195A1 (fr) * 2005-05-12 2006-11-16 Bracco Imaging S.P.A. Procede de codage des pixels ou des voxels d'une image numerique et procede de traitement d'images numeriques
DE102005039189A1 (de) * 2005-08-18 2007-02-22 Siemens Ag Bildauswertungsverfahren für zweidimensionale Projektionsbilder und hiermit korrespondierende Gegenstände
US7430313B2 (en) 2004-05-04 2008-09-30 Zbilut Joseph P Methods using recurrence quantification analysis to analyze and generate images
DE102009031141B3 (de) * 2009-06-30 2010-12-23 Siemens Aktiengesellschaft Ermittlungsverfahren für ein farbkodiertes Auswertungsbild sowie korrespondierende Gegenstände
IT202000023257A1 (it) 2020-10-02 2022-04-02 Esaote Spa Sistema e metodo per l’imaging diagnostico di acqua e grasso

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1611452A1 (fr) * 2003-03-31 2006-01-04 Koninklijke Philips Electronics N.V. Procede de visualisation de perfusion par resonance magnetique
AR047692A1 (es) * 2003-07-10 2006-02-08 Epix Medical Inc Imagenes de blancos estacionarios
US7215802B2 (en) * 2004-03-04 2007-05-08 The Cleveland Clinic Foundation System and method for vascular border detection
WO2006078902A2 (fr) * 2005-01-19 2006-07-27 Dermaspect, Llc Dispositifs et procedes pour identifier et surveiller les changements intervenant dans une region suspecte d'un patient
KR100752333B1 (ko) * 2005-01-24 2007-08-28 주식회사 메디슨 3차원 초음파 도플러 이미지의 화질 개선 방법
JP5133505B2 (ja) * 2005-06-24 2013-01-30 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー 画像判定装置およびx線ct装置
JP2009511163A (ja) 2005-10-14 2009-03-19 アプライド リサーチ アソシエイツ エヌゼット リミテッド 表面特徴を観察する方法とその装置
FR2923339B1 (fr) * 2007-11-05 2009-12-11 Commissariat Energie Atomique Procede de lecture d'une matrice bidimensielle de pixels et dispositif pour la mise en oeuvre d'un tel procede
JP5003478B2 (ja) * 2007-12-28 2012-08-15 Nkワークス株式会社 キャプチャーソフトウエアプログラムおよびキャプチャー装置
JP5039932B2 (ja) * 2007-12-28 2012-10-03 Nkワークス株式会社 キャプチャーソフトウエアプログラムおよびキャプチャー装置
TWI451749B (zh) * 2009-03-10 2014-09-01 Univ Nat Central Image processing device
US8423117B2 (en) * 2009-06-22 2013-04-16 General Electric Company System and method to process an acquired image of a subject anatomy to differentiate a portion of subject anatomy to protect relative to a portion to receive treatment
TWI463417B (zh) * 2010-08-20 2014-12-01 Hon Hai Prec Ind Co Ltd 圖像處理設備及圖像特徵向量提取與圖像匹配方法
US9179844B2 (en) 2011-11-28 2015-11-10 Aranz Healthcare Limited Handheld skin measuring or monitoring device
WO2013105793A2 (fr) * 2012-01-09 2013-07-18 Ryu Jungha Procédé d'édition d'image de caractères dans un appareil d'édition d'image de caractères et support d'enregistrement sur lequel est enregistré un programme pour exécuter le procédé
CN102902972B (zh) * 2012-09-14 2015-04-29 成都国科海博信息技术股份有限公司 人体行为特征提取方法、系统及异常行为检测方法和系统
EP3545990B1 (fr) * 2013-04-22 2023-05-24 Sanofi-Aventis Deutschland GmbH Dispositif supplémentaire pour collecter des informations concernant l'utilisation d'un dispositif d'injection
US9262443B2 (en) * 2013-05-15 2016-02-16 Canon Kabushiki Kaisha Classifying materials using texture
US20160147791A1 (en) * 2013-07-09 2016-05-26 Jung Ha RYU Method for Providing Sign Image Search Service and Sign Image Search Server Used for Same
US10049429B2 (en) 2013-07-09 2018-08-14 Jung Ha RYU Device and method for designing using symbolized image, and device and method for analyzing design target to which symbolized image is applied
FR3012710B1 (fr) * 2013-10-29 2017-02-10 Commissariat Energie Atomique Procede de traitement de signaux delivres par des pixels d'un detecteur
EP3081955A1 (fr) * 2015-04-13 2016-10-19 Commissariat A L'energie Atomique Et Aux Energies Alternatives Procédé d'imagerie par résonance magnétique pour déterminer des indices de signature d'un tissu observé à partir de modèles de signaux obtenus par impulsion de gradient de champ magnétique variable
CA3021697A1 (fr) 2016-04-21 2017-10-26 The University Of British Columbia Analyse d'image echocardiographique
US10013527B2 (en) 2016-05-02 2018-07-03 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
GB2554641A (en) * 2016-09-29 2018-04-11 King S College London Image processing
US11116407B2 (en) 2016-11-17 2021-09-14 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
EP3606410B1 (fr) 2017-04-04 2022-11-02 Aranz Healthcare Limited Procédés, dispositifs et systèmes d'évaluation de surface anatomique
EP3400878B1 (fr) 2017-05-10 2020-03-25 Esaote S.p.A. Procédé de localisation de cibles indépendante de la posture dans des images diagnostiques acquises par des acquisitions multimodales et système permettant de mettre en oevre ledit procédé
US10751029B2 (en) * 2018-08-31 2020-08-25 The University Of British Columbia Ultrasonic image analysis
JP7007324B2 (ja) * 2019-04-25 2022-01-24 ファナック株式会社 画像処理装置、画像処理方法、及びロボットシステム
CN115049750B (zh) * 2022-05-31 2023-06-16 九识智行(北京)科技有限公司 基于八叉树的体素地图生成方法、装置、存储介质及设备

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4776025A (en) * 1985-08-27 1988-10-04 Hamamatsu Photonics Kabushiki Kaisha Neighbor image processing exclusive memory
US6238342B1 (en) * 1998-05-26 2001-05-29 Riverside Research Institute Ultrasonic tissue-type classification and imaging methods and apparatus
US6324300B1 (en) * 1998-06-24 2001-11-27 Colorcom, Ltd. Defining color borders in a raster image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PETRICK N ET AL: "AUTOMATED DETECTION OF BREAST MASSES ON MAMMOGRAMS USING ADAPTIVE CONTRAST ENHANCEMENT AND TEXTURE CLASSIFICATION", MEDICAL PHYSICS, AMERICAN INSTITUTE OF PHYSICS. NEW YORK, US, vol. 23, no. 10, 1 October 1996 (1996-10-01), pages 1685 - 1696, XP000678022, ISSN: 0094-2405 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7430313B2 (en) 2004-05-04 2008-09-30 Zbilut Joseph P Methods using recurrence quantification analysis to analyze and generate images
US8422820B2 (en) 2004-05-04 2013-04-16 Rush University Medical Center Methods using recurrence quantification analysis to analyze and generate images
WO2006120195A1 (fr) * 2005-05-12 2006-11-16 Bracco Imaging S.P.A. Procede de codage des pixels ou des voxels d'une image numerique et procede de traitement d'images numeriques
CN101189641B (zh) * 2005-05-12 2012-05-02 布雷克成像有限公司 编码数字图像的像素或体素的方法及处理数字图像的方法
DE102005039189A1 (de) * 2005-08-18 2007-02-22 Siemens Ag Bildauswertungsverfahren für zweidimensionale Projektionsbilder und hiermit korrespondierende Gegenstände
US7729525B2 (en) 2005-08-18 2010-06-01 Siemens Aktiengesellschaft Image evaluation method for two-dimensional projection images and items corresponding thereto
DE102005039189B4 (de) * 2005-08-18 2010-09-09 Siemens Ag Bildauswertungsverfahren für zweidimensionale Projektionsbilder und hiermit korrespondierende Gegenstände
DE102009031141B3 (de) * 2009-06-30 2010-12-23 Siemens Aktiengesellschaft Ermittlungsverfahren für ein farbkodiertes Auswertungsbild sowie korrespondierende Gegenstände
US8948475B2 (en) 2009-06-30 2015-02-03 Siemens Aktiengesellschaft Method for computing a color-coded analysis image
IT202000023257A1 (it) 2020-10-02 2022-04-02 Esaote Spa Sistema e metodo per l’imaging diagnostico di acqua e grasso
US11971466B2 (en) 2020-10-02 2024-04-30 Esaote S.P.A. System and method for fat and water diagnostic imaging

Also Published As

Publication number Publication date
US7672517B2 (en) 2010-03-02
EP1483721A1 (fr) 2004-12-08
KR20040102038A (ko) 2004-12-03
CN100470560C (zh) 2009-03-18
US20060098876A1 (en) 2006-05-11
WO2003077182A1 (fr) 2003-09-18
JP2005519685A (ja) 2005-07-07
AU2003218712A1 (en) 2003-09-22
CN1639725A (zh) 2005-07-13
JP4303598B2 (ja) 2009-07-29

Similar Documents

Publication Publication Date Title
US7672517B2 (en) Method for encoding image pixels a method for processing images and a method for processing images aimed at qualitative recognition of the object reproduced by one or more image pixels
US4945478A (en) Noninvasive medical imaging system and method for the identification and 3-D display of atherosclerosis and the like
US10733788B2 (en) Deep reinforcement learning for recursive segmentation
CN110506278A (zh) 隐空间中的目标检测
EP1302163A2 (fr) Dispositif et méthode pour le calcul d'un indice des débits sanguins locaux
US9974490B2 (en) Method and device for segmenting a medical examination object with quantitative magnetic resonance imaging
US7136516B2 (en) Method and system for segmenting magnetic resonance images
JP2008521468A (ja) デジタル医療画像分析
EP3703007A2 (fr) Caractérisation de tissu tumoral au moyen d'une imagerie par résonance magnétique multiparamétrique
CN106780436B (zh) 一种医疗影像显示参数确定方法及装置
KR102030533B1 (ko) 근감소증 분석지원을 위한 인공 신경망 기반의 인체 형태 분석법을 채용하는 영상 처리 장치 및 이를 이용한 영상 처리 방법
US6205350B1 (en) Medical diagnostic method for the two-dimensional imaging of structures
CN111815735B (zh) 一种人体组织自适应的ct重建方法及重建系统
CN113159040B (zh) 医学图像分割模型的生成方法及装置、系统
CN109949288A (zh) 肿瘤类型确定系统、方法及存储介质
CN112334990A (zh) 自动宫颈癌诊断系统
CN116434918A (zh) 医学图像处理方法及计算机可读存储介质
US10663547B2 (en) Automatic detection and setting of magnetic resonance protocols based on read-in image data
CN115249279A (zh) 医学图像处理方法、装置、计算机设备和存储介质
JP2023114463A (ja) 表示装置、方法およびプログラム
KR20020079742A (ko) 고품질영상의 가시적 표시를 위한 상사치데이터의컨벌루젼필터링
US7218767B2 (en) Method of improving the resolution of a medical nuclear image
US20240169745A1 (en) Apparatus and method for vertebral body recognition in medical images
Kukar et al. Supporting diagnostics of coronary artery disease with neural networks
BENJAMIN Brain Morphology Quantification for Large MRI Cohorts using Convolutional Neural Networks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

AKX Designation fees paid
REG Reference to a national code

Ref country code: DE

Ref legal event code: 8566

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20040318