MXPA98005013A - Improved processing of intravascular ultrasound images and signals

Improved processing of intravascular ultrasound images and signals


Publication number
MXPA98005013A
MXPA/A/1998/005013A (MX 9805013A)
Authority
MX
Mexico
Prior art keywords
image
detector
images
transmitter
ultrasound signals
Application number
MXPA/A/1998/005013A
Other languages
Spanish (es)
Inventor
Richter Jacob
Nachtomy Ehud
Original Assignee
Medinol Ltd
Application filed by Medinol Ltd
Publication of MXPA98005013A


Abstract

A device and method for intravascular ultrasound imaging. A catheter including an ultrasonic device is inserted into, and can be moved through, a lumen of the body. The device transmits ultrasonic signals and detects the reflected ultrasound signals, which contain information related to the body lumen. A processor coupled to the catheter is programmed to derive a first image or series of images, and a second image or series of images, from the detected signals. The processor is also programmed to compare the second image or series of images with the first image or series of images, respectively. The processor can be programmed to stabilize the second image in relation to the first image and to limit drift. The processor can also be programmed to monitor the first and second images to determine cardiovascular periodicity, image quality, temporal change, and vessel movement. The processor can also be programmed to match the first series of images with the second series of images.

Description

IMPROVED PROCESSING OF INTRAVASCULAR ULTRASOUND IMAGES AND SIGNALS

Field of the Invention

The present invention relates to a device and method for the improved processing of intravascular ultrasound ("IVUS") images and signals, and more specifically to a device and method for processing intravascular ultrasound image and signal information so as to improve the quality and usefulness of intravascular ultrasound images.
BACKGROUND INFORMATION

Intravascular ultrasound images and signals are derived from an ultrasonic energy beam projected by a device such as a transducer or transducer array located around, adjacent to, or at the tip of a catheter inserted into a blood vessel. The ultrasound beam of the device rotates continuously inside the blood vessel, forming a 360° internal cross-sectional image; that is, the image is formed in a transverse (X-Y) plane. Depending on the configuration of the specific apparatus, the image can be derived either from the same transverse plane as the apparatus or from a transverse plane slightly forward (i.e., distal) of the transverse plane of the apparatus. If the catheter is moved along the blood vessel (i.e., along the Z axis), images of several segments (series of consecutive internal cross-sectional slices) of the vessel can be formed and visualized. Intravascular ultrasound can be used in all types of blood vessels, including, but not limited to, arteries, veins and other peripheral vessels, and in all parts of a body. The received (detected) ultrasonic signal is originally an analog signal. This signal is processed using analog and digital methods so that it eventually forms a set of vectors comprising digitized information. Each vector represents the ultrasonic response of a different angular sector of the vessel, i.e., a section of the blood vessel. The number of information elements in each vector (axial sampling resolution) and the number of vectors used to scan a full cross section (lateral sampling resolution) of the vessel may vary depending on the type of system used. The digitized vectors can initially be placed in a two-dimensional array or matrix having polar coordinates, that is, A(r, θ). In this polar matrix, for example, the X axis corresponds to the coordinate r and the Y axis corresponds to the coordinate θ. Each value of the matrix is a gray value (ranging from 0 to 255 if the system is 8 bit) that represents the strength of the ultrasonic response at that location. This polar matrix is usually not transferred directly to a visual display because the resulting image would not be easily interpreted by a physician. The information stored in the polar matrix A(r, θ) usually undergoes several processing steps and is interpolated into Cartesian coordinates, i.e., X and Y coordinates (A(X, Y)), which are more easily interpreted by a physician. Thus, the X and Y axes of the matrix A(X, Y) correspond to the Cartesian representation of the cross section of the vessel. The information in the Cartesian matrix may undergo further processing and is eventually displayed for the physician's analysis. The images are acquired and displayed at a rate that varies depending on the system. Some systems can acquire and display images at video display rates, for example, up to 30 images per second. An intravascular ultrasound examination of a segment of a body lumen, that is, of a vessel, is usually performed by placing the catheter distal to (i.e., downstream of) the segment to be examined and then pulling the catheter back slowly (pullback, or traction) along the body lumen (Z axis) so that the successive images forming the segment are continuously displayed. In many cases the catheter is connected to a mechanical pullback device that pulls the catheter at a constant speed (typically approximately 0.5-1 mm/sec).
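The polar-to-Cartesian interpolation described above can be sketched in a few lines of Python with NumPy. This is only an illustrative sketch and not the patent's implementation; the 256-sample matrix dimensions, the nearest-neighbour interpolation and the assumption that the catheter sits at the image centre are not taken from the original text.

    import numpy as np

    def polar_to_cartesian(polar, out_size=256):
        # polar: 2-D array with one angular vector per row (theta) and one
        # radial sample per column (r); returns a Cartesian image A(X, Y).
        n_theta, n_r = polar.shape
        ys, xs = np.mgrid[0:out_size, 0:out_size]
        cx = cy = (out_size - 1) / 2.0           # catheter assumed at the image centre
        dx, dy = xs - cx, ys - cy
        r = np.sqrt(dx ** 2 + dy ** 2) * (n_r - 1) / (out_size / 2.0)
        theta = (np.arctan2(dy, dx) % (2 * np.pi)) * n_theta / (2 * np.pi)
        ri = np.clip(np.round(r).astype(int), 0, n_r - 1)   # nearest-neighbour lookup
        ti = np.round(theta).astype(int) % n_theta
        cart = polar[ti, ri]
        cart[r > n_r - 1] = 0                    # outside the sampled radius
        return cart

    # Example: a synthetic 8-bit polar frame with 256 vectors of 256 samples
    polar = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    cartesian = polar_to_cartesian(polar)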
Today, the intravascular ultrasound imaging systems described above are generally used to show a cross-sectional image of a body lumen, e.g., a blood vessel. These systems are deficient in that they do not include any form of image stabilization to compensate for movements of the catheter and/or of the body lumen, e.g., the blood vessel. It is known that during intravascular ultrasound imaging of a body lumen there is always movement exhibited by the catheter and/or the body lumen. This movement can occur in the transverse (X, Y) plane, along the vessel axis (Z axis), or as a combination of such movements. The imaging catheter may also tilt in relation to the vessel, so that the imaging plane is not perpendicular to the Z axis (this movement is referred to as angulation). These movements are caused by, among other things, the beating of the heart, blood and/or other fluid flowing through the lumen, movement of the vessel, forces applied by the physician, and other forces caused by the physiology of the patient. In current intravascular ultrasound systems, when the imaging catheter is stationary or when slow manual or mechanical pullback is performed, the relative movement between the catheter and the lumen is the primary cause of the change in appearance between successive images, that is, as seen on the visual display and/or on film or video. This change in appearance occurs because the rate of change of an image due to movements is greater than the rate of change of the actual morphology due to pullback. Stabilization occurs when images are compensated for the relative movement between the catheter and the lumen in successive images. Because none of the currently used intravascular ultrasound systems perform stabilization, there is no compensation or correction for the relative movement between the catheter and the lumen. As a result, the morphological features are constantly moving or rotating on the display and/or on film or video. This makes it difficult for the physician to correctly interpret the morphology in a dynamic intravascular ultrasound display. Moreover, when non-stabilized intravascular ultrasound images are fed as input to a processing algorithm such as 3D reconstruction, or to different types of filters that process a set of successive images, the result can be degraded performance, an incorrect diagnosis or inaccurate determinations. Current intravascular ultrasound imaging devices or catheters may occasionally malfunction for mechanical or electronic reasons. This can cause the displayed images to contain recognized or unrecognized artifacts that obscure the actual morphology. Currently there are no automatic methods to determine whether the images contain this type of artifact, which hinders the analysis of images of the vessel or body lumen. The behavior of the cardiovascular system is usually periodic. The detection of this periodicity, and the ability to establish a correlation between an image and the phase of the cardiac cycle to which it belongs, is known as cardiac gating. Currently, cardiac gating is performed using an external signal, usually an electrocardiogram (ECG). However, ECG gating requires both the acquisition of the ECG signal and its interleaving (or synchronization) with the intravascular ultrasound images, which requires additional hardware and software.
The morphological features in intravascular ultrasound images of blood vessels can be separated into three general categories: the lumen, that is, the area through which blood or other body fluids flow; the layers of the vessel; and the exterior, that is, the tissue or morphology outside the vessel. In most intravascular ultrasound films (imaging runs), blood is characterized by a rapidly changing speckle pattern. The exterior of the vessel also changes with a high temporal frequency. Currently, the temporal behavior of pixels and their texture attributes is not automatically monitored. Vessel movement, in the context of body lumens such as blood vessels, is defined as a change in lumen (e.g., vessel) caliber. This change can occur under natural circumstances or under induced conditions. Vessel movement can have a dynamic component, that is, a dynamic change of lumen dimensions, for example, vessel caliber (contraction and dilation) during the cardiovascular cycle, and a static baseline component, that is, a change in the baseline caliber of the lumen, e.g., of the vessel. Vessel movement can be expressed as quantitative physiological parameters indicating the ability of the lumen, e.g., the vessel, to change its caliber under certain conditions. Such parameters are of present, and possibly future, medical and diagnostic importance in providing information about the state of the lumen, e.g., the vessel, and about the effect of therapy performed. Intravascular ultrasound can be used to monitor vessel movement because it provides an image of the baseline lumen caliber and of its dynamic changes. Additionally, intravascular ultrasound can be used to monitor whether the vessel movement is global (uniform), that is, where the entire cross section of the lumen contracts or dilates with the same magnitude and direction. Intravascular ultrasound can also be used to determine whether the vessel movement is non-uniform, leading to local changes in lumen caliber, that is, where different parts of the lumen cross section behave differently. Currently, all monitoring of vessel movement by intravascular ultrasound is done manually, which is tedious and slow and prevents monitoring of vessel movement in real time. The interpretation of intravascular ultrasound images is achieved through analysis of the composition of static images and observation of their temporal behavior. Most intravascular ultrasound images can be divided into three basic parts. The innermost part is the fluid passage of the lumen, that is, the cavity through which matter, e.g., blood, flows. Around the fluid passage is the vessel, which may be a blood vessel or any other body vessel, and which is composed of multiple layers of tissue (and plaque, if diseased). Outside the vessel lies other tissue that may belong to the surrounding morphology, for example, the heart in an image of a coronary vessel. When intravascular ultrasound images are viewed dynamically, i.e., in film format, the pixels corresponding to the material flowing through the vessel and to the morphology outside the vessel show a temporal behavior different from that of the vessel itself. For example, in most intravascular ultrasound films, blood flowing through the vessel is characterized by a rapidly alternating speckle pattern.
The temporal behavior of the pixels in dynamic intravascular ultrasound images is not automatically monitored in current intravascular ultrasound displays. Where filtering is built into the system, high-frequency temporal changes are suppressed by averaging several images. However, this sometimes cannot suppress the appearance of features with high amplitudes, i.e., bright gray values, and it also has a blurring effect. The size of the lumen fluid passage is a very important diagnostic parameter. When required for diagnosis, it is determined manually by a physician, for example by tracing the outline of the edges of the fluid passage superimposed on a static image, e.g., frozen on video or on the machine's visual display. This manual extraction method is slow, imprecise, and subject to bias. Commercial processing software for the automatic extraction of the fluid passage does exist. However, such software is based on the gray-value composition of static images and does not take into account the different temporal behavior exhibited by the material, e.g., the blood, that flows through the passage as opposed to the layers of the vessel. During the treatment of vessels, it is common practice to repeat intravascular ultrasound pullback examinations on the same vessel segments. For example, a typical situation is to first examine the segment in question, assess the disease (if any), remove the intravascular ultrasound catheter, consider alternative therapies, carry out the therapy, for example, PTCA ("balloon") or stenting, and immediately afterwards re-examine the treated segment using intravascular ultrasound to estimate the results of the therapy. To properly evaluate the results and fully appreciate the effect of the therapy performed, it is desirable to compare images of the treated segment before and after therapy that reflect transverse slices of the vessel located at the same place along the Z axis of the vessel (that is, corresponding slices). In order to achieve this comparison, it is necessary to determine which places in the pre-treatment and post-treatment intravascular ultrasound imaging films correspond to one another. This procedure, called matching (registration), allows an accurate comparison of intravascular ultrasound images before and after treatment. Currently, matching is usually carried out by viewing the films of the pre- and post-treatment intravascular ultrasound pullbacks one after the other, or side by side, using identifiable anatomical landmarks to locate the sequences that visually correspond to one another. This method is extremely imprecise and difficult to carry out, given that the images are unstable and very often rotate and/or move around on the display due to the absence of stabilization, and because many of the anatomical landmarks found in the pre-treatment intravascular ultrasound pullback film may be disturbed or changed as a result of the therapy carried out in the vessel. In addition, the orientation and appearance of the vessel may change as a result of the different orientations and relative positions of the intravascular ultrasound catheter with respect to the vessel due to its removal and re-insertion after the therapy ends. The matching that is performed today is manual and depends mainly on manual visual identification, which can be imprecise and very slow.
SUMMARY OF THE INVENTION

The present invention solves the problems associated with the intravascular ultrasound imaging systems currently on the market and with the prior art by providing physicians with accurate images and sequences of images of the studied morphology, thereby allowing a more accurate diagnosis and evaluation. The present invention processes intravascular ultrasound image and signal information to remove distortions and inaccuracies caused by different types of movement of both the catheter and the body lumen. This results in an improvement in the quality and usefulness of the intravascular ultrasound images. One of the advantages provided by the present invention is that individual intravascular ultrasound images are stabilized with respect to the previous images, thereby removing the negative effects of movement on any subsequent processing of multiple images. If the movement in each image is of the transverse type, then the movement can be completely compensated in each acquired image. The present invention also allows volume reconstruction algorithms to reproduce the morphology accurately, since the movement of the body lumen is stabilized. The present invention is applicable and usable for any type of system in which there is a need to stabilize images (intravascular ultrasound or other), i.e., wherever a probe (e.g., ultrasonic or other) moving through a lumen undergoes relative motion (of the probe and/or of the lumen). The present invention provides for the detection of an ultrasonic signal emitted by an ultrasonic apparatus in a body lumen, the conversion of the received analog signal into polar coordinates (A(r, θ)), stabilization in the polar domain, conversion of the stabilized polar coordinates into Cartesian coordinates (A(X, Y)), stabilization in the Cartesian domain, and the transfer of the stabilized image in Cartesian coordinates to a visual display. The stabilized images, whether in Cartesian or polar coordinates, can also be processed further before display, or may not be displayed at all. The conversion to Cartesian coordinates and/or the stabilization in the Cartesian domain can be done at any time before or after stabilization in the polar domain. Additionally, polar or Cartesian stabilization can be omitted, depending on the change detected in the image and/or other factors. In addition, further forms of stabilization may be included or omitted depending on the detected change and/or other factors. For example, rigid-motion stabilization can be introduced to compensate for rotational (angular) movement or global vessel movement (expansion or contraction in the r direction) in the polar domain, and/or Cartesian displacement (X and/or Y direction) in the Cartesian domain. Rigid transverse movement between the representations of successive images is called a "displacement", that is, a uniform movement of all the morphological features in the plane of the image. To stabilize the intravascular ultrasound images, the first step carried out is "displacement detection and evaluation". This is where the displacement (if any) between each pair of successive images is detected and evaluated. The system can use a processor to perform an operation on a pair of successive intravascular ultrasound images to determine whether a displacement has occurred between these images. The processor can use a single algorithm, or it can choose among several algorithms, to make this determination.
The system uses the algorithm(s) to simulate a displacement in an image and then compares this displaced image with its predecessor image. Comparisons between images are performed by matching operations, also known in the prior art as pairing or registration operations. The system carries out a single matching operation for each candidate displacement. The results of this series of matching operations are evaluated to determine the location (direction and magnitude) of the displaced image that bears the closest resemblance to the undisplaced predecessor image. An image can, of course, be compared in the same way with its successor image. After the true displacement is determined, the present image becomes the predecessor image, the next image becomes the present image, and the operation is repeated. Using displacement detection and evaluation, the system determines the type of transverse displacement, for example, rotation, expansion, contraction, (Cartesian) displacement, etc., together with the direction and magnitude of the displacement. The next step is the "displacement implementation". This is where the system performs an operation, or a series of operations, on the successive intravascular ultrasound images to stabilize each image with respect to its adjacent predecessor image. This stabilization uses one or more "reverse displacements", whose purpose is to cancel the detected displacement. The system may include one algorithm or may choose among several algorithms to implement each "reverse displacement". The logic that decides which reverse displacement is actually implemented in an image, before it is passed on for further display or processing, is called the "displacement logic". Once the intravascular ultrasound images have been stabilized for the desired types of detected motion, the system can transfer the Cartesian (or polar) image information for further processing and finally to a visual display, where the results of the stabilization can be reviewed, for example, by a physician. Alternatively, the stabilization may be invisible to the user, in the sense that the stabilization may be used before some other processing steps, after which the resulting images are shown on the visual display in their original, unstabilized position and orientation. It is possible that the transverse movement between images is not rigid but of a local nature, that is, different portions of the image exhibit movement in different directions and with different magnitudes. In that case, the stabilization methods described above and other types of methods can be implemented on a local basis to compensate for this type of movement.
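A minimal sketch of the "reverse displacement" step described above, in Python with NumPy, assuming a rigid Cartesian displacement and a caller-supplied evaluation routine. The wrap-around behaviour of np.roll and the function names are assumptions for illustration, not part of the patent.

    import numpy as np

    def apply_inverse_displacement(frame, dx, dy):
        # Cancel an estimated rigid displacement by shifting the frame back by
        # the same magnitude in the opposite direction.  np.roll wraps around
        # the edges; a real system would pad or crop instead.
        return np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)

    def stabilize_sequence(frames, evaluate_displacement):
        # evaluate_displacement(prev, cur) -> (dx, dy) is any of the matching
        # operations discussed in the text (cross-correlation, SAD, ...).
        # Each frame is stabilized against its already-stabilized predecessor.
        stabilized = [frames[0]]
        for cur in frames[1:]:
            dx, dy = evaluate_displacement(stabilized[-1], cur)
            stabilized.append(apply_inverse_displacement(cur, dx, dy))
        return stabilized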
The present invention provides for the detection of cardiac periodicity using information derived solely from the intravascular ultrasound images, without the need for an external signal such as the electrocardiogram. This process involves matching operations that are also partially used in the stabilization process. An important function of periodicity detection (i.e., cardiac gating), when the catheter is stationary or when a controlled intravascular ultrasound pullback is performed, is that it allows the selection of images belonging to the same phase in successive cardiac cycles. Image selection based on cardiac gating allows stabilization of all types of periodic movement (including transverse movement, Z-axis movement and angulation), in the sense that images of the same phase are selected in successive heartbeats. These intravascular ultrasound images can be viewed, for example, and any gaps created between them can be compensated by filling in and showing interpolated images. The intravascular ultrasound images selected by this operation can also be sent on for further processing. The matching operations used for periodicity detection can also be used to monitor image quality and to indicate artifacts associated with malfunction of the imaging and image-processing apparatus. The operations used to evaluate the displacement can automatically indicate vessel movement. This can serve the stabilization process, because vessel movement causes successive images to differ due to the change in vessel caliber. If the images are stabilized for vessel movement, then this change is compensated. Alternatively, the information about the change in caliber can be displayed, since it can have physiological importance. The monitoring of vessel movement is carried out by applying matching operations to successive images using their polar representations, that is, A(r, θ). These operations can be applied between complete images or between corresponding individual polar vectors (of successive images), depending on the type of information desired. Since global vessel movement is expressed as a uniform change in lumen caliber, it can be estimated by a matching operation that takes the complete polar image into account. In general, any operation suitable for global stabilization in the polar representation can be used to assess global vessel movement. Under certain circumstances, during intravascular ultrasound imaging, there may be non-uniform vessel movement, i.e., movement only in certain sections of the intravascular ultrasound image corresponding to specific locations in the body lumen. This can occur, for example, where an artery has plaque formed somewhere, allowing the artery to expand or contract only in the areas that are free of plaque. When this type of movement is detected, the system can divide the ultrasound signals representing cross sections of the body lumen into multiple segments, each of which is processed individually with respect to the corresponding segment in the adjacent image using certain algorithm(s). The resulting intravascular ultrasound images may then be displayed. This form of stabilization can be used individually or in conjunction with the stabilization techniques discussed previously. Alternatively, the information about the local change in vessel caliber can be displayed, since it can have physiological importance.
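As an illustrative sketch of image-based periodicity detection (not the patent's specific algorithm), a similarity signal between successive frames can be computed and its dominant period estimated from its autocorrelation. The frame rate, the minimum-period bound and the use of a zero-shift correlation coefficient are assumptions.

    import numpy as np

    def frame_similarity(a, b):
        # Normalized correlation coefficient between two frames (no shift applied).
        a = a.astype(float).ravel(); b = b.astype(float).ravel()
        a -= a.mean(); b -= b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def estimate_cardiac_period(frames, fps=30):
        # The similarity between successive frames oscillates with the heartbeat;
        # the lag of the strongest autocorrelation peak (beyond an assumed 0.3 s
        # minimum period) is taken as the number of frames per cardiac cycle.
        sim = np.array([frame_similarity(frames[i], frames[i + 1])
                        for i in range(len(frames) - 1)])
        sim = sim - sim.mean()
        ac = np.correlate(sim, sim, mode='full')[len(sim) - 1:]   # lags >= 0
        min_lag = int(0.3 * fps)
        return min_lag + int(np.argmax(ac[min_lag:]))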
The temporal behavior of the pixels and their texture attributes can be used for: improvement of the visualization; and automatic segmentation (lumen extraction). The performance of the display-enhancement and segmentation processes can be improved if they are carried out in a stabilized image environment. In accordance with the present invention, the temporal behavior of intravascular ultrasound images can be monitored automatically. The information extracted by this type of monitoring can be used to improve the accuracy of the interpretation of the intravascular ultrasound image. Human perception of the vessel, in both static and dynamic images, for example, images shown in cine format, can be improved by filtering and suppressing rapidly changing features, such as the matter, e.g., blood, flowing through the vessel and the morphology exterior to the vessel, on the basis of their temporal behavior. Automatic segmentation, that is, identification of the vessel and of the matter, e.g., blood, flowing through the vessel, can be performed using an algorithm that automatically identifies the material, e.g., blood, based on the temporal behavior of the texture attributes formed by the pixels that constitute it. The temporal behavior extracted from the images can be used for several purposes. For example, temporal filtering can be performed for image enhancement, and detection of changes in pixel texture can be used for automatic identification of the lumen and its circumference. In all intravascular ultrasound images, the catheter itself (and the imaging device) is best removed from the image before stabilization or monitoring. Failure to remove the catheter could degrade the stabilization and monitoring techniques. The removal of the catheter can be done automatically, since its dimensions are known. The present invention also provides automatic identification (i.e., matching or registration) of corresponding frames of two different intravascular ultrasound pullback films of the same segment of a vessel, e.g., pre-treatment and post-treatment. To compare a first intravascular ultrasound pullback film, i.e., a first intravascular ultrasound imaging sequence, with a second intravascular ultrasound pullback film, i.e., a second intravascular ultrasound imaging sequence, of the same segment of a body lumen, for example, on video capture, in digitized form or on film, the imaging sequences must be synchronized. Matching, which achieves this synchronization, involves matching operations between groups of consecutive images belonging to the two intravascular ultrasound imaging sequences. A group of consecutive images, called the reference group, is selected from one of the imaging sequences. This group should be chosen from a portion of the vessel visualized in both imaging sequences, and should be a portion on which no therapy is performed, since the morphology of the vessel may change due to therapy. Another condition for this matching process is that the two image sequences be acquired at a known, constant and preferably equal pullback rate. The matching operations are performed between the images of the reference group and the images of a second group having the same number of successive images, extracted from the second imaging sequence. This second group of images is then shifted by a single frame with respect to the reference group, and the matching operations are repeated.
This can be repeated a previously determined number of times, and the matching results of each frame shift are compared to determine the maximum closeness. The maximum closeness determines the frame offset between the images of the two imaging sequences. This offset can be applied retroactively to the first or second film so that the corresponding images can be automatically identified and/or viewed simultaneously. In this way, the corresponding images can be viewed, for example, to determine the effectiveness of any therapy performed or a change in morphology over time. Additionally, the different types of stabilization discussed above can be implemented in or between the images of the two sequences, either before, during or after this matching operation. In this way, the two films can be displayed not only synchronized but also in the same orientation and position with respect to each other.
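The frame-offset search underlying the matching (registration) of two pullback films can be sketched as follows. The group size, similarity measure and search range are assumptions for illustration and are not specified by the text.

    import numpy as np

    def ncc(a, b):
        # Normalized correlation coefficient between two equally sized frames.
        a = a.astype(float).ravel(); a -= a.mean()
        b = b.astype(float).ravel(); b -= b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def register_pullbacks(other_film, ref_group, max_offset):
        # other_film: list of frames from one pullback film.
        # ref_group:  consecutive frames (the reference group) taken from the
        #             other film, chosen from a portion not altered by therapy.
        # Both films are assumed to have been acquired at the same constant
        # pullback speed.  Returns the frame offset giving the highest summed
        # similarity between the reference group and a group in other_film.
        n = len(ref_group)
        best_offset, best_score = 0, -np.inf
        for off in range(min(max_offset, len(other_film) - n) + 1):
            score = sum(ncc(other_film[off + i], ref_group[i]) for i in range(n))
            if score > best_score:
                best_offset, best_score = off, score
        return best_offset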
Brief Description of the Drawings

Figures 1(a) and 1(b) show a two-dimensional array or matrix of an image arranged in digitized vectors, in polar and Cartesian coordinates respectively. Figure 2 illustrates the results of an evaluation of the displacement between two successive images in Cartesian coordinates. Figure 3 shows images illustrating the occurrence of drift in polar and Cartesian coordinates. Figure 4 illustrates the effect of performing stabilization operations (rotations and Cartesian displacements) on an image.
Figure 5 illustrates the global contraction and dilation of a body lumen expressed in the polar representation of the image and in the Cartesian representation of the image. Figure 6 shows an image divided into four sections for processing according to the present invention. Figure 7 shows a vessel, in both Cartesian and polar coordinates, in which local vessel movement has been detected. Figure 8 illustrates, in graphic form, the results of monitoring local vessel movement in a real coronary vessel. Figure 9 shows an electrocardiogram and a cross-correlation coefficient plotted in a synchronized manner. Figure 10 shows a table of a group of cross-correlation coefficient values (middle row) belonging to successive images (numbered 1 to 10, shown in the top row) and the results of internal cross-correlations (bottom row). Figure 11 shows a graph of a cross-correlation coefficient indicating an artifact in intravascular ultrasound images. Figure 12 shows intravascular ultrasound images divided into three basic parts: the lumen through which the fluid flows; the vessel itself; and the surrounding tissue. Figure 13 illustrates the result of temporal filtering. Figure 14 shows an image of the results of the algorithm for automatic extraction of the lumen. Figure 15 illustrates the time sequence of a first film (left column), the reference segment of the second film (middle column), and the images of the first film corresponding to (or matched with) the images of the reference segment (right column).
Detailed Description

In intravascular ultrasound (IVUS) imaging systems, ultrasonic signals are emitted and received by the ultrasonic apparatus, for example, a transducer or transducer array, processed, and eventually arranged as vectors that store the digitized information. Each vector represents the ultrasonic response of a different angular sector of the body lumen. The number of information elements in each vector (axial sampling resolution) and the number of vectors used to scan the entire cross section (lateral sampling resolution) of the body lumen depend on the specific intravascular ultrasound system used. The digitized vectors are initially packed into a two-dimensional array or matrix, as illustrated in Figure 1(a). Generally, this matrix has what are known as polar coordinates, that is, coordinates A(r, θ). The X axis of the matrix shown in Figure 1(a) corresponds to the coordinate r, while the Y axis of the matrix corresponds to the coordinate θ. Each value of the matrix is generally a gray value, for example, ranging from 0 to 255 if it is 8 bit, which represents the strength of the ultrasonic signal at the corresponding location in the body lumen. This polar matrix can then be converted into the Cartesian matrix shown in Figure 1(b), having an X axis and a Y axis corresponding to the Cartesian representation of the cross section of the vessel. This image can be further processed and transferred to a visual display. The initial and display arrangements can use either polar or Cartesian coordinates. The values of the matrix may be other than gray values; for example, they may be color values or other values, or they may have fewer or more than 8 bits. During an intravascular ultrasound imaging pullback procedure, the body lumen, hereinafter referred to as a vessel, and/or the imaging catheter may undergo various types of relative motion. These types of movement include: (1) rotation in the plane of the image, that is, a displacement in the θ coordinate of the polar image; (2) Cartesian displacement, that is, a change in the X and/or Y coordinates of the Cartesian image; (3) global vessel movement, characterized by radial contraction and expansion of the entire vessel, that is, a uniform change in the r coordinate of the polar image; (4) local vessel movement, characterized by radial contraction and expansion of different parts of the vessel with different magnitudes and directions, that is, local changes in the r coordinate of the polar image; (5) local movement, characterized by tissue movement that varies depending on the exact place in the image; and (6) through-plane movement, that is, movements that are perpendicular or almost perpendicular (angulation) to the plane of the image. Stabilization of successive new images is applicable to the first five types of movement described above because the movement is confined to the transverse plane. These types of movement can be compensated, and stabilization achieved, by transforming each current image so that its resemblance to its predecessor image is maximized. The first three types of movement can be stabilized using matching operations that compare large or complete parts of the images with each other, because the movement is global or rigid in nature. The fourth and fifth types of movement are stabilized by applying matching operations on a localized basis, because different parts of the image exhibit different movement.
The sixth type of movement can be stabilized only in part by applying matching operations on a localized basis, because the movement is not confined to the transverse plane. This type of movement can be stabilized using the detection of cardiovascular periodicity. The following sections describe methods for global stabilization, followed by a description of methods for local stabilization. Stabilization using the detection of cardiovascular periodicity is described in the sections dealing with periodicity detection. To achieve global stabilization, the displacement evaluation is performed using some type of matching operation. The matching operation measures the similarity between two images. The evaluation of the displacement is achieved by transforming the first (current) image and measuring its closeness, that is, its similarity, to the second (predecessor) image. The transformation can be achieved, for example, by shifting the entire first image along an axis or combination of axes (X and/or Y in Cartesian coordinates, or r and/or θ in polar coordinates) by a single pixel (or more). Once the transformation, i.e., the shift, is completed, the transformed first image is compared with the second (predecessor) image using a previously defined function. This transformation is repeated, each time shifting the first image by an additional pixel (or more) along the same and/or another axis and comparing the transformed first image with the second (predecessor) image using the previously defined function. After all the shifts have been evaluated, the location of the global extremum of the comparisons made with the predefined function indicates the direction and magnitude of the movement between the first image and its second (predecessor) image. For example, Figure 2 illustrates the results of an evaluation of displacements between two successive images in Cartesian coordinates. Image A is a predecessor image showing a pattern, for example, a cross section of a vessel, whose center is located in the lower right quadrant of the array. Image B is a current image that shows the same pattern but moved in an upward-left direction, so that it is located in the upper left quadrant of the matrix. The magnitude and direction of movement of the center of the vessel are indicated by the arrow. The lower part of the figure shows the matrix C(displacementX, displacementY), which is the matrix that results after performing displacement evaluations using some type of matching operation. There are several different algorithms or mathematical functions that can be used to perform matching operations. One of these is cross-correlation, possibly implemented using the Fourier transform. Here, the current and predecessor images, each consisting of, for example, 256 x 256 pixels, are each Fourier transformed using the FFT algorithm. The conjugate of the FFT of the current image is multiplied by the FFT of the predecessor image. The result is inverse Fourier transformed using the IFFT algorithm.
The formula for cross-correlation using the Fourier transform can be written as follows:

C = real(ifft2((fft2(A)) * conj(fft2(B))))

where: A = matrix of the predecessor image (for example, 256 x 256); B = matrix of the current image (for example, 256 x 256); fft2 = two-dimensional FFT; ifft2 = two-dimensional inverse FFT; conj = conjugate; real = the real part of the complex expression; * = element-by-element multiplication; and C = cross-correlation matrix.

The closeness evaluation using cross-correlation implemented by the Fourier transform is actually an approximation. This is because the mathematical formulation of the Fourier transform relates to periodic or infinite functions or matrices, while in practice the matrices (or images) are of finite size and not necessarily periodic. The method assumes periodicity along both axes when cross-correlations are implemented using the FFT. As a result, this formula is a good approximation and reflects the real situation on the θ axis of the polar representation of the image; however, it does not reflect the actual situation on the r axis of the polar representation or on the X or Y axis of the Cartesian representation of the image. There are many advantages to cross-correlation using the FFT. To begin with, all the values of the cross-correlation matrix C(displacementX, displacementY) are calculated by this one basic operation. Furthermore, there is dedicated hardware for the efficient implementation of the FFT operation, that is, Fourier transform chips or DSP boards. Another algorithm that can be used to perform matching operations is direct cross-correlation, whether normalized or not. This is achieved by multiplying each pixel in the shifted current image by its corresponding pixel in the predecessor image, summing all the results and, in the case of normalized cross-correlation, normalizing. Each shift results in a sum, and the actual displacement is indicated by the largest sum among all the evaluated shifts. The formula for cross-correlation can be written as follows:

C(displacementX, displacementY) = Σ over x,y of [ B(x - displacementX, y - displacementY) * A(x, y) ]

The formula for normalized cross-correlation is:

C(displacementX, displacementY) = Σ over x,y of [ B(x - displacementX, y - displacementY) * A(x, y) ] / [ sqrt( Σ over x,y of B(x - displacementX, y - displacementY) * B(x - displacementX, y - displacementY) ) * sqrt( Σ over x,y of A(x, y) * A(x, y) ) ]

where: A = matrix of the predecessor image; B = matrix of the current image; * = multiplication of each pixel by the corresponding pixel; Σ = sum over all the pixels in the matrix; and C = matrix that stores the results of all the shifts evaluated.

Using this direct cross-correlation method, C(displacementX, displacementY) can be evaluated for all possible values of displacementX and displacementY. For example, if the original matrices A and B have 256 x 256 pixels each, then values of displacementX and displacementY each ranging from -127 to +128 would have to be evaluated, making a total of 256 x 256 = 65,536 displacement evaluations so that C(displacementX, displacementY) is calculated for all possible values of displacementX and displacementY. After completing these evaluations, the global maximum of the matrix is determined. Direct cross-correlation can be implemented more efficiently by reducing the number of arithmetic operations required. To detect the true displacement between images, it is not necessary to evaluate all possible values of displacementX and displacementY.
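The FFT-based cross-correlation formula above maps directly onto NumPy. The peak-folding convention below, and the sign of the recovered displacement (which depends on which image is conjugated), are assumptions that should be verified against a known test shift.

    import numpy as np

    def fft_cross_correlation(A, B):
        # C = real(ifft2(fft2(A) * conj(fft2(B)))), as in the formula above.
        # The correlation is circular (periodic) in both axes, which matches the
        # theta axis of the polar representation but only approximates r, X and Y.
        return np.real(np.fft.ifft2(np.fft.fft2(A) * np.conj(np.fft.fft2(B))))

    def peak_displacement(C):
        # Location of the global maximum, folded into signed displacements.
        iy, ix = np.unravel_index(np.argmax(C), C.shape)
        ny, nx = C.shape
        dy = iy - ny if iy > ny // 2 else iy
        dx = ix - nx if ix > nx // 2 else ix
        return dx, dy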
It is sufficient to find the location of the largest C(displacementX, displacementY) over all possible values of displacementX and displacementY. A third algorithm that can be used to perform matching operations is the sum of absolute differences (SAD). This is achieved by subtracting each pixel of one image from its corresponding pixel in the other image, taking the absolute values, and summing all the results. Each shift results in a sum, and the true displacement is indicated by the lowest sum. The formula for the sum of absolute differences (SAD) can be written as follows:

SAD = Σ | A - B |

This formula can also be written in the following way:

C(displacementX, displacementY) = Σ over x,y of abs( B(x - displacementX, y - displacementY) - A(x, y) )

where: A = matrix of the predecessor image; B = matrix of the current image; abs = absolute value; - = element-by-element subtraction; and Σ = sum of all the differences.

Although the accuracy of each of these algorithms/formulas may vary slightly depending on the specific type of movement encountered and on the system settings, it should be understood that no single formula can, a priori, be classified as providing the best and most accurate results. Additionally, there are numerous variations of the formulas described above, and other algorithms/formulas that can be used to perform the displacement evaluation and that can be substituted for the algorithms/formulas described above. These algorithms/formulas may also include operations known in the prior art for use in matching operations. Referring again to Figure 2, if the matching operation carried out is cross-correlation, then C(displacementX, displacementY) is called the cross-correlation matrix, and its global maximum (indicated by the black point in the upper left quadrant) can be located at a distance and direction from the center of the matrix (arrow in matrix C) that are the same as those of the center of the vessel in image B relative to the center of the vessel in image A (arrow in image B). If the matching operation performed is a sum of absolute differences, then the black point indicates the global minimum, which is located at a distance and direction from the center of the matrix (arrow in matrix C) that are the same as those of the center of the vessel in image B relative to the center of the vessel in image A (arrow in image B). Rotational movement is expressed as a displacement of the current polar image in the θ coordinate relative to its predecessor. The rotational displacement in a current image is detected by maximizing the closeness between the current polar image and its predecessor. The maximum closeness is obtained when the current image is reversibly shifted by the exact magnitude of the actual displacement. For example, in a 256 x 256 pixel image, the difference (in pixels) between 128 and the θ coordinate of the maximum in the cross-correlation image (or of the minimum in the sum-of-absolute-differences image) indicates the direction (positive or negative) and the magnitude of the rotation. Global vessel movement is characterized by the expansion and contraction of the entire cross section of the vessel. In the polar image this type of movement is expressed as inward and outward movement of the vessel along the r axis. Vessel movement can be compensated by performing the opposite of the vessel movement on the current polar image relative to its previous polar image, using one of the formulas mentioned above or some other formula.
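A direct search over candidate displacements using the sum of absolute differences can be sketched as follows. The ±8-pixel search window, the normalization by the overlap area and the sign convention noted in the final comment are illustrative assumptions.

    import numpy as np

    def sad_search(A, B, max_shift=8):
        # Estimate the displacement of B relative to A by minimizing the sum of
        # absolute differences over the overlapping region of each candidate
        # shift.  Only shifts within +/- max_shift pixels are evaluated, since
        # inter-frame motion is small compared with the frame size.
        h, w = A.shape
        best, best_sad = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                a = A[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
                b = B[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
                sad = np.abs(a.astype(float) - b.astype(float)).mean()
                if sad < best_sad:
                    best_sad, best = sad, (dx, dy)
        return best   # (dx, dy) such that B[y, x] best matches A[y + dy, x + dx]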
Unlike angular stabilization, the stabilization of vessel movement does not change the orientation of the image but actually transforms the image by stretching or compressing it. Cartesian displacement is expressed as a displacement along the X axis and/or the Y axis of the Cartesian image relative to its predecessor. This type of movement is eliminated by moving the Cartesian image in the direction opposite to the actual displacement. Thus, Cartesian stabilization, in the Cartesian representation, can be achieved by essentially the same arithmetic operations used for rotational stabilization and vessel-movement stabilization in the polar representation. The number of displacement evaluations needed to locate the global extremum (maximum or minimum, depending on the matching function) of C(displacementX, displacementY) can be reduced using several calculation techniques. One technique, for example, takes advantage of the fact that the movement between successive intravascular ultrasound images is, in general, relatively small in relation to the total dimensions of the polar and/or Cartesian matrices. This means that C(displacementX, displacementY) need only be evaluated in a relatively small portion around the center of the matrix, that is, around displacementX = 0, displacementY = 0. The extremum of that portion is taken to be the global extremum of the matrix C(displacementX, displacementY), including larger values of displacementX and displacementY. The size of the minimum portion that ensures that the extremum detected within it is truly the global extremum varies depending on the characteristics of the system. The number of evaluation operations needed can be further reduced based on the expected uniformity and monotonicity of the C matrix (especially in the vicinity of the global extremum). Therefore, if the value of the matrix C(displacementX, displacementY) at a certain location is a local extremum (for example, within a neighborhood of 5 x 5 pixels), then it is probably the global extremum of the entire matrix C(displacementX, displacementY). This reduction in the number of necessary evaluations can be implemented by first examining the center of the matrix (displacementX = 0, displacementY = 0) and checking a small neighborhood, for example 5 x 5 pixels, around the center. If the local extremum is found inside this neighborhood, then it is probably the global extremum of the total matrix C(displacementX, displacementY) and the search can be terminated. If, however, the local extremum is found at the edges of this neighborhood, for example, displacementX = -2, displacementX = 2, displacementY = -2 or displacementY = 2, then the search is repeated around that pixel until a value of C(displacementX, displacementY) is found that is greater (or smaller) than all of its near neighbors. Because in a large number of images there is no movement within the image, the number of evaluations needed to locate the global extremum in those cases will be approximately 5 x 5 = 25 instead of the original 65,536 evaluations. The number of necessary evaluation operations can also be reduced by subsampling the images. For example, if 256 x 256 images are sampled at every second pixel, they are reduced to 128 x 128 arrays. In this case, the direct cross-correlation or sum of absolute differences between these matrices involves 128 x 128 operations instead of 256 x 256 operations each time the images are shifted relative to one another.
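The restricted-neighborhood search described above can be sketched as a simple hill-climbing loop over a cost function. The 5 x 5 neighborhood (radius 2) matches the example in the text, while the iteration cap and the minimization convention are assumptions; for instance, cost could wrap the SAD evaluation from the previous sketch for a fixed pair of frames.

    def local_extremum_search(cost, start=(0, 0), radius=2, max_iter=50):
        # cost(dx, dy) evaluates the matching operation for one candidate shift
        # (smaller = better here, as for the sum of absolute differences).
        # Starting from zero displacement, a (2*radius+1)^2 neighborhood is
        # examined; if the best candidate is the centre itself it is accepted as
        # the (assumed global) extremum, otherwise the search re-centres on it.
        cx, cy = start
        for _ in range(max_iter):
            candidates = [(cost(cx + dx, cy + dy), cx + dx, cy + dy)
                          for dy in range(-radius, radius + 1)
                          for dx in range(-radius, radius + 1)]
            best_val, bx, by = min(candidates)
            if (bx, by) == (cx, cy):
                break                      # local extremum found
            cx, cy = bx, by                # re-centre and continue
        return cx, cy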
Subsampling, as a method of reducing the displacement-evaluation operations, can be combined with the other reduction methods described above. Referring again to Figure 2, as a result of the matching operation, the indicated displacementX will have a positive value and the displacementY a negative value. In order to stabilize image B, that is, to compensate for the displacements in the X and Y directions, the displacement logic inverts the displacements, that is, changes their sign but not their magnitude, and implements these displacements in the matrix corresponding to image B. This artificially reverses the displacement of image B and causes image B to be undisplaced with respect to image A. The actual values used in the matching calculations are not necessarily the original values of the matrix as supplied by the imaging system. For example, improved results can be achieved when the original values are raised to the power 2, 3 or 4, or processed by some other method. The imaging catheter and the sheath that covers it appear as constant artifacts in all intravascular ultrasound images. This feature confounds the matching operations performed between images, since it is not a part of the morphology of the vessel. Thus, it is necessary to eliminate the catheter and the associated objects from each image before performing the matching operations, that is, their pixels are assigned a value of zero. The removal of these objects from the image can be done automatically, since the dimensions of the catheter are known. The evaluation and implementation of the displacement can be modular. Thus, the displacement evaluation and implementation can be limited either to the polar coordinates or to the Cartesian coordinates individually, or the displacement evaluation and implementation can be carried out sequentially for the polar and Cartesian coordinates. Currently, because intravascular ultrasound imaging systems usually organize the image first in polar coordinates and then convert it into Cartesian coordinates, it is more convenient to perform the displacement evaluation and implementation in the same sequence. However, the sequence can be modified or changed without any negative effect or result. The displacement-evaluation process can be carried out along one or two axes. In general, a two-dimensional displacement evaluation is preferred even when the movement is directed along one axis. The displacement implementation can be applied to both axes, to one axis, or to no axis. There need not be any identity between the area of the image used for the evaluation of the displacement and the area in which the implementation of the displacement is made. For example, the evaluation of the displacement can be done using a relatively small area of the image, while the displacement implementation shifts the entire image according to the displacement indicated by that area. A trivial displacement logic is one in which the displacement implemented in each image (thus forming a stabilized image) has a magnitude equal, and a direction opposite, to the evaluated displacement. However, this logic can result in a process defined as drift. Drift is a process in which the implemented displacements accumulate and produce an increasing displacement whose dimensions are significant in relation to the whole image or display.
Drift may be the result of an inaccurate evaluation of the displacement or of a non-transverse inter-image movement in some part of the cardiovascular cycle. When Cartesian stabilization is implemented, drift may cause, for example, a relatively large part of the image to move outside the visual display. When rotational stabilization is implemented, drift can cause the image to rotate progressively in a certain direction. Figure 3 is an image that illustrates the occurrence of drift in the polar and Cartesian coordinates. The image on the left is the original visual display of the image, while the image on the right is the same image after polar and Cartesian stabilization has been performed. Note how the right image is rotated counter-clockwise by a large angle and moved downward relative to the left image. In this case, the implementation of the rotational and Cartesian displacements does not compensate real displacements in the image, but arises from an imprecise evaluation of the displacement. The displacement logic must be able to handle this drift so that wrongly evaluated displacements are implemented as little as possible. One method of avoiding, or at least limiting, drift is to set a limit on the magnitude of the allowable displacements. This minimizes drift, but at the cost of not compensating some real displacements. Additional methods can be used to avoid or minimize drift; these can possibly be combined with the cardiovascular periodicity detection methods discussed below. The images shown in Figure 4 illustrate the effect of performing stabilization operations (rotations and Cartesian displacements) on an image. The left image is an intravascular ultrasound image of a coronary artery as it would appear in a large portion of a regular visual display (with the catheter removed), while the right image shows how the left image would be displayed after the stabilization operations. Looking closely at the left and right images of Figure 4, certain differences can be observed. First, the right image is slightly rotated in a clockwise direction (i.e., by a few degrees) relative to the left image; this is the result of rotational stabilization. Second, the right image is moved generally leftward relative to the left image; this can be detected by noting the distance of the lumen (cavity) from the edges of the representation in each image, and is a result of the Cartesian displacement stabilization operations. The advantages of stabilizing the displayed images cannot be fully appreciated by viewing single images as shown in Figure 4; viewing a film of these images, however, readily illustrates the advantages. In a visual display that does not include stabilization, the catheter site would always be located in the center of the display and the morphological features would move around and rotate in the visual display. In contrast, in a stabilized visual display, the location of the catheter would move around while the morphological features would remain essentially stationary. Stabilization does not necessarily have to be shown on the actual visual display. It may be invisible to the user, in the sense that stabilization will improve the subsequent processing steps, but the actual visual display will show the resulting processed images in their original (unstabilized) position and orientation.
Figure 5 illustrates the global contraction or dilation of a vessel, expressed in the polar representation of the image as a movement of the features along the r coordinate, that is, movement along the polar vectors. Figure 5 also shows the same global contraction or dilation expressed in the Cartesian representation of the image. Figure 5(a) shows the baseline appearance of the cross section of a vessel in both the polar and Cartesian representations. Figure 5(b) shows a contraction relative to the baseline of the vessel, and Figure 5(c) shows a uniform expansion relative to the baseline of the vessel. Since global vessel movement is expressed as a uniform change in the vessel caliber, any operation suitable for global stabilization in the polar representation can be used to assess global vessel movement; for example, it can be assessed by a matching operation using the whole polar image. After a two-dimensional displacement evaluation is carried out, as described previously, the location of the maximum of the matrix C(displacementX, displacementY) along the θ axis is used for rotational stabilization. This leaves the location of the extremum along the r axis, which can be used as an indication of global vessel movement. Thus the monitoring of global vessel movement is a by-product of the two-dimensional displacement evaluation in the polar image. Each pair of successive images produces a value that indicates the vessel movement. Both the magnitude and the sign of the resulting displacement between the images characterize the displacement in the vessel, that is, the vessel movement. Negative displacements indicate dilation, and positive displacements indicate contraction. The magnitude of the value indicates the magnitude of the change in vessel movement. Under certain circumstances the movement, or the vessel movement, may not be uniform/rigid even though it is confined to the plane of the image, that is, transverse. To determine the type of movement, or of vessel movement, the image can be divided into sections and the global stabilization evaluation performed on each of these sections. By examining the displacements indicated for these sections relative to the corresponding sections in the predecessor image, a determination of the type of movement can be made. For example, as shown in Figure 6, the image in Figure 6(a) can be divided into four sections as shown in Figure 6(b). The displacement evaluation can be carried out separately in each of the four sections, and a comparison of the displacement-evaluation results for the four sections can identify the actual type of movement. Thus, the type of stabilization applied can be varied depending on the type of movement detected. Stabilization for local movement is achieved by matching operations on a localized basis. Small portions of the predecessor image A ("template" regions) and small portions of the current image B ("search" regions) participate in the local stabilization process. It is sometimes better to perform local stabilization after the global stabilization has been carried out. During local stabilization, the template regions of the predecessor image (A) are shifted within the search regions and compared, using matching operations, to template-sized regions of the current image (B).
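A sketch of the section-wise evaluation mentioned above (division into four quadrants, each evaluated with a global matching operation); the quadrant layout and the caller-supplied evaluator are assumptions for illustration.

    def quadrant_displacements(prev, cur, evaluate_displacement):
        # evaluate_displacement(a, b) -> (dx, dy) is any global matching operation
        # (FFT cross-correlation, direct cross-correlation, SAD, ...).
        # If the four results agree, the motion is rigid/global; if they differ in
        # direction or magnitude, local vessel movement is indicated.
        h, w = prev.shape
        h2, w2 = h // 2, w // 2
        quadrants = [(slice(0, h2), slice(0, w2)), (slice(0, h2), slice(w2, w)),
                     (slice(h2, h), slice(0, w2)), (slice(h2, h), slice(w2, w))]
        return [evaluate_displacement(prev[sy, sx], cur[sy, sx]) for sy, sx in quadrants]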
Each pixel in the newly formed stabilized image (B') will be assigned a new value based on the results of the search and of the approach evaluation performed. Local stabilization is illustrated by the following example, in which the template region is a 1 x 1 pixel region, that is, a single pixel, the search region is a 3 x 3 pixel region, and the approach operation is a sum of absolute differences. In the following diagram, the pixel valued 3 in A and the pixel valued 9 in B are corresponding pixels. The 3 x 3 pixel neighborhood of the pixel valued 9 is also illustrated.
Pixel in A                    Pixels in B
("template" region)           ("search" region, 3 x 3)

                               1   10   10
        3                      7    9   50
                              11    7   60

In this example, according to the conditions described above, the "template" pixel valued 3 is compared, using the sum of absolute differences, with all the pixels found in the 3 x 3 search region around the pixel valued 9. The pixel valued 1 in the upper left corner of the search region yields the minimum value of the sum of absolute differences (|1 - 3| = 2) of all the possibilities in the search region. As a result, in the newly formed stabilized image (B'), the pixel at the location corresponding to the pixels valued 3 and 9 will be assigned the value 1. In general, the dimensions of the template and of the search region may vary, as may the approach operations used. The actual value assigned to the pixel of the newly formed stabilized image (B') does not necessarily need to be an actual pixel value of the current image B (as illustrated in the example) but may be some function of pixel values. It is important to note that, as a result of local stabilization, as opposed to global/rigid methods, the "composition" of the image, that is, the internal relationship between the pixels and their distribution in the stabilized image, changes in relation to the original image. Local stabilization can be implemented in both the polar and the Cartesian representation of the image. Figure 7 shows a vessel, in both Cartesian and polar coordinates, in which local vessel movement was detected. When local vessel movement is detected, it is an indication that some parts of the vessel cross section are behaving differently than other parts of the cross section.
Figure 7(a) shows the baseline appearance of the vessel before local vessel movement. Figure 7(b) shows an example of local vessel movement. As indicated in both the Cartesian and the polar representation, four different parts of the vessel behave differently: two vessel segments do not change caliber, that is, do not move in relation to their corresponding segments in the predecessor image; one segment contracts, or moves up; and one segment expands, or moves down. As can be seen, the methods for evaluating global vessel movement are not adequate for evaluating local vessel movement, because the vessel does not behave in a uniform manner. If global vessel movement evaluation were applied to the example shown in Figure 7, zero overall vessel movement could be detected, that is, the contraction and the dilation could cancel each other out. Therefore, local vessel movement evaluation methods should be used. This can be achieved by separately evaluating the movement of the vessel in each polar vector, that is, in each θ (or Y) vector. Approach operations are applied using one-dimensional displacements in the corresponding polar vectors. For example, if the cross-correlation approach operation is used, the following operation illustrates how this is accomplished using one-dimensional displacements.
C(displacementX, Y) = Σ B(x - displacementX, y) * A(x, y)

where:
A = predecessor image matrix;
B = current image matrix;
* = multiplication of each pixel by the corresponding pixel;
Σ = sum over the pixels of the polar vector (that is, along the x, or r, axis);
C = two-dimensional matrix of correlation coefficients.
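A minimal per-vector sketch of this one-dimensional displacement evaluation, assuming numpy arrays of shape (r, θ) and, for brevity, a circular shift; the names and the shift range are illustrative only:

```python
import numpy as np

def per_vector_displacement(prev_polar, curr_polar, max_shift=10):
    """For each polar vector (angle theta, i.e. column y), cross-correlate
    B(:, y) against A(:, y) over a range of one-dimensional displacements
    and return the displacement that maximizes the correlation, i.e. the
    radial movement of that angular sector."""
    A = prev_polar.astype(float)
    B = curr_polar.astype(float)
    n_r, n_theta = A.shape
    shifts = np.arange(-max_shift, max_shift + 1)
    displacements = np.empty(n_theta, dtype=int)
    for y in range(n_theta):
        scores = [np.sum(np.roll(B[:, y], s) * A[:, y]) for s in shifts]
        displacements[y] = shifts[int(np.argmax(scores))]
    return displacements
```

Averaging the returned per-vector displacements gives the single per-image indication of overall vessel movement referred to below.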
As can be seen, the displacement is carried out along one axis (the X, or r, axis) for each and every polar vector (θ, or Y, vector). The values assigned to each vector for the displacement evaluation need not be the actual image values; for example, each pixel in the vector can be assigned the average of its lateral neighbors, that is, A(X, Y) would be assigned, for example, the average of A(X, Y-1), A(X, Y) and A(X, Y+1). The same applies to B(x - displacementX, y). This can make the cross-correlation process more robust to noise. A two-dimensional matrix (C(displacementX, Y)) is formed. Each column in the matrix stores the results of the approach/similarity operations performed between the corresponding polar vectors from the current image and the predecessor image. This operation could also be implemented using the FFT. After the matrix is formed, the extremum (the maximum, in the case of the cross-correlation operation) is detected in each column. The location of this extremum indicates the matching between the current polar vector and its predecessor. Thus, the movement of the vessel in each vector, that is, the radial movement in each specific angular sector of the vessel, can be characterized. This information can be used to visualize local vessel movement; some or all of the polar vectors can be added and averaged to determine an average value for vessel movement; or it can be used for other purposes. Therefore, by evaluating local vessel movement, both local and global vessel movement can be evaluated. To be used effectively and/or expressed as physiological parameters, the magnitude of vessel movement must be related in some way to the actual caliber of the vessel. Thus, vessel movement monitoring measurements should generally be used in conjunction with automatic or manual measurements of vessel size. In addition to true vessel movement, Cartesian displacement can also be detected as vessel movement. This is because Cartesian displacement, when expressed in polar coordinates, results in displacements along both the r and θ axes. To distinguish true vessel movement from Cartesian displacement, the displacement evaluation in the Cartesian image must indicate no movement or very little movement. If Cartesian displacement is detected, it should first be stabilized. After that, the Cartesian coordinates can be converted back into polar coordinates for the evaluation of vessel movement. This allows greater success and provides more accurate results when true vessel movement is determined. The graphs in Figure 8 illustrate the results of monitoring vessel movement in a live human coronary vessel. Monitoring of local vessel movement was performed twice in approximately the same vessel segment, and consisted of 190 successive images, as shown (X axis) in Figures 8(a) and 8(b). The difference between the two graphs is that the vessel movement evaluation shown in Figure 8(a) was carried out before the treatment of the artery, that is, before the intervention, while the vessel movement evaluation shown in Figure 8(b) was performed after the treatment of the artery, that is, after the intervention.
In each image, vessel movement was evaluated locally in each polar vector, and then all of the detected individual displacements were added and averaged to produce a single indication of overall vessel movement (Y axis) for each image, that is, an indication of vessel movement activity. The units on the Y axis have no direct physiological significance because the actual caliber of the vessel was not calculated, but the relationship between the values in Figures 8(a) and 8(b) is meaningful because they were extracted from the same vessel. Thus, important information can be derived from these figures. Notice how the movement of the vessel increases after the treatment (maximum vessel movement from about 40 to about 150). Therefore, even though vessel movement was not calculated in absolute terms, a change in physiology (probably linked to the treatment) has been demonstrated. Cardiovascular periodicity can be monitored based solely on the information contained in the intravascular ultrasound images, thereby eliminating the need for an electrocardiogram or other external signal. This means that a relationship can be established between each image and its respective time phase in the cardiovascular cycle without the need for an external signal. Once this link is established, such monitoring can replace the electrocardiogram signal in many applications that require heartbeat selection (gating). This monitoring can be carried out using approach operations between successive images; moreover, the same approach operations can produce information regarding the quality of the intravascular ultrasound images and their behavior. The cardiac cycle manifests itself in the cyclic behavior of certain parameters that can be extracted from intravascular ultrasound images. If the behavior of these parameters is monitored, the periodicity of the cardiac cycle can be determined. Knowing the frame acquisition rate also allows the cardiovascular cycle to be determined as a temporal quantity. The approach (similarity) between successive intravascular ultrasound images is a parameter that clearly behaves with a periodic pattern. This is a result of the periodicity of most of the types of inter-image movement that occur.
An approach function can be formed in which each value is the result of an approach operation between a pair of successive images. For example, a set of ten images will produce nine successive approach values. The approach function can be derived from a cross-correlation type operation, a sum of absolute differences operation, or any other type of operation that produces a measure of approach. Normalized cross-correlation produces very good results when used to monitor periodicity. The following formula shows the cross-correlation coefficient (as a function of the n-th image) used to calculate the approach function:

Correlation function(N) = Σ B(x, y) * A(x, y) / √( Σ A(x, y)^2 * Σ B(x, y)^2 )

where:
Correlation function(N) = one-dimensional function that produces one value for each pair of images;
A = predecessor image matrix (the n-th image);
B = current image matrix (the (n+1)-th image);
* = multiplication of each pixel by the corresponding pixel;
Σ = sum over all the pixels (x, y) in the matrix.

The correlation coefficient is a by-product of the stabilization process, because the central value (displacementX = 0, displacementY = 0) of the normalized cross-correlation matrix (C(displacementX, displacementY)) is always calculated. This remains true for all types of approach functions used for stabilization. The central value of the approach matrix (C(displacementX = 0, displacementY = 0)), whether from cross-correlation or from another type of operation used for stabilization, can always be used to produce an approach function. The approach function can also be calculated from images that are displaced in relation to one another, that is, the value used to form the function is C(displacementX, displacementY) where displacementX and displacementY are not equal to zero. The approach function does not necessarily need to be formed from complete images but can also be calculated from parts of images, either corresponding or displaced in relation to one another. Figure 9 shows an electrocardiogram and the cross-correlation coefficient plotted synchronously. Both curves relate to the same set of images. Figure 9(a) shows a graph of the electrocardiogram signal and Figure 9(b) shows a graph of the cross-correlation coefficient derived from successive intravascular ultrasound images. The horizontal axis displays the image number (a total of 190 successive images). As can be seen, the cross-correlation coefficient function in Figure 9(b) shows a periodic pattern, and its periodicity is the same as that shown by the electrocardiogram signal in Figure 9(a) (both show approximately six heartbeats). Monitoring the periodicity of the approach function can be complicated because the approach function does not have a typical shape, can vary over time, depends on the type of approach operation used, and can vary from vessel segment to vessel segment and from subject to subject. To monitor the periodicity of the approach function continuously and automatically, a variety of methods can be employed. One method, for example, is a threshold-type method. This method monitors the approach function for a crossing of a certain value known as the threshold. Once such a crossing is detected, the method monitors for the next crossing of the threshold. The period is determined as the time difference between the threshold crossings. An example of this method is shown in Figure 10 as a table.
The table shows a group of cross-correlation coefficient values (middle row) that belong to successive images (numbered from 1 to 10 in the top row). If the threshold is set, for example, at the value 0.885, then this threshold is first crossed in the passage from image #2 to image #3. The threshold is crossed a second time in the passage from image #6 to image #7. Thus, the time period of the periodicity is the time taken to acquire 7 - 3 = 4 images. Another method that can be used to extract the cardiac periodicity from the approach curve is internal cross-correlation. This method uses a segment of the approach function, that is, a group of successive values. For example, in the table shown in Figure 10, the segment may be formed from the first four successive images, that is, images #1 through #4. Once a segment is chosen, it is first cross-correlated with itself, producing a cross-correlation value of 1. Next, this segment is cross-correlated with a segment of the same size extracted from the approach function but displaced one image forward. This is repeated with the segment displaced two images forward, and so on. In the example shown in Figure 10, the segment {0.8, 0.83, 0.89, 0.85} would be cross-correlated with the segment displaced by one image, {0.83, 0.89, 0.85, 0.82}; then the segment {0.8, 0.83, 0.89, 0.85} would be cross-correlated with the segment displaced by two images, {0.89, 0.85, 0.82, 0.87}; and so on. The lower row of the table in Figure 10 shows the results of these internal cross-correlations. The first value of 1 is the result of the cross-correlation of the segment with itself. These cross-correlation values are examined to determine the locations of the local maxima. In this example, they are located at image number 1 and image number 5 (their values are shown in bold). The resulting periodicity is the difference between the location of the local maximum and the location from which the search was initiated (that is, image #1). In this example, the periodicity is the time that passes from the acquisition of image #1 to image #5, which is 5 - 1 = 4 images. Once a period has been detected, the search begins again using a segment surrounding the local maximum, for example, image #5. In this example, the new segment could be the group of approach values belonging to images #4 through #7. Because of the nature of the calculations involved, the internal cross-correlation operation at a certain point in time requires the approach values of images acquired at a later time. Thus, unlike the threshold method, this method requires the storage of images (in memory), and the detection of periodicity is done retrospectively. The cardiac periodicity can also be monitored by transforming the approach curve into the temporal frequency domain by Fourier transformation. In the frequency domain, the periodicity should be expressed as a peak corresponding to the periodicity. This peak can be detected using spectral analysis. The approach function can provide additional important information about the intravascular ultrasound images that cannot be extracted from external signals, such as an electrocardiogram, which are not derived from the actual images. The behavior of this function can indicate certain states in the intravascular ultrasound images or in the parts of the image used to form the approach function.
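A minimal sketch of the approach function (the normalized cross-correlation coefficient given above) and of the threshold-type periodicity detection, assuming numpy; the threshold value and names are illustrative only:

```python
import numpy as np

def ncc_coefficient(A, B):
    """Normalized cross-correlation coefficient between two images, i.e. the
    central (zero-displacement) value described in the text."""
    A = A.astype(float)
    B = B.astype(float)
    return np.sum(A * B) / np.sqrt(np.sum(A * A) * np.sum(B * B))

def periods_by_threshold(approach_values, threshold):
    """Return the intervals (in number of images) between successive upward
    crossings of the threshold by the approach function."""
    vals = np.asarray(approach_values, dtype=float)
    crossings = np.where((vals[:-1] < threshold) & (vals[1:] >= threshold))[0] + 1
    return np.diff(crossings)
```

Applied to the values quoted from Figure 10 (0.80, 0.83, 0.89, 0.85, 0.82, 0.87, ...) with a threshold of 0.885, this yields a period of four images, as in the example above.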
Important features of the approach function that are indicative of the status of the intravascular ultrasound images are the presence of periodicity and the "roughness" of the approach function. Normal intravascular ultrasound images should exhibit a relatively uniform and periodic approach function, as shown, for example, in Figure 9(b). However, if the approach function exhibits "roughness" and/or lacks periodicity, this could indicate some problem in the formation of the intravascular ultrasound images, that is, the presence of an imaging artifact caused, for example, by a mechanical or electronic failure. The following figure helps to illustrate this. Figure 11 shows a graph of the cross-correlation coefficient derived from successive intravascular ultrasound images. This graph is analogous in its formation to the cross-correlation graph of Figure 9(b), but in this example it was formed using a different imaging catheter in a different subject. In this example, it is clear that the approach function neither exhibits a clear periodicity nor has a uniform appearance, but rather a rough or spiky appearance. In this case, the behavior of the approach graph was caused by non-uniform rotation of the intravascular ultrasound transducer responsible for emitting/collecting the ultrasonic signals visualized in the image. This type of artifact sometimes appears in intravascular ultrasound catheter-transducer assemblies that contain moving mechanical parts. The approach function, when it is considered to reflect normal imaging conditions, may serve another purpose. This is linked to the location of the maximum in each cycle of the approach function. Locating these maxima can be important for image processing algorithms that process several successive images together. Images found near the maxima tend to have a high degree of closeness and little inter-image movement relative to one another. Additionally, if it is required to select images belonging to the same phase of successive cardiac cycles, it is usually best to select them using the maxima (of the approach function) in each cycle. In one visualization method, for example, these images are projected onto the visual display and the gaps are filled by interpolated images. All types of periodic movement can be stabilized by this visualization method. The displacement logic stage of the stabilization process can also make use of cardiovascular periodicity monitoring. If drift is to be avoided, the cumulative displacement after each (single) cardiac cycle should be small or zero, that is, the sum of all the displacements over the period of one cycle should be zero or close to zero. This means that the drift phenomenon can be limited by using a displacement logic which is coupled to the periodicity monitoring. Referring now to Figure 12, most intravascular ultrasound images can be divided into three basic parts. The central area (around the catheter), labeled Lumen in Figure 12, is the actual lumen or interior passage (cavity) through which fluid, for example blood, flows. Around the lumen there is the vessel itself, labeled Vessel in Figure 12, composed of several layers of tissue and plaque (if diseased). Surrounding the vessel is other tissue, labeled Exterior in Figure 12, that is, muscle or organ tissue, for example the heart in an image of a coronary vessel.
When intravascular ultrasound images are viewed dynamically (that is, in film format), the display of the interior, where the blood flows, and of the exterior surrounding the vessel usually shows a different temporal behavior than the vessel itself. Automatically monitoring the temporal behavior of the pixels in the intravascular ultrasound image sequence would allow the information extracted by this process to be used to assist in the interpretation of the intravascular ultrasound images. This information can be used to enhance intravascular ultrasound displays by filtering and suppressing the appearance of rapidly changing features, such as the fluid, for example blood, and the surrounding tissue, on account of their temporal behavior. This information can also be used for automatic segmentation, to determine the size of the lumen by automatically identifying the fluid, for example blood, and the surrounding tissue based on the temporal behavior of the texture attributes formed by their component pixels.
To carry out the automatic monitoring of temporal behavior, the relationship between the attributes formed by the corresponding pixels belonging to successive images must be evaluated. The extraction of temporal behavior is similar to the methods used for approach operations on a localized basis, as previously described. Large temporal changes are characterized by relatively large shifts in the gray values of the corresponding pixels when moving from one image to the next. These rapid temporal changes can be suppressed in the visual display by expressing them through the formation of a mask that multiplies the original image. This mask reflects the temporal changes in the pixel values. A problem that arises in this evaluation is determining whether the changes in the corresponding gray values are due to flow or changing matter, or to movements of the vessel/catheter. Performing this evaluation on stabilized images overcomes, or at least minimizes, this problem. The following definitions apply: B = current image (stabilized or non-stabilized); A = predecessor image (stabilized or non-stabilized); C = successor image (stabilized or non-stabilized); abs = absolute value. The matrices used can be either Cartesian or polar. The following operation, which results in a matrix D1, is defined as follows: D1 is a matrix in which each pixel with coordinates X, Y is the sum of the absolute differences over its small surrounding neighborhood, for example 9 elements (X-1:X+1, Y-1:Y+1 — a 3 x 3 square), extracted from images A and B, respectively. For example, the following illustration shows the corresponding pixels (in bold) and their surrounding neighborhoods in matrices A and B.
A              B              D1

1   4  51      3   6   8
6   7  15      3   4  70       190
3   5  83      2   1   6

The pixel in matrix D1 at the location corresponding to the pixels with value 4 (in B) and 7 (in A) will be assigned the following value: abs(1-3) + abs(4-6) + abs(51-8) + abs(6-3) + abs(7-4) + abs(15-70) + abs(3-2) + abs(5-1) + abs(83-6) = 190. D2 is defined similarly, but for matrices B and C. D1 and D2 are, in effect, difference matrices that are averaged over the 3 x 3 neighborhood in order to decrease local fluctuations or noise. Large changes in gray values between images A and B, or between B and C, will be expressed as relatively high values in matrices D1 and D2, respectively. A new matrix, Dmax, is then formed, in which each pixel is the maximum of the corresponding pixels in matrices D1 and D2: Dmax = max(D1, D2), where max(D1, D2) means that each pixel in Dmax holds the higher of the two corresponding pixels in D1 and D2. Thus, the single matrix Dmax particularly highlights the largest pixel changes among the matrices A, B and C. A mask matrix (MD) is then formed from Dmax by normalization, that is, each pixel in Dmax is divided by the maximum value of Dmax. Therefore, the pixel values of the MD mask vary from zero to one. The role of the mask is to multiply the current image B in the following way, forming a new matrix or image defined as BOUT: BOUT = (1 - MD^n) * B, where: B = current original image; BOUT = the new image; MD^n = each pixel in the MD matrix raised to the power n (n is generally a number with a value of, for example, 2 - 10); 1 - MD^n = a matrix in which each pixel value is one minus the value of the corresponding pixel in MD^n. By performing the 1 - MD^n subtraction, the small values of MD, which reflect slowly changing features, become high values in 1 - MD^n. Moreover, the likelihood that only the slowly changing features will have high values is increased by the earlier enhancement of the high MD values (forming the mask from the maximum of matrices D1 and D2). Multiplying the mask (1 - MD^n) by the current image B forms a new image BOUT in which the appearance of slowly changing pixels is enhanced while the values of rapidly changing pixels are reduced. The number n determines how strongly the rapidly changing features are removed from the display. Figure 13 illustrates the results of the temporal filtering. The image on the left is an original intravascular ultrasound image (that is, the matrix B) of a coronary vessel, as it would be seen in the current display. The image on the right has undergone the processing steps described above, that is, temporal filtering (the matrix BOUT). Note that in the image on the right, the blood and surrounding tissue are filtered (suppressed) and the lumen and vessel boundaries are easier to identify. Automatic segmentation differentiates the fluid, for example blood, and the exterior from the vessel wall based on differences in the temporal behavior of a texture attribute. As in the case of temporal filtering, this method is derived from the relationship between the corresponding pixels of several successive images. If the pixel values change because of inter-image movement, the performance of the algorithm will be degraded. Performing stabilization prior to automatic segmentation overcomes, or at least minimizes, this problem.
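Before turning to the texture-based segmentation in more detail, the temporal filtering operation described above (D1, D2, Dmax, MD and BOUT) can be sketched as follows. This is a minimal illustration, assuming numpy and scipy are available; the exponent n = 4 is only an example value:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def temporal_filter(A, B, C, n=4):
    """Temporal filtering sketch: A, B, C are the predecessor, current and
    successor images; rapidly changing pixels (blood, exterior tissue) are
    attenuated in the returned image BOUT."""
    A, B, C = (m.astype(float) for m in (A, B, C))
    # 3 x 3 neighborhood sums of absolute differences (D1 and D2).
    D1 = uniform_filter(np.abs(A - B), size=3) * 9.0
    D2 = uniform_filter(np.abs(B - C), size=3) * 9.0
    Dmax = np.maximum(D1, D2)
    MD = Dmax / (Dmax.max() + 1e-12)   # normalized mask, values in [0, 1]
    return (1.0 - MD ** n) * B         # BOUT
```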
As in the case of temporal filtering, the following definitions apply: B = current image (stabilized or non-stabilized); A = predecessor image (stabilized or non-stabilized); C = successor image (stabilized or non-stabilized). The matrices can be in Cartesian or polar form. The texture attribute can be defined as follows. Assume that the four nearest neighbors of a pixel with value "a" are "b", "c", "d" and "e"; the classification of "a" will then depend on its relations with "b", "c", "d" and "e". This can be shown with the following illustration:

    b
c   a   d
    e

The following categories can now be formed. In the vertical direction: if a > b and a > e, then "a" is classified as belonging to category I; if a > b and a < e, then "a" is classified as belonging to category II; if a < b and a < e, then "a" is classified as belonging to category III; if a < b and a > e, then "a" is classified as belonging to category IV; if a = b and a = e, then "a" is classified as belonging to category V. In the horizontal direction: if a > c and a > d, then "a" is classified as belonging to category I; if a > c and a < d, then "a" is classified as belonging to category II; if a < c and a < d, then "a" is classified as belonging to category III; if a < c and a > d, then "a" is classified as belonging to category IV; if a = c and a = d, then "a" is classified as belonging to category V. The vertical and horizontal categories are then combined to form a new category. As a result, pixel "a" can belong to 5 x 5 = 25 possible categories. This means that the texture attribute of "a" is characterized by the one of those 25 categories to which it belongs. For example, in the following neighborhood:

     7
10  10  14
     3

the pixel "a" = 10 is classified as belonging to the category that combines vertical category I (because 10 > 7 and 10 > 3) and horizontal category V (because 10 = 10). However, if pixel "a" had been located in the following neighborhood:

     7
11  10  14
     3

it would have been classified as belonging to a different category, because its horizontal category is now category III (10 < 11 and 10 < 14).
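A minimal sketch of this texture categorization, assuming numpy; border pixels are ignored and any relation not covered by categories I-IV is folded into category V here, which is a simplification of the sketch rather than part of the patent text:

```python
import numpy as np

def texture_category(img):
    """Assign each interior pixel one of 5 x 5 = 25 texture categories from
    the relations of pixel "a" to its four nearest neighbors: b (above),
    e (below), c (left) and d (right)."""
    a = img[1:-1, 1:-1].astype(float)
    b = img[:-2, 1:-1].astype(float)    # neighbor above
    e = img[2:, 1:-1].astype(float)     # neighbor below
    c = img[1:-1, :-2].astype(float)    # neighbor to the left
    d = img[1:-1, 2:].astype(float)     # neighbor to the right

    def direction_category(x, first, second):
        cat = np.full(x.shape, 5, dtype=int)          # V (equality / other)
        cat[(x > first) & (x > second)] = 1           # I
        cat[(x > first) & (x < second)] = 2           # II
        cat[(x < first) & (x < second)] = 3           # III
        cat[(x < first) & (x > second)] = 4           # IV
        return cat

    vertical = direction_category(a, b, e)
    horizontal = direction_category(a, c, d)
    return (vertical - 1) * 5 + horizontal            # combined category 1..25
```

Comparing the categories returned for matrices A, B and C pixel by pixel then yields the binary change map used below.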
By determining the relationship of each pixel with its near neighborhood, a texture attribute is formed that classifies each pixel into one of 25 possible categories. The number of categories can vary (increase or decrease) by, for example, changing the categorization conditions, such as the near neighbors used. For example, instead of four, eight near neighbors can be used. The basic concept by which texture changes are used to differentiate fluid, for example blood, from vessel is to monitor the change in the corresponding pixel categories in successive images. To achieve this, the category is determined for each and every pixel in matrices A, B and C. Then, each of the corresponding pixels is tested to see whether its category changed. If it changed, the pixel is suspected to be a fluid pixel, for example blood, or a pixel of surrounding tissue. If it did not change, the pixel is suspected to be a vessel pixel. The following example shows three corresponding pixels (with values of 8, 12 and 14) and their neighborhoods in the successive matrices A, B and C.
A                B                C

    5                9                1
 9  8  11       19  12  13       21  14  17
    23              100               20

In this example, the category of the pixel valued 12 (in B) is the same as in A and C, so it will be classified as a pixel with a higher likelihood of being a vessel wall pixel. If, however, the situation were as shown below (the 20 in C changes to 13):

A                B                C

    5                9                1
 9  8  11       19  12  13       21  14  17
    23              100               13

then the pixels 8 in A and 12 in B have the same categories, but 14 in C has a different category than in the previous example. As a result, pixel 12 in B will be classified as a pixel with a higher likelihood of being a fluid (lumen) pixel, that is, blood, or an exterior tissue pixel. The classification method described thus far monitors changes in the texture or pattern associated with the small neighborhood around each pixel. Once this change is determined as described above, each pixel can be assigned a binary value, for example, a value of 0 if it is suspected of being a vessel pixel, or a value of 1 if it is suspected of being a blood pixel or a pixel that belongs to the exterior of the vessel. The binary image serves as input for the lumen identification process, and the original pixel values no longer play a role in the segmentation process. The identification of the lumen using the binary image is based on two assumptions that are generally valid in intravascular ultrasound images processed in the manner described above. First, the areas in the image that contain blood or that lie outside the vessel are characterized by a high density of pixels with a binary value of 1 (or a low density of pixels with a value of zero). The term density is used because there are always misclassified pixels. The second assumption is that, from a morphological point of view, connected areas of high density of pixels with the value 1 (lumen) should be located around the catheter and surrounded by connected areas of low density of pixels with the value 1 (vessel), which are, in turn, surrounded again by connected areas of high density of pixels with the value 1 (exterior of the vessel). The reason for this assumption is the typical morphological arrangement expected of a blood vessel. These two assumptions form the basis of the subsequent processing algorithm, which extracts the actual area associated with the lumen from the binary image. This algorithm can use known image processing techniques, such as establishing a threshold for the density characteristic in localized regions (to distinguish blood/exterior from vessel) and morphological operators, such as dilation or linking, to interconnect and form a connected region that should represent the actual lumen found within the boundaries of the vessel walls. Figure 14 shows an image of the results of the algorithm for automatic extraction of the lumen. The image is an original intravascular ultrasound image (for example, as described above as image B) and the edges of the lumen are superimposed (by the algorithm) as a bright line. The algorithm for the extraction of the lumen edges was based on monitoring the change in the texture attribute described above, using three successive images. The above-described examples of temporal filtering and automatic segmentation include the use of two additional images (for example, as described above as images A and C) in addition to the current image (for example, as described above as image B). However, both of these methods can be modified to use fewer (that is, only one additional image) or more additional images.
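A minimal sketch of the final lumen-extraction step described above, assuming scipy is available; the density window, the threshold and the catheter_mask input (a binary map marking the catheter location) are illustrative assumptions and not taken from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_closing, label

def extract_lumen(binary_change_map, catheter_mask):
    """Threshold the local density of "changing" pixels (suspected blood or
    exterior), close small gaps morphologically, and keep the connected
    region that touches the catheter, i.e. the candidate lumen."""
    density = uniform_filter(binary_change_map.astype(float), size=7)
    candidate = binary_closing(density > 0.5, structure=np.ones((3, 3)))
    labels, _ = label(candidate)
    lumen_labels = np.unique(labels[(catheter_mask > 0) & (labels > 0)])
    return np.isin(labels, lumen_labels)
```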
The performance of the two methods described above is greatly improved if they are combined with cardiovascular periodicity monitoring. This applies, in particular, to successive images for which the monitoring of cardiovascular periodicity produces high inter-image approach values. These images usually exhibit little inter-image movement. Thus, the most reliable results can be expected when successive images with maximum inter-image approach are fed as input either to the temporal filtering or to the automatic segmentation. During the treatment of vessels using catheterization, it is common practice to repeat intravascular ultrasound traction (pullback) examinations of the same vessel segment. For example, a typical situation is to first examine the segment in question to assess the disease (if any), remove the intravascular ultrasound catheter, consider the therapy options, perform the therapy, and then immediately afterwards (during the same session) examine the treated segment again using intravascular ultrasound in order to assess the results of the therapy. In order to adequately assess the results of the therapy, the pre-treatment and post-treatment images lying at the same positions along the length of the vessel, that is, the corresponding segments, should be compared. The following method provides the matching, that is, the automatic identification (registration), of the corresponding segments.
To achieve the matching of the corresponding segments, approach/similarity operations are applied between images belonging to a first group of successive images, that is, a reference segment, of a first traction film and images belonging to a second group of successive images of a second traction film. The matching of the reference segment in one film with its corresponding segment in the other film is obtained when some criterion function is maximized. A reference segment is selected from one of the two films. The reference segment can be a group of successive images representing, for example, a few seconds of intravascular ultrasound film. It is important to select the reference segment from a position in the vessel that is present in both films and that has not changed as a result of any procedure, that is, the reference segment should be proximal or distal to the treated segment. As an example, the table in Figure 15 helps to clarify the method of matching the corresponding segments. The left column shows the time sequence of the first film; in this case the film consists of twenty successive images. The central column shows the reference segment, which is selected from the second film and consists of 10 successive images. The column on the right lists the 10 successive images of the first film (#5 - #14) that actually correspond (or match) to the images of the reference segment of the second film (#1 - #10). The purpose of the matching process is to reveal this correspondence. Once a reference segment is chosen, it is displaced along the other film by one image (or more) at a time, and a set of stabilization and approach operations is performed between the corresponding images in each segment. The direction of the displacement depends on the relative position of the reference segment in the time sequence of the two films. In general, however, if this is not known, the displacement can be performed in both directions. For example, where: r = reference segment and f = first film, the first set of operations will be performed between the images comprising the following pairs: r#1-f#1, r#2-f#2, r#3-f#3, ..., r#10-f#10. The second set of operations will be performed between the images comprising the following pairs: r#1-f#2, r#2-f#3, r#3-f#4, ..., r#10-f#11.
The third set of operations will be performed between the images comprising the following pairs: r#1-f#3, r#2-f#4, r#3-f#5, ..., r#10-f#12, and so on. As can be seen in this example, the displacement is performed one image at a time and in only one direction. For example, the following operations can be performed between the images in each pair. First, the image of the reference segment is stabilized for rotation and Cartesian movement in relation to its counterpart in the first film. Then the approach operations between the images in each pair are performed. This operation can be, for example, a normalized cross-correlation (discussed above in relation to the detection of periodicity). Each operation thus produces an approach value, for example, a cross-correlation coefficient when normalized cross-correlation is used. A set of these operations will produce numerous cross-correlation values. In the example shown in the table of Figure 15, each time the reference segment is shifted, ten new cross-correlation coefficients will be produced. The approach values produced by a set of operations can then be mapped into some kind of approach function, for example, an average. Using the previous example, the cross-correlation coefficients are added and then divided by the number of pairs, that is, ten. Each set of operations therefore results in a single value, that is, an average approach, which should represent the degree of closeness between the reference segment and its temporal counterpart in the first film. Thus, the result of the first set of operations will be a single value, the result of the second set of operations will be another value, and so on. The maximum average closeness can be expected to occur as a result of the operations carried out between segments that are very similar, that is, corresponding or matched segments. In the present example, these segments should be matched during the fifth set of operations, performed between the images comprising the following pairs: r#1-f#5, r#2-f#6, r#3-f#7, ..., r#10-f#14. The maximum average approach should therefore indicate the corresponding segments, because each pair of images consists, in fact, of corresponding images, that is, images showing the same morphology. The criterion need not, however, follow this exact algorithm. One can, for example, take into account the shape of the approach function derived from many displaced segment positions, instead of using only the single value that turns out to be the maximum. Once the corresponding segments are identified, the complete first and second films can be synchronized with one another. This is the result of a proper frame shift, revealed by the matching process, implemented in one film relative to the other. Thus, when the two films are viewed side by side, the pre-treatment segment will appear concurrently with the corresponding post-treatment segment. In addition to the synchronization of the corresponding segments, the above operation also stabilizes the corresponding segments in relation to each other. This further improves the ability to appreciate the changes in the morphology. Thus, even though the catheter's position and orientation are likely to have changed when it is reinserted into the vessel, the images of the pre-treatment and post-treatment films will nevertheless be stabilized in relation to each other. The number of images used for the reference segment may vary.
The more images used in the matching process, the more robust it is and the less prone to local errors. The drawback, however, is the greater computation time required for each matching process as the number of pairs increases. It is important to acquire traction films in which the traction rate remains stable and known. It is preferred that the traction rate be identical in the two acquisitions.
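A minimal sketch of the segment-matching (registration) procedure, assuming numpy and lists of images of equal size; the per-pair rotational and Cartesian stabilization described above is omitted for brevity, and the names are illustrative only:

```python
import numpy as np

def match_segments(reference, film):
    """Slide a reference segment (list of images) along a longer traction
    film one image at a time; for each offset, average the normalized
    cross-correlation over the corresponding image pairs and return the
    offset with the maximum average approach."""
    def ncc(A, B):
        A = A.astype(float)
        B = B.astype(float)
        return np.sum(A * B) / np.sqrt(np.sum(A * A) * np.sum(B * B))

    n_ref = len(reference)
    scores = [np.mean([ncc(r, f) for r, f in zip(reference, film[k:k + n_ref])])
              for k in range(len(film) - n_ref + 1)]
    best_offset = int(np.argmax(scores))
    return best_offset, scores
```

In the example of Figure 15 (a 10-image reference segment and a 20-image film), the maximum would be expected at the fifth set of operations, that is, a zero-based offset of four images, corresponding to the pairs r#1-f#5 through r#10-f#14.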
Many different variations of the present invention are possible. The various features described above can be incorporated individually and independently of one another. These features can also be combined in various groupings.

Claims (144)

CLAIMS
1. A device for forming intravascular ultrasound images, comprising: a transmitter and detector of ultrasound signals within a body lumen; and a processor coupled with the transmitter and detector of ultrasound signals, the processor programmed to: a. derive a first image from the detected ultrasound signals, b. derive a second image from the detected ultrasound signals, c. compare the second image with the first image, and d. process the first and second images.
  2. 2. The device according to the claim 1, wherein the comparison of the second image with the first image includes evaluating the second image in relation to the first image.
  3. The device according to claim 1, wherein the processor programmed to derive includes at least one between processing and scanning.
  4. The device according to claim 1, further comprising a visual display coupled to the processor.
  5. The device according to claim 1, wherein the derivation includes configuring a two-dimensional array.
  6. 6. The device according to claim 5, wherein the arrangement in two dimensions is configured in at least one between polar coordinates and Cartesian coordinates.
  7. The device according to claim 5, wherein the arrangement in two dimensions is configured in polar coordinates and Cartesian coordinates.
  8. 8. The device according to the claim 5, wherein the arrangement in two dimensions has a plurality of elements, each of the plurality of elements representing an ultrasound signal detected from a previously determined spatial position.
  9. 9. The device according to the claim 2, where evaluating the second image in relation to the first image includes the evaluation of the displacement.
  10. The device according to claim 2, wherein evaluating the second image in relation to the first image includes at least one approach operation.
  11. The device according to claim 10, wherein the at least one approach operation includes at least one between cross-correlation, normalized cross-correlation and sum of absolute differences.
  12. 12. The device according to claim 11, wherein the cross-correlation includes at least one between cross-correlation and Fourier transformation.
  13. The device according to claim 2, wherein evaluating the second image in relation to the first image is carried out using at least one between Cartesian coordinates and Polar coordinates.
  14. The device according to claim 2, wherein evaluating the second image in relation to the first image is carried out in at least one dimension.
  15. The device according to claim 1, wherein the processor is further programmed to detect at least one between Cartesian displacement, rotation movement and vessel movement.
  16. 16. The device according to the claim 15, where at least one between the Cartesian displacement and the rotation movement is rigid.
  17. The device according to claim 15, wherein - at least one between the Cartesian displacement and the rotational movement is local.
  18. 18. The device according to claim 15, wherein the movement of the vessel is global.
  19. 19. The device according to claim 15, wherein the movement of the vessel is local.
20. The device according to claim 1, wherein the processor is further programmed to automatically monitor changes in the signals for at least one of image enhancement and lumen identification.
21. The device according to claim 20, wherein the processing includes at least one of temporal change in texture and temporal filtering.
  22. 22. The device according to claim 1, wherein the processor is further programmed to automatically monitor cardiovascular periodicity.
  23. 23. The device according to the claim 1, wherein the processor is further programmed to automatically monitor the quality of the image.
  24. 24. An intravascular ultrasound imaging device comprising: a transmitter and detector of ultrasound signals within a body lumen; and a processor coupled to the transmitter and detector of ultrasound signals, the processor programmed to: a. derive a first image from a first set of detected ultrasound signals, b. derive a second image from a second set of detected ultrasound signals, c. compare the second image with the first image, d. automatically monitor the change in detected ultrasound signals, e. automatically monitor cardiovascular periodicity, and f. stabilize the second image in relation to the first image.
  25. 25. An intravascular ultrasound imaging device comprising: a transmitter and detector of ultrasound signals within a body lumen and moving through a section of the body lumen; a processor coupled to the transmitter and detector of ultrasound signals, the processor programmed to: a. deriving a first image from ultrasound signals detected during a first movement of the transmitter and detector of ultrasound signals through the section, b. deriving a second image from ultrasound signals detected during a second movement of the transmitter and detector of ultrasound signals through the section, c. compare the second image with the first image, and d. process the first and second images; and a display coupled to the processor, wherein the processor adjusts a display of the second image based on the comparison.
26. An intravascular ultrasound imaging device comprising: a transmitter and detector of ultrasound signals within a body lumen and moving through a section of the body lumen; a processor coupled to the transmitter and detector of ultrasound signals, the processor programmed to: a. derive a first image from ultrasound signals detected from a first portion of the section, b. derive a second image from ultrasound signals detected from a second portion of the section, c. compare the second image with the first image, and d. process the first and second images; and a display coupled to the processor, wherein the processor adjusts a display of the second image based on the comparison.
  27. 27. An intravascular ultrasound imaging device comprising: a transmitter and ultrasound signal detector located within a body lumen and being moved through a section of the body lumen; and a processor coupled to the transmitter and detector of ultrasound signals, the processor programmed to: a. derive a first image from a set of detected ultrasound signals, b. derive a second image from a set of detected ultrasound signals, c. perform an automatic supervision, and d. Evaluate the second image in relation to the first image.
28. The device according to claim 27, wherein the processor automatically monitors the first image and the second image for movement of the vessel.
  29. 29. The device according to the claim 28, where the movement of the vessel is at least one of movement of the local vessel and movement of the global vessel.
  30. 30. The device according to claim 27, wherein the processor is further programmed to form an approach function.
  31. The device according to claim 30, wherein the approach function is formed using at least one of cross-correlation, normalized cross-correlation and sum of absolute differences.
  32. 32. The device according to claim 30, wherein the processor automatically monitors the approach function to view the vascular periodicity.
  33. 33. The device according to claim 32, wherein the processor automatically monitors the approach function to see the vascular periodicity using at least one between crossing the threshold, the internal approach, the Fourier transformation and the spectrum analysis.
  34. 34. The device according to claim 30, wherein the approach function is analyzed to see the quality of the image.
  35. 35. The device according to claim 30, wherein the evaluation includes displacement evaluation.
  36. 36. An intravascular ultrasound imaging device comprising: a transmitter and detector of ultrasound signals located within a body lumen, and a processor coupled to the transmitter and ultrasound signal detector, the processor programmed to: a. derive a first image from a first set of detected ultrasound signals, b. derive a second image from a second set of detected ultrasound signals, c. evaluate the second image in relation to the first image, and d. stabilize the second image in relation to the first image.
37. The device according to claim 36, further comprising a display coupled to the processor for displaying the first image and the stabilized second image.
  38. 38. The device according to the claim 36, wherein stabilizing the second image in relation to the first image is achieved using at least one between Cartesian coordinates and polar coordinates.
  39. 39. The device according to claim 36, wherein stabilizing the second image in relation to the first image is achieved in at least one dimension.
  40. 40. The device according to claim 36, wherein the stabilization includes stabilizing at least one Cartesian displacement, a movement of rotation and movement of the vessel.
  41. 41. The device according to claim 40, wherein the stabilization includes stabilizing at least one of global, local or rigid movement.
  42. 42. The device according to claim 36, wherein the stabilization includes stabilizing each of a plurality of positions in the second image.
  43. 43. The device according to claim 36, wherein the stabilization includes moving the second image.
  44. 44. The device according to claim 36, wherein the stabilization includes adjusting the second image based on the evaluation.
  45. 45. The device according to claim 36, wherein the processor is further programmed to limit drag.
  46. 46. The device according to claim 43, wherein the processor is further programmed to limit the deviation by adjusting the displacement of the second image using information derived from the monitoring of the cardiovascular periodicity.
  47. 47. A method for forming ultrasound images comprising the steps of: placing a transmitter and detector of ultrasound signals, within a body lumen; detect ultrasound signals; derive a first image of the detected ultrasound signals; derive a second image of the detected ultrasound signals; compare the second image with the first image; and process the first image and the second image.
  48. 48. The method according to claim 47, further comprising the steps of displaying the first image and the second image.
  49. 49. The method according to claim 47, wherein the comparison includes evaluating the second image in relation to the first image.
  50. 50. The method according to claim 47, wherein the derivation includes at least one of processing and scanning.
  51. 51. The method according to claim 47, wherein the derivation includes configuring a two-dimensional array.
  52. 52. The method according to claim 51, wherein the two-dimensional array is configured in at least one of polar coordinates and Cartesian coordinates.
  53. 53. The method according to claim 51, wherein the arrangement in two dimensions has a plurality of elements, each of the plurality of elements representing an ultrasound signal detected from a previously determined spatial position.
  54. 54. The method according to claim 49, wherein the evaluation includes displacement evaluation.
  55. 55. The method according to claim 49, wherein the evaluation includes at least one approach operation.
  56. 56. The method according to claim 55, wherein the at least one approach operation includes at least one cross-correlation, a normalized cross-correlation and sum of absolute differences.
  57. 57. The method according to claim 56, wherein the cross-correlation includes at least one of direct cross-correlation and Fourier transformation.
  58. 58. The method according to claim 49, wherein the evaluation is achieved using at least one of Cartesian coordinates and polar coordinates.
59. The method according to claim 49, wherein the evaluation is achieved in at least one dimension.
  60. 60. The method according to claim 47, further comprising the step of detecting at least one Cartesian displacement, rotation movement, and vessel movement.
  61. 61. The method according to claim 60, wherein at least one of the Cartesian displacement and rotation movement is rigid.
  62. 62. The method according to claim 60, wherein at least one of the Cartesian shift and rotational movement is local.
  63. 63. The method according to claim 60, wherein the movement of the vessel is global.
  64. 64. The method according to claim 60, wherein the movement of the vessel is local.
65. The method according to claim 47, which further comprises the step of automatically monitoring offsets in the detected ultrasound signals.
  66. 66. The method according to claim 65, further comprising the step of improving the image.
67. The method according to claim 65, further comprising the lumen identification step.
  68. 68. The method according to claim 47, further comprising the step of automatically monitoring cardiovascular periodicity.
69. The method according to claim 47, further comprising the step of automatically monitoring the quality of the image.
70. A method for forming ultrasound images comprising the steps of: placing a transmitter and detector of ultrasound signals within a body lumen; detecting ultrasound signals; deriving a first image from a first set of detected ultrasound signals; deriving a second image from a second set of detected ultrasound signals; performing automatic supervision; evaluating the second image in relation to the first image; and processing the first image and the second image.
  71. 71. The method according to claim 70, further comprising the step of forming an approach function.
  72. 72. The method according to claim 71, wherein the approach function is formed using at least one of cross-correlation, normalized cross-correlation and sum of absolute differences.
  73. 73. The method according to claim 70, wherein the automatic monitoring monitors the first image and the second image to see the movement of the vessel.
  74. 74. The method according to claim 73, wherein the movement of the vessel is at least one of movement of the local vessel and movement of the overall vessel.
  75. 75. The method according to claim 71, wherein the automatic monitoring monitors the approach function to see the cardiovascular periodicity.
  76. 76. The method according to claim 75, wherein the automatic monitoring includes at least one of boundary crossing, internal approach, Fourier transformation and spectrum analysis.
  77. 77. The method according to claim 71, wherein the approach function is analyzed to see the quality of the image.
  78. 78. The method according to claim 70, wherein the evaluation includes displacement evaluation.
79. A method for forming ultrasound images comprising the steps of: placing a transmitter and detector of ultrasound signals within a body lumen; detecting ultrasound signals; deriving a first image from a first set of detected ultrasound signals; deriving a second image from a second set of detected ultrasound signals; evaluating the second image in relation to the first image; and stabilizing the second image in relation to the first image.
  80. 80. The method according to claim 79, further comprising the step of displaying the first image and the second stabilized image.
  81. 81. The method according to claim 79, wherein the stabilization is achieved using at least one of Cartesian coordinates and polar coordinates.
  82. 82. The method according to claim 79, wherein the stabilization is achieved in at least one dimension.
  83. 83. The method according to claim 79, wherein the stabilization includes stabilization for at least one of Cartesian displacement, rotation movement and vessel movement.
  84. 84. The method according to claim 83, wherein the stabilization includes stabilization for at least one of global, local and rigid movement.
  85. 85. The method according to claim 79, wherein the stabilization includes stabilizing each of a plurality of positions in the second image.
  86. 86. The method according to claim 79, wherein the stabilization includes displacement of the second image.
  87. 87. The method according to claim 79, where the stabilization includes adjusting the second image based on the evaluation.
  88. 88. The method according to claim 79, further comprising the step of limiting the deviation.
  89. 89. The method according to claim 86, further comprising the step of limiting the deviation, and wherein the limitation includes adjusting the displacement of the second image using information derived from the monitoring of the cardiovascular periodicity.
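As a non-authoritative sketch of the stabilization and drift-limiting steps of claims 79 through 89: each frame can be shifted by the displacement that best matches the preceding frame, with the accumulated correction clamped so that the stabilized sequence cannot drift arbitrarily far from the first frame. The exhaustive sum-of-absolute-differences search, the circular shift, and the clamp threshold below are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def estimate_shift(reference, frame, max_shift=8):
    """Corrective (dx, dy), in pixels, that minimizes the sum of absolute
    differences when applied to `frame` (circular shift used for brevity)."""
    ref = reference.astype(float)
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = np.roll(np.roll(frame, dy, axis=0), dx, axis=1).astype(float)
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best

def stabilize_sequence(frames, max_shift=8, drift_limit=15):
    """Align every frame to the first frame by accumulating frame-to-frame
    corrections; the running total is clamped to limit drift."""
    stabilized = [frames[0]]
    total_dx = total_dy = 0
    for previous, frame in zip(frames, frames[1:]):
        dx, dy = estimate_shift(previous, frame, max_shift)
        total_dx = int(np.clip(total_dx + dx, -drift_limit, drift_limit))
        total_dy = int(np.clip(total_dy + dy, -drift_limit, drift_limit))
        stabilized.append(np.roll(np.roll(frame, total_dy, axis=0), total_dx, axis=1))
    return stabilized
```

The variant of claim 89, where the displacement is adjusted using cardiovascular-periodicity information, might for example reset the accumulated correction at a fixed cardiac phase instead of clamping it; that choice is likewise only a sketch.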
  90. A method for forming ultrasound images, comprising the steps of: placing an ultrasound signal transmitter and detector within a body lumen; detecting ultrasound signals; deriving a first image from a first set of detected ultrasound signals; deriving a second image from a second set of detected ultrasound signals; comparing the second image with the first image; automatically monitoring displacement in the detected ultrasound signals; automatically monitoring cardiovascular periodicity; and stabilizing the second image relative to the first image.
  91. A method for forming ultrasound images, comprising the steps of: placing an ultrasound signal transmitter and detector within a body lumen; moving the ultrasound signal transmitter and detector through a section of the body lumen; detecting ultrasound signals; deriving a first image from the ultrasound signals detected during a first movement of the ultrasound signal transmitter and detector through the section; deriving a second image from the ultrasound signals detected during a second movement of the ultrasound signal transmitter and detector through the section; comparing the second image with the first image; adjusting the second image; and displaying the adjusted second image.
  92. A method for forming ultrasound images, comprising the steps of: placing an ultrasound signal transmitter and detector within a body lumen; moving the ultrasound signal transmitter and detector through a section of the body lumen; detecting ultrasound signals; deriving a first image from the ultrasound signals detected from a first portion of the section; deriving a second image from the ultrasound signals detected from a second portion of the section; comparing the second image with the first image; adjusting the second image; and displaying the adjusted second image.
  93. A method for forming ultrasound images, comprising the steps of: placing an ultrasound signal transmitter and detector within a body lumen; detecting ultrasound signals; deriving a first series of images from a first set of detected ultrasound signals; deriving a second series of images from a second set of detected ultrasound signals; comparing the first series of images with the second series of images; and automatically pairing the first series of images and the second series of images.
  94. The method according to claim 93, wherein the pairing includes identifying the corresponding images.
  95. The method according to claim 93, wherein at least a portion of the first series of images is a reference segment, and wherein at least a portion of the second series of images is a non-reference segment.
  96. The method according to claim 95, wherein the pairing includes shifting the non-reference segment by an image relative to the reference segment.
  97. The method according to claim 95, wherein the pairing includes stabilizing the non-reference segment relative to the reference segment.
  98. The method according to claim 97, wherein the stabilization is performed individually on each of the corresponding images of the reference and non-reference segments.
  99. The method according to claim 97, wherein the stabilization is performed individually on each of the corresponding images of the first series and the second series of images.
  100. The method according to claim 93, wherein the pairing includes a matching operation.
  101. The method according to claim 100, wherein the matching operation includes one of cross-correlation and normalized cross-correlation.
  102. The method according to claim 93, wherein the first series of images is derived from a first movement of the ultrasound signal transmitter and detector along a first section of the body lumen, and wherein the second series of images is derived from a second movement of the ultrasound signal transmitter and detector along a second section of the body lumen.
  103. The method according to claim 102, wherein the first section and the second section of the body lumen are approximately coextensive.
  104. The method according to claim 93, wherein the comparison includes evaluating the second image relative to the first image.
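Purely to illustrate the pairing of two image series recited in claims 93 through 104: the non-reference series can be slid over the reference series along the frame (Z) axis, the offset with the best average frame similarity selected, and the corresponding images paired at that offset. The sketch assumes both series are lists of equally sized 2-D NumPy arrays; the function names and the fixed search window are hypothetical.

```python
import numpy as np

def frame_similarity(a, b):
    """Zero-mean normalized cross-correlation between two frames."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def pair_series(reference_series, other_series, max_offset=10):
    """Slide the non-reference series over the reference series (frame index,
    i.e. Z direction), keep the offset with the highest mean similarity, and
    return that offset plus the paired (reference_index, other_index) list."""
    best_offset, best_score = 0, -np.inf
    for offset in range(-max_offset, max_offset + 1):
        scores = []
        for i, ref in enumerate(reference_series):
            j = i + offset
            if 0 <= j < len(other_series):
                scores.append(frame_similarity(ref, other_series[j]))
        if scores and np.mean(scores) > best_score:
            best_score, best_offset = float(np.mean(scores)), offset
    pairs = [(i, i + best_offset) for i in range(len(reference_series))
             if 0 <= i + best_offset < len(other_series)]
    return best_offset, pairs
```

Each resulting pair could then be stabilized individually, as contemplated in claims 97 through 99.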
  105. An intravascular ultrasound imaging device, comprising: an ultrasound signal transmitter and detector located within a body lumen; and a processor coupled to the ultrasound signal transmitter and detector, the processor programmed to: a. derive a first series of images from a first set of detected ultrasound signals; b. derive a second series of images from a second set of detected ultrasound signals; c. compare the first series of images with the second series of images; and d. automatically pair the first series of images and the second series of images.
  106. The device according to claim 105, wherein comparing the second image with the first image includes evaluating the second image relative to the first image.
  107. The device according to claim 105, wherein the pairing includes identifying the corresponding images.
  108. The device according to claim 105, wherein at least a portion of the first series of images is a reference segment, and wherein at least a portion of the second series of images is a non-reference segment.
  109. The device according to claim 108, wherein the pairing includes shifting the non-reference segment by an image relative to the reference segment.
  110. The device according to claim 108, wherein the pairing includes stabilizing the non-reference segment relative to the reference segment.
  111. The device according to claim 110, wherein the stabilization is performed individually on each of the corresponding images from the reference and non-reference segments.
  112. The device according to claim 110, wherein the stabilization is performed individually on each of the corresponding images from the first series and the second series of images.
  113. The device according to claim 105, wherein the pairing includes a matching operation.
  114. The device according to claim 113, wherein the matching operation includes one of cross-correlation and normalized cross-correlation.
  115. The device according to claim 105, wherein the first series of images is derived from a first movement of the ultrasound signal transmitter and detector along a first section of the body lumen, and wherein the second series of images is derived from a second movement of the ultrasound signal transmitter and detector along a second section of the body lumen.
  116. The device according to claim 115, wherein the first section and the second section of the body lumen are approximately coextensive.
  117. The device according to claim 1, further comprising a probe coupled to the ultrasound signal transmitter and detector.
  118. The device according to claim 117, wherein the probe is at least one of a catheter and a guidewire.
  119. The device according to claim 1, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  120. The device according to claim 25, further comprising a probe coupled to the ultrasound signal transmitter and detector, the probe moving the ultrasound signal transmitter and detector through the section.
  121. The device according to claim 120, wherein the probe is at least one of a catheter and a guidewire.
  122. The device according to claim 25, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  123. The device according to claim 26, further comprising a probe coupled to the ultrasound signal transmitter and detector, the probe moving the ultrasound signal transmitter and detector through the section.
  124. The device according to claim 123, wherein the probe is at least one of a catheter and a guidewire.
  125. The device according to claim 26, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  126. The device according to claim 27, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  127. The device according to claim 36, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  128. The method according to claim 47, wherein the ultrasound signal transmitter and detector is coupled to a probe.
  129. The method according to claim 128, wherein the probe is at least one of a catheter and a guidewire.
  130. The method according to claim 47, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  131. The method according to claim 70, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  132. The method according to claim 79, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  133. The method according to claim 91, wherein the ultrasound signal transmitter and detector is coupled to a probe, the probe moving the ultrasound signal transmitter and detector.
  134. The method according to claim 133, wherein the probe is at least one of a catheter and a guidewire.
  135. The method according to claim 91, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  136. The method according to claim 92, wherein the ultrasound signal transmitter and detector is coupled to a probe, the probe moving the ultrasound signal transmitter and detector.
  137. The method according to claim 136, wherein the probe is at least one of a catheter and a guidewire.
  138. The method according to claim 92, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  139. The method according to claim 102, wherein the ultrasound signal transmitter and detector is coupled to a probe, the probe moving the ultrasound signal transmitter and detector.
  140. The method according to claim 139, wherein the probe is at least one of a catheter and a guidewire.
  141. The method according to claim 93, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
  142. The device according to claim 115, further comprising a probe coupled to the ultrasound signal transmitter and detector, the probe moving the ultrasound signal transmitter and detector through the section.
  143. The device according to claim 142, wherein the probe is at least one of a catheter and a guidewire.
  144. The device according to claim 105, wherein the ultrasound signal transmitter and detector includes a separate transmitter and a separate detector.
MXPA/A/1998/005013A 1997-06-19 1998-06-19 Improved processing of images and signals of ultrasound intravascu MXPA98005013A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08879125 1997-06-19

Publications (1)

Publication Number Publication Date
MXPA98005013A (en) 1999-09-20

Similar Documents

Publication Publication Date Title
EP0885594B1 (en) Intravascular ultrasound enhanced image and signal processing
US9582876B2 (en) Method and apparatus to visualize the coronary arteries using ultrasound
Zhang et al. Tissue characterization in intravascular ultrasound images
Kovalski et al. Three-dimensional automatic quantitative analysis of intravascular ultrasound images
JP6106190B2 (en) Visualization method of blood and blood likelihood in blood vessel image
Chung et al. Freehand three-dimensional ultrasound imaging of carotid artery using motion tracking technology
US20060173292A1 (en) Biological tissue motion trace method and image diagnosis device using the trace method
NZ330692A (en) Intravascular ultrasound imaging, image enhancement to compensate for movement of ultrasound probe and lumen
KR20010014492A (en) Automated segmentation method for 3-dimensional ultrasound
CA2298282A1 (en) Semi-automated segmentation method for 3-dimensional ultrasound
DE112007001982T5 (en) Pulse echo device
Hamers et al. A novel approach to quantitative analysis of intra vascular ultrasound images
Meier et al. Automated morphometry of coronary arteries with digital image analysis of intravascular ultrasound
WO2004054447A1 (en) Ultrasonic apparatus for estimating artery parameters
Katouzian et al. Automatic detection of luminal borders in IVUS images by magnitude-phase histograms of complex brushlet coefficients
MXPA98005013A (en) Improved processing of images and signals of ultrasound intravascu
Dijkstra et al. Quantitative coronary ultrasound: State of the art
Matsumoto et al. Cardiac phase detection in intravascular ultrasound images
Gangidi et al. Automatic segmentation of intravascular ultrasound images based on temporal texture analysis
Gangidi et al. A New Feature Extraction Approach for Segmentation of Intravascular Ultrasound Images
Manandhar et al. An automated robust segmentation method for intravascular ultrasound images
Hernàndez-Sabaté et al. The Benefits of IVUS Dynamics for Retrieving Stable Models of Arteries
WO2022013832A2 (en) A system for visual data analysis of ultrasound examinations with and without a contrast medium, for early automated diagnostics of pancreatic pathologies
Brusseau et al. Fully automated endoluminal contour detection in intracoronary ultrasound images: a pre-processing for intravascular elastography