Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely some, and not all, of the embodiments of the invention, and that the invention is not limited to the example embodiments described herein. All other embodiments that a person skilled in the art can derive from the embodiments described herein without inventive effort shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, a detailed structure will be set forth in the following description in order to explain the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
In the following, an ultrasound imaging system according to an embodiment of the present invention is first described with reference to fig. 1, which shows a schematic structural block diagram of an ultrasound imaging system 100 according to an embodiment of the present invention.
As shown in fig. 1, the ultrasound imaging system 100 includes an ultrasound probe 110, transmit circuitry 112, receive circuitry 114, a processor 116, and a display 118. Further, the ultrasound imaging system may include a transmit/receive selection switch 120 and a beam forming module 122, and the transmit circuitry 112 and the receive circuitry 114 may be connected to the ultrasound probe 110 through the transmit/receive selection switch 120.
The ultrasound probe 110 includes a plurality of transducer elements, which may be arranged in a line to form a linear array, in a two-dimensional matrix to form an area array, or in a convex array. Each transducer element transmits ultrasonic waves in response to an excitation electrical signal and converts received ultrasonic waves into electrical signals, so that each element provides mutual conversion between electrical pulse signals and ultrasonic waves, enabling the transmission of ultrasonic waves into the tissue of a target region of the object under examination and the reception of the ultrasonic echoes reflected back by that tissue. During ultrasound detection, a transmit sequence and a receive sequence control which transducer elements transmit ultrasonic waves and which receive them; alternatively, the elements may be controlled to transmit ultrasonic waves and receive echoes in alternating time slots. The transducer elements participating in transmission may be excited by electrical signals simultaneously, so that they emit ultrasonic waves at the same time; alternatively, they may be excited by several electrical signals separated by certain time intervals, so that ultrasonic waves are emitted successively at those intervals.
During ultrasound imaging, the processor 116 controls the transmit circuitry 112 to send delay-focused transmit pulses to the ultrasound probe 110 through the transmit/receive selection switch 120. Excited by the transmit pulses, the ultrasound probe 110 transmits an ultrasound beam into the tissue of the target region of the object under examination, receives, after a certain delay, the ultrasound echoes carrying tissue information reflected from that tissue, and converts the echoes back into electrical signals. The receive circuitry 114 receives the electrical signals generated by the ultrasound probe 110 to obtain ultrasound echo signals and sends them to the beam forming module 122, which performs focusing delay, weighting, channel summation, and similar processing on the ultrasound echo data before sending it to the processor 116. The processor 116 performs signal detection, signal enhancement, data conversion, logarithmic compression, and the like on the ultrasound echo signals to form an ultrasound image. The ultrasound images obtained by the processor 116 may be displayed on the display 118 or stored in the memory 124.
Alternatively, the processor 116 may be implemented in software, hardware, firmware, or any combination thereof, and may use one or more application-specific integrated circuits (ASICs), one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, any combination of the foregoing, or other suitable circuits or devices. The processor 116 may also control other components of the ultrasound imaging system 100 to perform the respective steps of the methods in the various embodiments herein.
The display 118 is connected to the processor 116 and may be a touch display screen, a liquid crystal display screen, or the like; alternatively, it may be a separate display, such as a liquid crystal display or a television, independent of the ultrasound imaging system 100; alternatively, it may be the display screen of an electronic device such as a smartphone or a tablet. There may be one or more displays 118.
The display 118 may display the ultrasound images obtained by the processor 116. In addition, while displaying an ultrasound image, the display 118 can provide a graphical interface for human-computer interaction, on which one or more controlled objects are arranged; the user can input operation instructions through the human-computer interaction device to control these objects and thereby execute the corresponding control operations. For example, an icon displayed on the graphical interface can be operated with the human-computer interaction device to perform a specific function, such as drawing a region-of-interest box on the ultrasound image.
Optionally, the ultrasound imaging system 100 may further include a human-computer interaction device other than the display 118, which is connected to the processor 116, for example, the processor 116 may be connected to the human-computer interaction device through an external input/output port, which may be a wireless communication module, a wired communication module, or a combination thereof. The external input/output port may also be implemented based on USB, bus protocols such as CAN, and/or wired network protocols, etc.
The human-computer interaction device may include an input device for detecting a user's input information, for example, control instructions for the transmit/receive timing of the ultrasonic waves, operation instructions for drawing points, lines, boxes, or the like on the ultrasound images, or other instruction types. The input device may include one or more of a keyboard, a mouse, a scroll wheel, a trackball, a mobile input device (e.g., a mobile device with a touch-screen display, a cell phone, etc.), a multi-function knob, and the like. The human-computer interaction device may also include an output device such as a printer.
The ultrasound imaging system 100 may also include a memory 124 for storing instructions executed by the processor, received ultrasound echoes, ultrasound images, and so forth. The memory may be a flash memory card, solid-state memory, a hard disk, or the like, and may be volatile and/or non-volatile memory, removable and/or non-removable memory, etc.
It should be understood that the components included in the ultrasound imaging system 100 shown in fig. 1 are merely illustrative and that more or fewer components may be included. The invention is not limited in this regard.
In the following, an ultrasound imaging method of a fetal target site according to an embodiment of the present invention will be described with reference to fig. 2, which may be implemented in the ultrasound imaging system 100 described above. Fig. 2 is a schematic flow chart of a method 200 of ultrasound imaging of a fetal target site in accordance with an embodiment of the present invention.
As shown in fig. 2, the method 200 for ultrasonic imaging of a fetal target site according to one embodiment of the present invention comprises the following steps:
in step S210, transmitting an ultrasonic wave to a fetus to be detected, and receiving an echo of the ultrasonic wave to obtain an echo signal of the ultrasonic wave;
in step S220, obtaining a first ultrasound image of the fetus based on the echo signal, and displaying the first ultrasound image in a display area;
in step S230, identifying a first target region in the first ultrasound image corresponding to a target site of the fetus, the target site including a fetal cranium, a fetal heart, a fetal abdomen, a fetal limb, a fetal head and neck, or a fetal whole body of the fetus;
in step S240, the first ultrasound image is magnified and translated based on the first target area to obtain a locally magnified image including the target portion, and the locally magnified image is displayed in the display area, where an area of the target portion corresponding to the locally magnified image is a second target area, the magnification is used to make a second proportion of the second target area in the locally magnified image greater than a first proportion of the first target area in the first ultrasound image, and the translation is used to make the second target area located in a middle position of the display area.
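The combined enlargement and translation of step S240 can be sketched as follows. This is a purely illustrative sketch, not part of the claimed method: the function name `enlarge_and_center` and its parameters are hypothetical, and it uses integer pixel replication for the zoom, whereas an actual system would interpolate.

```python
import numpy as np

def enlarge_and_center(image, roi, display_shape, target_ratio=0.5):
    """Step S240 in miniature: given an identified ROI (x, y, w, h), pick a
    magnification so the ROI spans `target_ratio` of the display width,
    zoom by pixel replication, and crop a display-sized window whose
    centre coincides with the enlarged ROI centre."""
    dh, dw = display_shape
    x, y, w, h = roi
    m = max(1, round(target_ratio * dw / w))       # integer zoom keeps the sketch simple
    zoomed = np.kron(image, np.ones((m, m), dtype=image.dtype))
    cx, cy = (x + w // 2) * m, (y + h // 2) * m    # centre of the enlarged ROI
    x0, y0 = cx - dw // 2, cy - dh // 2            # translation: window top-left corner
    pad = max(dw, dh)                              # pad so the window never leaves the image
    padded = np.pad(zoomed, pad)
    return padded[y0 + pad : y0 + pad + dh, x0 + pad : x0 + pad + dw]

# toy 32x32 "ultrasound image" whose target occupies the block [8:16, 8:16]
img = np.zeros((32, 32), dtype=int)
img[8:16, 8:16] = 1
view = enlarge_and_center(img, (8, 8, 8, 8), (24, 24), target_ratio=0.5)
print(view.shape, view[12, 12])    # the enlarged target is now centred in the 24x24 display
```

The padding step simply models the fact that, after zooming and translating, parts of the display window may fall outside the image and are shown as background.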
The ultrasonic imaging method 200 for a fetal target site provided by the embodiment of the present invention automatically identifies the region corresponding to the fetal target site in the ultrasound image and enlarges and centers that region, without requiring the physician to enlarge and translate the image manually, thereby improving the physician's prenatal screening efficiency and helping to alleviate the shortage of sonographers.
In particular, with reference to the ultrasound imaging system 100 of fig. 1, during ultrasound imaging of the fetus to be examined, the transmit circuitry 112 sends a set of delay-focused transmit pulses to the ultrasound probe 110 to excite it to transmit ultrasonic waves along a two-dimensional scan plane toward the target tissue. The receive circuitry 114 controls the ultrasound probe 110 to receive the ultrasonic echoes reflected by the target tissue and converts them into electrical signals; the beam forming module 122 performs corresponding delay and weighted-summation processing on the echo signals obtained from multiple transmissions and receptions to realize beamforming, and then sends the ultrasound echo signals to the processor 116, which performs logarithmic compression, dynamic range adjustment, digital scan conversion, and similar processing to generate a first ultrasound image. The first ultrasound image may be a two-dimensional or a three-dimensional ultrasound image.
After the first ultrasound image is obtained, a first target region corresponding to the target site of the fetus is identified in it. The target site comprises the fetal cranium, fetal heart, fetal abdomen, fetal limbs, fetal head and neck, or the whole body of the fetus. The fetus to be examined may be in an early or late stage of pregnancy, and fetuses at different stages may have different target sites. During a pregnancy examination, the physician usually performs ultrasound imaging of only one target site at a time, so the first ultrasound image usually includes only one first target region corresponding to a single target site.
Illustratively, a machine learning method may be used to identify the first target region: features or rules that distinguish target regions from non-target regions are learned from a database, and the first target region in the first ultrasound image is then identified according to the learned features or rules. A database must first be constructed containing a number of training sample images with calibration results; a calibration result may be a region-of-interest box (ROI box) corresponding to the target site, or a segmentation mask (mask) that precisely delineates the region corresponding to the target site.
The machine learning method may specifically include the following. The first is a traditional sliding-window method: features extracted from each sliding window are matched against the database to determine whether the current window is a region corresponding to a target site, and the corresponding category of the window is obtained at the same time. The second is a bounding-box detection method based on deep learning, which stacks convolutional layers and fully connected layers to perform feature learning and parameter regression on the bounding boxes of the regions of interest in the constructed database, thereby identifying the region-of-interest box corresponding to the target site in the first ultrasound image. The third is an end-to-end semantic segmentation network based on deep learning, similar to the second method except that the fully connected layers are removed and upsampling or deconvolution layers are added, finally yielding the first target region in the input first ultrasound image together with its corresponding category.
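The first, sliding-window method can be sketched as follows, with normalized cross-correlation against a single template standing in for matching window features against a database. This is a deliberately simplified, hypothetical illustration; `find_target_roi` is not a name from the disclosure, and a real detector would use learned features rather than one template.

```python
import numpy as np

def find_target_roi(image, template, stride=4):
    """Slide a window the size of `template` over `image` and return the
    (x, y, w, h) box whose normalised correlation with the template is
    highest -- a stand-in for matching window features against a database."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_box = -np.inf, None
    for y in range(0, image.shape[0] - th + 1, stride):
        for x in range(0, image.shape[1] - tw + 1, stride):
            win = image[y:y+th, x:x+tw]
            z = (win - win.mean()) / (win.std() + 1e-9)
            score = float((z * t).mean())
            if score > best:
                best, best_box = score, (x, y, tw, th)
    return best_box

# toy example: a bright disc embedded in noise at (x, y) = (30, 20)
rng = np.random.default_rng(0)
img = rng.normal(0, 0.1, (64, 64))
yy, xx = np.mgrid[:16, :16]
disc = ((xx - 8) ** 2 + (yy - 8) ** 2 < 36).astype(float)
img[20:36, 30:46] += disc
print(find_target_roi(img, disc))   # a box near (30, 20, 16, 16)
```

The coarse stride is only for speed in the sketch; the reported box lands within one stride of the true location.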
After the first target region is identified, the first ultrasound image is adaptively enlarged and translated based on the first target region to obtain a locally enlarged image containing the target site, and the locally enlarged image is displayed in the display area. The region corresponding to the target site in the locally enlarged image is a second target region: the enlargement makes the second proportion occupied by the second target region in the locally enlarged image larger than the first proportion occupied by the first target region in the first ultrasound image, and the translation places the second target region at the center of the display area. In this way, a locally enlarged image with the target site suitably sized and centered is obtained without the physician having to enlarge and translate the image manually.
Illustratively, the enlargement may be performed in, but is not limited to, the following two ways: the first enlarges according to the size of the first target region; the second enlarges according to a preset magnification.
When the first method is adopted, a first proportion of the first target region in the first ultrasound image is determined, and a target value for the second proportion of the second target region in the locally enlarged image is obtained; the magnification is then determined from the first proportion and the target value of the second proportion, and the first ultrasound image is enlarged accordingly. Enlarging according to the size of the first target region ensures that the size of the second target region in the locally enlarged image meets expectations, avoiding a second target region that is too large or too small.
The first ratio may be a ratio of a width of the first target region to a width of the first ultrasound image, and correspondingly, the second ratio is a ratio of a width of the second target region to a width of the locally-enlarged image, and the ratio of the width of the enlarged second target region to the width of the locally-enlarged image may be greater than a preset threshold value by performing enlargement processing according to the width of the first target region. Alternatively, the first ratio may be a ratio of a height of the first target region to a height of the first ultrasound image, and accordingly, the second ratio may be a ratio of a height of the second target region to a height of the locally enlarged image, or the first ratio may be a ratio of an area of the first target region to an area of the first ultrasound image, and accordingly, the second ratio may be a ratio of an area of the second target region to an area of the locally enlarged image. Of course, when the enlargement processing is performed, the enlargement may also be performed based on at least two of the width, the height, and the area of the first target region at the same time, so that the proportions of the at least two of the width, the height, and the area of the enlarged second target region in the locally enlarged image are all greater than the preset threshold.
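The relationship between the proportions and the magnification described above can be made concrete: a linear proportion (width or height) scales directly with the zoom factor, while an area proportion scales with its square. A minimal sketch, with hypothetical function names:

```python
def magnification_factor(first_ratio, target_ratio):
    """Magnification that raises a linear proportion (width or height of
    the target region relative to the image) from `first_ratio` to
    `target_ratio`: linear extents scale directly with the zoom factor."""
    return target_ratio / first_ratio

def magnification_from_area(first_area_ratio, target_area_ratio):
    """Area proportions scale with the square of the linear magnification,
    so the required factor is the square root of the area-ratio quotient."""
    return (target_area_ratio / first_area_ratio) ** 0.5

# an ROI spanning 25% of the image width, enlarged to span 75% of the view:
print(magnification_factor(0.25, 0.75))         # 3.0
# an ROI covering 6.25% of the image area, enlarged to cover 56.25%:
print(magnification_from_area(0.0625, 0.5625))  # 3.0
```

When enlarging by width, height, and area simultaneously, the largest of the individual factors would be taken so that every proportion exceeds its threshold.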
As described above, when identifying the first target region, the finally determined first target region may be a region defined by a region of interest box (ROI box) for framing the target site, and the width, height and area of the first target region are respectively the width, height and area of the region of interest box, that is, the first ultrasound image is enlarged according to the width, height or area of the region of interest box.
Alternatively, the first target region is a region defined by a segmentation mask (mask) of the target site, and the width, height, and area of the first target region are the width, height, and area of the segmentation mask, respectively. The segmentation mask usually has an irregular shape; its area can be determined from the number of pixels it contains, and methods for determining its width and height include, but are not limited to, those described below.
Illustratively, one method of determining the width and height of the segmentation mask is: determine the centroid of the segmentation mask; take the distance between the left and right intersection points of a horizontal line through the centroid with the mask boundary as the width of the mask; and take the distance between the upper and lower intersection points of a vertical line through the centroid with the mask boundary as the height of the mask. Another method is: find the maximum extent of the segmentation mask in the horizontal direction and take it as the width of the mask; and find the maximum extent in the vertical direction and take it as the height of the mask. Here, the horizontal and vertical directions are the horizontal and vertical directions of the display area, respectively.
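Both ways of measuring the segmentation mask can be sketched with NumPy on a binary mask. This is an illustrative sketch: `mask_geometry` is a hypothetical name, and the centroid is rounded to the nearest pixel.

```python
import numpy as np

def mask_geometry(mask):
    """Centroid, width, height, and area of a binary segmentation mask.
    Width  = extent of the mask row passing through the centroid;
    height = extent of the mask column passing through the centroid;
    area   = number of mask pixels."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(round(ys.mean())), int(round(xs.mean()))
    row = np.nonzero(mask[cy, :])[0]       # horizontal line through the centroid
    col = np.nonzero(mask[:, cx])[0]       # vertical line through the centroid
    width = int(row.max() - row.min() + 1)
    height = int(col.max() - col.min() + 1)
    area = int(mask.sum())
    # alternative per the second method: maximum extent in each direction
    # width  = int(xs.max() - xs.min() + 1)
    # height = int(ys.max() - ys.min() + 1)
    return (cx, cy), width, height, area

mask = np.zeros((10, 10), dtype=int)
mask[2:8, 3:7] = 1                         # a 4-wide, 6-tall rectangle
print(mask_geometry(mask))                 # ((4, 4), 4, 6, 24)
```

For a convex mask such as this rectangle the two methods agree; for an irregular mask the centroid-line extents can be smaller than the maximum extents.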
Another method for magnifying the first ultrasound image is to magnify the first ultrasound image according to a preset magnification. Because the sizes of the first target regions corresponding to different types of target portions are usually different, at least two different types of target portions may correspond to different magnifications, for example, the fetal cranium, the fetal heart, the fetal abdomen, the fetal limbs, the fetal head and neck, or the fetal whole body of the fetus may be respectively provided with respective magnifications, so that the size of the magnified second target region can better meet the clinical requirements. Specifically, the type of the target portion may be determined according to the first target region, and the preset magnification may be determined according to the type of the target portion. Optionally, the same magnification may be set for different types of target sites such as fetal cranium, fetal heart, fetal abdomen, fetal limbs, fetal head and neck, or fetal whole body.
Since the display area is fixed, after the first ultrasound image is enlarged it cannot be guaranteed that the enlarged first target region lies at the center of the display area, or even inside the display area at all; the enlarged first ultrasound image therefore needs to be translated so that the target site is centered in the locally enlarged image shown in the display area.
Specifically, the center position of the enlarged first target region is determined in the enlarged first ultrasound image, and the center position of the display area is determined. The direction and distance of the translation are then determined from the center position of the display area and that of the enlarged first target region, and the enlarged first ultrasound image is translated accordingly, moving the center of the enlarged first target region to the center of the display area. The center position of the first target region may be the geometric center of the region-of-interest box or the centroid of the segmentation mask. In some embodiments, the first ultrasound image may instead be translated first, moving the center of the first target region to the center of the display area, and the translated image then enlarged about the center of the display area.
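The translation direction and distance follow directly from the two center positions. Assuming image coordinates scale linearly with the magnification, a minimal sketch (the function name is hypothetical):

```python
def translation_after_zoom(roi_center, magnification, display_center):
    """After enlarging the image by `magnification`, a point at (cx, cy)
    moves to (m*cx, m*cy); translating by the returned (dx, dy) brings the
    enlarged target-region centre onto the display centre."""
    cx, cy = roi_center
    dx = display_center[0] - cx * magnification
    dy = display_center[1] - cy * magnification
    return dx, dy

# ROI centred at (200, 150) in the first image, enlarged 2x, on a display
# whose centre is (512, 384):
print(translation_after_zoom((200, 150), 2, (512, 384)))   # (112, 84)
```

A positive component shifts the enlarged image right/down; a negative one shifts it left/up.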
After the enlargement and translation are completed, the locally enlarged image is displayed in the display area, with the second target region corresponding to the target site located at the center of the display area and occupying a larger proportion of it. The locally enlarged image is a portion of the enlarged first ultrasound image, and the second target region is the enlarged first target region. Thereafter, measurements, annotations, posture diagrams, and other operations may be performed on the locally enlarged image; the user may choose to measure, annotate, or perform other operations manually, or have them performed automatically.
For example, the measurement items corresponding to the target site may be measured in the second target region to obtain measurement results, which are then displayed. The measurement items corresponding to the fetal cranium include biparietal diameter, head circumference, transverse cerebellar diameter, posterior fossa width, and the like; those for the fetal heart include ventricular inner diameters, atrial inner diameters, cardiac axis angle, cardiothoracic ratio, foramen ovale diameter, mitral and tricuspid valve inner diameters, aortic valve, ascending aorta inner diameter, pulmonary artery branch inner diameters, aortic isthmus inner diameter, ductus arteriosus inner diameter, pulmonary valve, aortic inner diameter, and the like; those for the fetal abdomen include abdominal circumference, kidney circumference, kidney area, and the like; those for the fetal head and neck include nuchal translucency (NT) and the like; and those for the whole fetal body include crown-rump length and the like. These measurement items may be measured and displayed automatically, or the user may be allowed to measure and display them manually.
In addition to measuring the measurement items, the standard section or the specific target site name corresponding to the second target region may be determined manually or automatically, the posture diagram corresponding to that section displayed, and the section name or target site name annotated on the locally enlarged image. Other functions may also be performed based on the second target region in the locally enlarged image, such as obstetric section quality control, cardiac function analysis, and the like.
Illustratively, the user may be allowed to select the content to be displayed, such as the position and size of the first target region in the first ultrasound image, the position and size of the second target region in the locally enlarged image, the magnification, the translation distance, the measurement item names and results, the annotation content, the posture diagram, and the like. The enlarged and translated locally enlarged image can also be restored to the original first ultrasound image, and the user can manually adjust the magnification and translation distance on the basis of the locally enlarged image.
Further, the first ultrasound image and/or the locally enlarged image may be stored, and storage of all of the displayed content described above is also supported.
In summary, the ultrasonic imaging method 200 for a fetal target site according to the embodiment of the present invention automatically identifies the region corresponding to the fetal target site in the ultrasound image and enlarges and centers that region, assisting the physician in performing the examination quickly, improving obstetric examination efficiency, and helping to alleviate the shortage of sonographers.
Another aspect of the embodiments of the present invention provides an ultrasonic imaging method for a fetal target site, including the following steps:
transmitting ultrasonic waves to a fetus to be detected, and receiving an echo of the ultrasonic waves to obtain an echo signal of the ultrasonic waves;
obtaining a first ultrasonic image of the fetus based on the echo signal and displaying the first ultrasonic image in a display area;
identifying a first target region in the first ultrasound image corresponding to a target site of the fetus, the target site comprising a fetal cranium, a fetal heart, a fetal abdomen, a fetal limb, a fetal head and neck, or a fetal whole body of the fetus;
performing amplification processing or translation processing on the first ultrasonic image based on the first target area to obtain a locally amplified image containing the target part, and displaying the locally amplified image in the display area; the corresponding area of the target part in the local magnified image is a second target area, the magnification processing is used for enabling a second proportion of the second target area in the local magnified image to be larger than a first proportion of the first target area in the first ultrasonic image, and the translation processing is used for enabling the second target area to be located in the middle of the display area.
This embodiment differs from the embodiment shown in fig. 2 in that the first ultrasound image may be subjected to an enlargement process or a translation process. Other related descriptions can be understood by referring to the embodiment of fig. 2, and are not described herein again.
In another aspect, an embodiment of the present invention provides an ultrasound imaging method for a fetal target site, as shown in fig. 4, an ultrasound imaging method 400 for a fetal target site according to another embodiment of the present invention includes the following steps:
in step S410, transmitting an ultrasonic wave to a fetus to be detected, and receiving an echo of the ultrasonic wave to obtain an echo signal of the ultrasonic wave;
in step S420, obtaining a first ultrasound image of the fetus based on the echo signal, and displaying the first ultrasound image in a display area;
in step S430, identifying a first target region in the first ultrasound image corresponding to a target site of the fetus, the target site including a fetal cranium, a fetal heart, a fetal abdomen, a fetal limb, a fetal head and neck, or a fetal whole body of the fetus;
in step S440, transmitting a second ultrasonic wave to the target site based on the first target region, and receiving an echo of the second ultrasonic wave to obtain an echo signal of the second ultrasonic wave;
in step S450, a second ultrasound image corresponding to the first target region is generated according to an echo signal of the second ultrasound wave, and the second ultrasound image is displayed in the display region; the corresponding area of the target part in the second ultrasonic image is a second target area, and the proportion of the second target area in the second ultrasonic image is greater than that of the first target area in the first ultrasonic image.
In the ultrasonic imaging method 200, the enlargement is a back-end zoom, i.e., the already acquired first ultrasound image is enlarged as a whole. In contrast, in the ultrasonic imaging method 400, the enlargement is a front-end zoom, i.e., only the image corresponding to the target site is enlarged: a second ultrasound image is reacquired according to the first target region identified in the first ultrasound image, yielding a second ultrasound image in which the region corresponding to the target site is enlarged.
Illustratively, the second target region is located in a central position of the second ultrasound image, i.e. the second ultrasound image is enlarged and centered in a region corresponding to the target site.
Illustratively, a first proportion of the first target region in the first ultrasound image may be determined; a target value for the second proportion of the second target region in the second ultrasound image is obtained; a magnification is determined from the first proportion and the target value of the second proportion; and the second ultrasonic wave is transmitted according to the determined magnification and its echo received to obtain the echo signal of the second ultrasonic wave, so that the second proportion of the second target region in the second ultrasound image reaches the target value. In some embodiments, the second ultrasound image may instead be acquired according to a preset magnification; the preset magnifications corresponding to different target sites may be the same or different.
Further, front-end and back-end magnification may be combined. Specifically, a second target region may be identified in the second ultrasound image, and the second ultrasound image may be enlarged based on the second target region to obtain a locally enlarged image, which is displayed in the display region. The region of the locally enlarged image corresponding to the target site is a third target region, and the enlargement is performed so that the second proportion of the third target region in the locally enlarged image is greater than the first proportion of the second target region in the second ultrasound image. In this embodiment, if the second ultrasound image is acquired at a first magnification a1 and then enlarged at a second magnification a2, the overall magnification of the third target region relative to the first target region is a1 × a2.
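The combined front-end/back-end factor stated above is multiplicative in the linear zoom factors, and therefore quadratic in the areal proportion. A trivial sketch (function names are illustrative, and the region is assumed to stay fully inside the field of view):

```python
def overall_magnification(a1: float, a2: float) -> float:
    """Overall linear enlargement of the third target region relative to
    the first target region: front-end acquisition magnification a1
    followed by back-end digital magnification a2."""
    return a1 * a2


def proportion_after_zoom(initial_proportion: float, k: float) -> float:
    """Areal proportion occupied by the target region after a linear zoom
    by factor k, assuming the region remains entirely within the image."""
    return initial_proportion * k * k
```

For instance, a front-end magnification of 2 followed by a back-end magnification of 1.5 yields an overall factor of 3, so a region initially occupying 5% of the image would occupy about 45% of the locally enlarged image.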
For example, the method of enlarging the second ultrasound image based on the second target region is similar to the method of enlarging the first ultrasound image based on the first target region in the ultrasound imaging method 200 for a fetal target site; it can be understood with reference to the related description of the embodiment shown in Fig. 2 and is not repeated here. In one example, a first proportion of the second target region in the second ultrasound image is determined, a target value of a second proportion of the third target region in the locally enlarged image is acquired, an enlargement factor is determined according to the first proportion and the target value of the second proportion, and the second ultrasound image is enlarged according to the enlargement factor. In another example, the second ultrasound image may be enlarged at a preset magnification, where the preset magnifications corresponding to different target sites may be the same or different.
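As an illustration of the back-end (digital) enlargement step, the following sketch crops a window centered on the target region's bounding box and resamples it to the full image size. The function name, the bounding-box convention, and the nearest-neighbour resampling are illustrative assumptions, not taken from the specification:

```python
def enlarge_about_region(image, box, factor):
    """Digitally enlarge a 2-D image (list of rows of pixel values) by
    `factor` about the centre of the target region's bounding box
    `box = (y0, x0, y1, x1)`.

    Crops a window of size (h/factor, w/factor) centred on the box,
    clamped to the image borders, then upsamples it back to (h, w) by
    nearest-neighbour sampling.
    """
    h, w = len(image), len(image[0])
    cy = (box[0] + box[2]) / 2.0          # centre of the target region
    cx = (box[1] + box[3]) / 2.0
    ch = max(1, round(h / factor))        # crop-window height
    cw = max(1, round(w / factor))        # crop-window width
    y0 = int(min(max(cy - ch / 2.0, 0), h - ch))   # clamp to borders
    x0 = int(min(max(cx - cw / 2.0, 0), w - cw))
    # Nearest-neighbour upsampling of the crop back to the full size.
    return [[image[y0 + r * ch // h][x0 + c * cw // w] for c in range(w)]
            for r in range(h)]
```

In a real system the resampling would typically use interpolation (e.g., bilinear) rather than nearest-neighbour, and the bounding box would come from the automatic identification of the second target region.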
In summary, in the ultrasound imaging method 400 for a fetal target site according to the embodiment of the present invention, the first target region corresponding to the fetal target site is automatically identified in the first ultrasound image, and the second ultrasound image is acquired according to the first target region, so that the second target region corresponding to the target site is enlarged and centered in the second ultrasound image. This assists the doctor in performing the examination quickly, improves the doctor's obstetric examination efficiency, and alleviates the shortage of ultrasound doctors.
The embodiment of the present invention further provides an ultrasound imaging system for implementing the ultrasound imaging method 200 or the ultrasound imaging method 400 for a fetal target site. Referring back to Fig. 1, the ultrasound imaging system may be implemented as the ultrasound imaging system 100 shown in Fig. 1, which may include an ultrasound probe 110, a transmitting circuit 112, a receiving circuit 114, a processor 116, and a display 118. Optionally, the ultrasound imaging system 100 may further include a transmitting/receiving selection switch 120 and a beam forming module 122, and the transmitting circuit 112 and the receiving circuit 114 may be connected to the ultrasound probe 110 through the transmitting/receiving selection switch 120. The description of each component may refer to the above and is not repeated here.
The transmitting circuit 112 is used to excite the ultrasound probe 110 to transmit ultrasound waves to the target tissue; the receiving circuit 114 is used to control the ultrasound probe 110 to receive the echoes of the ultrasound waves to obtain ultrasound echo signals. The processor 116 may perform the steps of the ultrasound imaging method 200 for a fetal target site to obtain a locally enlarged image and control the display 118 to display the locally enlarged image obtained by the processor 116. The processor 116 may also perform the steps of the ultrasound imaging method 400 for a fetal target site to obtain a second ultrasound image and control the display 118 to display the second ultrasound image obtained by the processor 116.
Only the main functions of the components of the ultrasound imaging system are described above; for further details, reference is made to the related descriptions of the ultrasound imaging method 200 and the ultrasound imaging method 400 for a fetal target site, which are not repeated here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments, not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, et cetera does not indicate any ordering; these words may be interpreted as names.
The above description is only of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.