WO2023108421A1 - Flow velocity detection method, system, and storage medium - Google Patents

Flow velocity detection method, system, and storage medium

Info

Publication number
WO2023108421A1
WO2023108421A1 (PCT/CN2021/137949)
Authority
WO
WIPO (PCT)
Prior art keywords
point
detection point
image data
array elements
group
Prior art date
Application number
PCT/CN2021/137949
Other languages
English (en)
French (fr)
Inventor
郭光第 (Guo Guangdi)
海德奥利弗 (Oliver Heid)
Original Assignee
武汉联影医疗科技有限公司 (Wuhan United Imaging Healthcare Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武汉联影医疗科技有限公司 (Wuhan United Imaging Healthcare Co., Ltd.)
Priority to PCT/CN2021/137949 priority Critical patent/WO2023108421A1/zh
Priority to CN202180007818.4A priority patent/CN114938660A/zh
Priority to EP21967561.8A priority patent/EP4424240A1/en
Priority to US17/935,075 priority patent/US20230186491A1/en
Publication of WO2023108421A1 publication Critical patent/WO2023108421A1/zh


Classifications

    • G01P5/26: Measuring speed of fluids, e.g. of air stream, by measuring the direct influence of the streaming fluid on the properties of a detecting optical wave
    • G01S13/58: Velocity or trajectory determination systems; Sense-of-movement determination systems (radar)
    • G01S15/58: Velocity or trajectory determination systems; Sense-of-movement determination systems (sonar)
    • G01S15/8915: Short-range pulse-echo imaging systems using a static transducer configuration using a transducer array
    • G01S15/8927: Short-range pulse-echo imaging systems using simultaneously or sequentially two or more subarrays or subapertures
    • G01S15/8984: Combined Doppler and pulse-echo imaging systems measuring the velocity vector
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248: Analysis of motion using feature-based methods involving reference images or patches
    • G06T7/269: Analysis of motion using gradient-based methods
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10044: Radar image
    • G06T2207/10132: Ultrasound image
    • G06T2207/20021: Dividing image into blocks, subimages or windows

Definitions

  • This specification relates to the field of flow velocity detection, and in particular to a flow velocity detection method and system.
  • Flow velocity detection determines the moving speed of a target object based on image data of the target object.
  • The image data of the target object can be obtained based on the phase change of the echo data at the same position of the target object at different times.
  • However, such a system can only detect the phase change along the emission direction. To obtain an accurate flow velocity of the target object, the emission deflection angle of the scanning signal must be adjusted, or the position of the scanning probe must be adjusted manually.
  • One aspect of the present specification provides a flow velocity detection method, the method comprising: acquiring image data; determining, based on the image data, a parameter related to a phase change of at least one detection point; and determining a first flow velocity of the at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point, at least one transmitting point, and a plurality of receiving points.
  • One aspect of the present specification provides a flow velocity detection system, the system comprising: at least one storage medium storing at least one set of instructions; and at least one processor configured to communicate with the at least one storage medium, wherein, when executing the at least one set of instructions, the at least one processor causes the system to: acquire image data; determine, based on the image data, a parameter related to a phase change of at least one detection point; and determine a first flow velocity of the at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point, at least one transmitting point, and a plurality of receiving points.
  • One aspect of this specification provides a flow velocity detection system, the system comprising: an image data acquisition module, configured to acquire image data; a parameter determination module, configured to determine, based on the image data, a parameter related to a phase change of at least one detection point; and a first flow velocity determination module, configured to determine a first flow velocity of the at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point, at least one transmitting point, and a plurality of receiving points.
  • Another aspect of the present specification provides a computer-readable storage medium storing computer instructions; when a computer reads the computer instructions in the storage medium, the computer executes the flow velocity detection method.
  • By grouping the array elements and using the positional relationship between the transmitting point (transmitting focus), the receiving points (array elements), and/or the detection points, the two-dimensional flow velocity of each detection point can be determined in combination with the phase change. Compared with a multi-angle emission mode, the phase changes of the reflected signal at multiple angles can be obtained in a single emission, and the flow velocity component of the target object perpendicular to the emission direction can be detected, which improves the system's utilization of data and helps improve the frame rate and the accuracy of velocity estimation. Using the full-aperture emission mode can improve imaging efficiency; at the same time, an unfocused wave mode can be used so that the emitted scanning signals point to the same focus position, which can improve both the signal-to-noise ratio and the frame rate of the system, thereby improving its temporal resolution. Based on the image data, the optical flow method can be used to calculate a second flow velocity of at least one detection point.
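The per-group combination described above amounts to a small linear system: each group of array elements contributes one Doppler-type equation relating its measured phase-change rate to the unknown two-dimensional velocity through that group's combined spatial displacement vector, and stacking two or more groups lets the velocity be solved in a least-squares sense. A minimal sketch; the function name, the toy sensitivity vectors, and the use of `numpy.linalg.lstsq` are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def estimate_2d_velocity(k_vectors, phase_rates):
    """Least-squares 2D velocity from per-group Doppler equations.

    k_vectors:   (G, 2) array; row g is the combined spatial displacement
                 (sensitivity) vector of array-element group g.
    phase_rates: (G,) array; measured phase-change rate for each group.
    Solves k_g . v = phase_rate_g for v = (vx, vz) in a least-squares sense.
    """
    K = np.asarray(k_vectors, dtype=float)
    d = np.asarray(phase_rates, dtype=float)
    v, *_ = np.linalg.lstsq(K, d, rcond=None)
    return v

# Two receive groups observing the same point from orthogonal directions
K = [[1.0, 0.0], [0.0, 1.0]]
print(estimate_2d_velocity(K, [0.3, -0.2]))  # recovers (0.3, -0.2)
```

With only one group (one angle) the system is underdetermined, which mirrors the one-dimensional limitation of conventional Doppler that the grouping scheme is designed to overcome.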
  • Fig. 1 is a schematic diagram of an application scenario of a flow rate detection system according to some embodiments of the present specification
  • Fig. 2 is an exemplary block diagram of a flow rate detection system according to some embodiments of the present specification
  • Fig. 3 is an exemplary flowchart of a method for determining a first flow rate of at least one detection point according to some embodiments of the present specification
  • Fig. 4a is an exemplary schematic diagram of divergent waves shown according to some embodiments of the present specification.
  • Fig. 4b is an exemplary schematic diagram of a plane wave according to some embodiments of the present specification.
  • Figure 5a is an exemplary schematic diagram of full-aperture emission in a diverging wave mode according to some embodiments of the present specification
  • Figure 5b is an exemplary schematic diagram of full-aperture emission in a plane wave mode according to some embodiments of the present specification
  • Fig. 6 is an exemplary schematic diagram of image data shown according to some embodiments of the specification.
  • Fig. 7 is an exemplary flowchart of a method for determining the first flow velocity of at least one detection point based on parameters related to phase change and the positional relationship between at least one detection point, at least one transmitting point, and multiple receiving points, according to some embodiments of the present specification;
  • Fig. 8 is an exemplary flowchart of a second flow rate method for determining at least one detection point according to some embodiments of the present specification
  • Fig. 9 is an exemplary flowchart of a flow rate correction method according to some embodiments of the present specification.
  • The terms "system," "device," "unit," and/or "module" used herein are means for distinguishing different components, elements, parts, sections, or assemblies of different levels.
  • These words may be replaced by other expressions that achieve the same purpose.
  • Fig. 1 is a schematic diagram of an application scenario of a flow velocity detection system according to some embodiments of the present specification.
  • the flow velocity detection system 100 can determine the two-dimensional flow velocity by implementing the methods and/or processes disclosed in this specification.
  • the flow rate detection system 100 may include: a scanning device 110 , a processing device 120 , a terminal device 130 , a network 140 and/or a storage device 150 and the like.
  • scanning device 110 may be connected to processing device 120 via network 140 as shown in FIG. 1 .
  • the scanning device 110 may be directly connected to the processing device 120 (as indicated by the dotted double-headed arrow connecting the scanning device 110 and the processing device 120 ).
  • storage device 150 may be connected to processing device 120 directly or through network 140 .
  • terminal device 130 may be connected to processing device 120 directly (as indicated by a dashed double-headed arrow connecting terminal device 130 and processing device 120 ) and/or via network 140 .
  • the scanning device 110 can scan the target object to obtain scanning data.
  • the scanning device 110 may transmit a signal (for example, transmit ultrasonic waves) to a target object or a part thereof, and receive a reflected signal (for example, reflect ultrasonic waves) of the target object or a part thereof.
  • The scanning device 110 may comprise a scanning probe. The scanning probe may be used to transmit signals and/or receive signals. Scanning probes may include, but are not limited to, ultrasound probes and radar probes.
  • the processing device 120 may process data and/or information obtained from the scanning device 110 , the terminal device 130 and/or the storage device 150 .
  • the processing device 120 may determine the first flow velocity of at least one detection point based on the image data.
  • The processing device 120 may determine the second flow velocity of at least one detection point based on the image data.
  • the processing device 120 may determine the target flow rate of at least one detection point based on the first flow rate and the second flow rate of at least one detection point.
  • the processing device 120 may include a central processing unit (CPU), a digital signal processor (DSP), a system on chip (SoC), a microcontroller unit (MCU), etc. and/or any combination thereof.
  • processing device 120 may include a computer, a user console, a single server or groups of servers, or the like. Server groups can be centralized or distributed. In some embodiments, processing device 120 may be local or remote. For example, the processing device 120 may access information and/or data stored in the scanning device 110 , the terminal device 130 and/or the storage device 150 via the network 140 . For another example, the processing device 120 may directly connect to the scanning device 110, the terminal device 130 and/or the storage device 150 to access stored information and/or data. In some embodiments, the processing device 120 may be implemented on a cloud platform.
  • a cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, inter-cloud, multi-cloud, etc., or any combination thereof.
  • processing device 120 or a portion of processing device 120 may be integrated into scanning device 110 .
  • the terminal device 130 may receive an instruction (for example, an ultrasonic inspection mode), and/or display a flow velocity detection result and/or an image to the user.
  • the terminal device 130 may include a mobile device 131, a tablet computer 132, a notebook computer 133, etc., or any combination thereof. In some embodiments, terminal device 130 may be part of processing device 120 .
  • Network 140 may include any suitable network that facilitates the exchange of information and/or data with flow rate detection system 100 .
  • One or more components of the flow velocity detection system 100 can exchange information and/or data with one or more other components of the flow velocity detection system 100 through the network 140.
  • the processing device 120 may receive a user instruction from a terminal device via a network.
  • the scanning device 110 may obtain the ultrasonic emission parameters from the processing device 120 via the network 140 .
  • Network 140 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN)), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, a router, a hub, a switch, a server computer, and/or any combination thereof.
  • The network 140 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a Near Field Communication (NFC) network, etc., or any combination thereof.
  • network 140 may include one or more network access points.
  • Network 140 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the flow velocity detection system 100 may connect to the network 140 to exchange data and/or information.
  • Storage device 150 may store data, instructions and/or any other information.
  • storage device 150 may store data obtained from scanning device 110 , terminal device 130 and/or processing device 120 .
  • storage device 150 may store data and/or instructions that processing device 120 may execute or use to perform the exemplary methods/systems described herein.
  • the storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof.
  • Exemplary mass storage may include magnetic disks, optical disks, solid state disks, and the like.
  • Exemplary removable storage may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like.
  • Exemplary volatile read-write memory may include random access memory (RAM).
  • Exemplary RAMs may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), etc.
  • Exemplary ROMs may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory, etc.
  • the storage device 150 can be executed on a cloud platform.
  • The cloud platform can include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, etc., or any combination thereof.
  • The storage device 150 may be connected to the network 140 to communicate with one or more other components of the flow velocity detection system 100 (e.g., the scanning device 110, the processing device 120, the terminal device 130). One or more components of the flow velocity detection system 100 may access data or instructions stored in the storage device 150 via the network 140. In some embodiments, the storage device 150 may be directly connected to, or communicate with, one or more other components of the flow velocity detection system 100 (e.g., the scanning device 110, the processing device 120, the terminal device 130). In some embodiments, the storage device 150 may be part of the processing device 120.
  • Fig. 2 is an exemplary block diagram of a flow rate detection system according to some embodiments of the present specification.
  • The processing device 120 may include an image data acquisition module 210, a parameter determination module 220, a first flow velocity determination module 230, a second flow velocity determination module 240, and/or a flow velocity correction module 250.
  • the image data acquisition module 210 can be used to acquire image data.
  • the image data may be data obtained by scanning in B mode.
  • the image data acquisition module 210 may utilize full aperture emission to acquire image data.
  • full aperture transmission may include full aperture transmission in an unfocused wave transmission mode.
  • the image data acquisition module 210 can group array elements of the scanning probe to obtain multiple groups of array elements. Wherein, each group of array elements in the multiple groups of array elements may include one or more array elements.
  • the scanning probe may include any one of a linear array probe, a convex array probe, and/or a phased array probe.
  • the image data may include multiple sets of image data. In some embodiments, each set of image data in the multiple sets of image data may correspond to a set of array elements in the multiple sets of array elements. In some embodiments, a set of image data may be obtained through demodulation and/or beam synthesis based on reflected signals received by a corresponding set of array elements.
  • the parameter determining module 220 may be used to determine a parameter related to a phase change of at least one detection point based on the image data.
  • parameters related to phase change may include a rate of phase change.
  • The parameter determination module 220 may be configured to perform one or more of the following: determine, in the image data, at least two temporally adjacent image data segments received by each group of array elements; and determine, based on the at least two temporally adjacent image data segments, the phase change rate of at least one detection point corresponding to each group of array elements.
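One common way to obtain such a phase change rate from two temporally adjacent complex (IQ) image data segments is the lag-one autocorrelation (Kasai) estimator. The patent does not name a specific estimator, so the sketch below is an assumption, with `prf` standing in for the pulse repetition frequency:

```python
import numpy as np

def phase_change_rate(iq_prev, iq_next, prf=1.0):
    """Phase-change rate between two temporally adjacent complex (IQ)
    image data segments, one estimate per pixel / detection point.

    Lag-one autocorrelation: the angle of conj(prev) * next is the
    per-frame phase shift; multiplying by the pulse repetition
    frequency gives a rate in rad/s.
    """
    r1 = np.conj(np.asarray(iq_prev)) * np.asarray(iq_next)
    return np.angle(r1) * prf

# A point whose echo rotates by 0.1 rad between frames
prev = np.array([1 + 0j])
next_ = prev * np.exp(1j * 0.1)
print(phase_change_rate(prev, next_, prf=1000.0))  # ~100 rad/s
```

Because the angle is taken per pair of frames, the estimate aliases once the per-frame phase shift exceeds ±π, which is the usual Nyquist limit of pulsed Doppler.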
  • the first flow rate determination module 230 can be used to determine the first flow rate of one or more detection points.
  • the first flow velocity determination module 230 may determine the first flow velocity of at least one detection point based on parameters related to phase change, and the positional relationship between at least one detection point, at least one emission point, and multiple reception points.
  • the first flow velocity determining module 230 may determine the first flow velocity of at least one detection point by means of GPU parallel computing.
  • The first flow velocity determination module 230 can perform feature matrix calculations for multiple groups of array elements based on parameters related to phase change and the positional relationship between at least one detection point, at least one transmitting point, and multiple receiving points, and integrate the calculation results of the multiple groups of array elements to obtain the first flow velocity of at least one detection point.
  • The first flow velocity determination module 230 may be configured to perform one or more of the following: determine, based on the positional relationship between at least one detection point, at least one transmitting point, and multiple receiving points, the combined spatial displacement vector corresponding to each group of array elements; determine, based on the combined spatial displacement vector corresponding to each group of array elements, the first feature matrix corresponding to each group of array elements; and determine the first flow velocity of at least one detection point based on the phase change rate of at least one detection point corresponding to each group of array elements and the first feature matrix corresponding to each group of array elements.
  • The first flow velocity determination module 230 may be configured to perform one or more of the following: determine the first auxiliary calculation matrix corresponding to each group of array elements based on the phase change rate of at least one detection point corresponding to each group of array elements and the combined spatial displacement vector corresponding to each group of array elements; determine the second auxiliary calculation matrix corresponding to each group of array elements based on the first feature matrix corresponding to each group of array elements; accumulate the first auxiliary calculation matrices corresponding to the groups of array elements to obtain a third auxiliary calculation matrix; accumulate the second auxiliary calculation matrices corresponding to the groups of array elements to obtain a fourth auxiliary calculation matrix; and determine the first flow velocity of at least one detection point based on the third auxiliary calculation matrix and the fourth auxiliary calculation matrix.
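Read as normal equations, the four auxiliary matrices above can be interpreted, and this reading is our assumption, as the per-group terms k_g * rate_g and outer(k_g, k_g) together with their sums over all groups; the velocity then follows from one small solve. A hedged sketch:

```python
import numpy as np

def accumulate_and_solve(groups):
    """Accumulate per-group auxiliary matrices and solve for the velocity.

    `groups` is an iterable of (k_g, rate_g) pairs, where k_g is the
    combined spatial displacement vector (length 2) of one group of
    array elements and rate_g its measured phase-change rate.
    Interpreting the patent's auxiliary matrices as normal-equation
    terms (an assumption):
      first  ~ k_g * rate_g       -> accumulated into `third`
      second ~ outer(k_g, k_g)    -> accumulated into `fourth`
    The velocity is then the solution of fourth @ v = third.
    """
    third = np.zeros(2)
    fourth = np.zeros((2, 2))
    for k_g, rate_g in groups:
        k_g = np.asarray(k_g, dtype=float)
        third += k_g * rate_g          # first auxiliary matrices, summed
        fourth += np.outer(k_g, k_g)   # second auxiliary matrices, summed
    return np.linalg.solve(fourth, third)

v = accumulate_and_solve([([1.0, 0.0], 0.3), ([0.0, 1.0], -0.2)])
print(v)  # -> velocity (0.3, -0.2)
```

Accumulating per-group terms rather than stacking full systems keeps the per-detection-point state to one 2-vector and one 2x2 matrix, which is what makes the GPU-parallel evaluation mentioned earlier practical.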
  • The first flow velocity determination module 230 may be configured to perform one or more of the following: determine, based on the positional relationship between at least one detection point, at least one transmitting point, and each receiving point among multiple receiving points, the spatial displacement vector corresponding to each receiving point; and determine, based on the spatial displacement vector corresponding to each receiving point and the weight corresponding to each receiving point, a combined spatial displacement vector corresponding to each group of array elements.
  • The first flow velocity determination module 230 can use the weight of each receiving point in each group of array elements to perform a weighted summation of the spatial displacement vectors of the receiving points in that group, so as to obtain the combined spatial displacement vector corresponding to the group of array elements.
  • the weight corresponding to each receiving point may be determined based on the distance between each receiving point and at least one detection point.
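A small geometric sketch of this combination step: for each receiving point, the round-trip phase at a detection point is most sensitive along the sum of the transmit and receive unit directions, and the per-point vectors are then merged by a weighted sum. The inverse-distance weighting and the function name are illustrative assumptions; the patent only states that the weight may depend on the distance between each receiving point and the detection point:

```python
import numpy as np

def combined_displacement_vector(detect_pt, emit_pt, recv_pts):
    """Combined spatial displacement vector for one group of array elements.

    For each receiving point r, the round-trip path length is
    |emit -> detect| + |detect -> r|, so its gradient with respect to the
    detection point is unit(detect - emit) + unit(detect - r); that is the
    per-point spatial displacement (sensitivity) vector.  The per-point
    vectors are combined by a weighted sum with weights inversely
    proportional to the receiver/detection-point distance (one plausible
    distance-based rule, assumed here).
    """
    p = np.asarray(detect_pt, float)
    e_tx = p - np.asarray(emit_pt, float)
    e_tx /= np.linalg.norm(e_tx)
    vecs, weights = [], []
    for r in recv_pts:
        d = np.asarray(r, float) - p
        dist = np.linalg.norm(d)
        vecs.append(e_tx + (-d / dist))  # transmit + receive unit directions
        weights.append(1.0 / dist)
    w = np.asarray(weights) / np.sum(weights)
    return np.sum(w[:, None] * np.asarray(vecs), axis=0)

# Monostatic check: receiver at the emission point doubles the sensitivity
print(combined_displacement_vector([0.0, 0.0], [0.0, -1.0], [[0.0, -1.0]]))
```

In the monostatic case the vector has magnitude 2 along the beam axis, matching the familiar round-trip factor of two in conventional Doppler.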
  • the second flow rate determination module 240 may be used to determine the second flow rate at one or more detection points.
  • The second flow velocity determination module 240 can determine the temporal intensity gradient and/or spatial intensity gradient of at least one detection point based on the image data, and/or determine the second flow velocity of at least one detection point based on the temporal intensity gradient and/or spatial intensity gradient of the at least one detection point.
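The optical flow step can be sketched with a Lucas-Kanade style estimator: brightness constancy gives one linear equation per pixel in the spatial gradients (Ix, Iy) and the temporal gradient It, and the equations over a patch are solved jointly by least squares. The patent only says "optical flow method," so the concrete estimator below is an assumption:

```python
import numpy as np

def optical_flow_patch(frame0, frame1):
    """Second flow velocity (pixels/frame) for a patch via optical flow.

    Lucas-Kanade style: brightness constancy implies
        Ix * vx + Iy * vy + It = 0
    at every pixel of the patch; the stacked equations are solved by
    least squares.
    """
    f0 = np.asarray(frame0, float)
    f1 = np.asarray(frame1, float)
    Iy, Ix = np.gradient(f0)   # spatial intensity gradients (axis 0 = y)
    It = f1 - f0               # temporal intensity gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)

# A ramp image shifted one pixel to the right between frames
y, x = np.mgrid[0:8, 0:8]
print(optical_flow_patch(x.astype(float), x - 1.0))  # ~ [1. 0.]
```

Unlike the phase-based estimate, this intensity-based estimate is not limited by phase aliasing, which is what makes it useful as an independent cross-check in the correction step.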
  • the flow rate correction module 250 can be used to correct the flow rate of one or more detection points to obtain the target flow rate.
  • the flow velocity correction module 250 may perform velocity correction based on the first flow velocity and/or the second flow velocity of at least one detection point, so as to obtain the target flow velocity of at least one detection point.
  • The flow velocity correction module 250 may be configured to perform one or more of the following: determine the difference between the first flow velocity and the second flow velocity of at least one detection point; in response to the difference being less than or equal to a threshold, determine the target flow velocity of the at least one detection point; and, in response to the difference being greater than the threshold, determine the target flow velocities of at least two detection points adjacent to the at least one detection point, and interpolate those target flow velocities to obtain the target flow velocity of the at least one detection point.
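The correction logic can be sketched in one dimension: keep the phase-based estimate where the two estimates agree within a threshold, and replace disagreeing points by interpolating between neighbouring reliable points. Which estimate is kept and the linear interpolation scheme are assumptions for illustration:

```python
import numpy as np

def correct_velocity(first, second, threshold):
    """Target flow velocity per detection point from two estimates.

    Where |first - second| <= threshold, the first (phase-based)
    estimate is trusted; elsewhere the point is treated as unreliable
    and replaced by linear interpolation over the trusted points.
    A simple 1-D sketch of the correction idea.
    """
    first = np.asarray(first, float)
    second = np.asarray(second, float)
    target = first.copy()
    bad = np.abs(first - second) > threshold
    good_idx = np.flatnonzero(~bad)
    # linear interpolation over the reliable detection points
    target[bad] = np.interp(np.flatnonzero(bad), good_idx, first[good_idx])
    return target

print(correct_velocity([1.0, 9.0, 3.0], [1.1, 2.9, 3.1], threshold=0.5))
# the middle point disagrees and is replaced by its neighbours' interpolation
```

This kind of cross-check is one way to catch aliased phase-based estimates, since the intensity-based optical flow estimate does not wrap.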
  • Fig. 3 is an exemplary flowchart of a method for determining a first flow rate of at least one detection point according to some embodiments of the present specification.
  • process 300 may be performed by scanning device 110 and/or processing device 120 .
  • the process 300 may be stored in a storage device (such as the storage device 150 ) in the form of programs or instructions, and the process 300 may be implemented when the flow rate detection system 100 (such as the processing device 120 ) executes the programs or instructions.
  • process 300 may be performed by one or more modules in FIG. 2 . As shown in Figure 3, process 300 may include:
  • image data (eg, image data of a target object) may be acquired.
  • operation 310 may be performed by image data acquisition module 210 .
  • An image can be a carrier that describes the visual information of a target object.
  • Image data may be data used to generate an image.
  • the image data acquisition module 210 can acquire image data through a scanning probe. Specifically, the scanning probe can transmit a scanning signal to a target object or a part thereof, receive a reflected signal from the target object or a part thereof, and obtain image data based on the reflected signal.
  • the target object may be a human body, an organ, a damaged site, a tumor, a body, an object, and the like.
  • the target object may be one or more diseased tissues of the heart, and the image may be a medical image of one or more diseased tissues of the heart.
  • the target object may be an obstacle during flight of an airplane, and the image may be a flight radar map.
  • the target object may be a fluid, and the image may be a flow diagram.
  • the format of the image may include, but is not limited to, the Joint Photographic Experts Group (JPEG) image format, the Tagged Image File Format (TIFF) image format, the Graphics Interchange Format (GIF) image format, the Kodak FlashPix (FPX) image format, the Digital Imaging and Communications in Medicine (DICOM) image format, etc.
  • the types of images may include, but are not limited to, ultrasound images and/or electromagnetic wave images and the like.
  • the scanning probe may include, but is not limited to, one or a combination of ultrasonic probes and/or radar probes.
  • the scanning signal may include, but is not limited to, one or a combination of ultrasonic waves and/or electromagnetic waves.
  • the reflected signal may include, but is not limited to, one or a combination of ultrasonic reflected signals and/or electromagnetic wave reflected signals.
  • the image data corresponding to the ultrasonic image may be data obtained by scanning in B mode.
  • the ultrasound image obtained by B-mode scanning may be a two-dimensional ultrasound image in which the amplitude of the ultrasound reflection signal corresponding to one ultrasound transmission is represented by brightness.
  • a scanning probe may include array elements.
  • An array element may be a component on a scanning probe for transmitting a scanning signal and/or receiving a reflected signal.
  • the scanning probe may include any one or combination of linear array probes, convex array probes and/or phased array probes.
  • the array elements of the linear array probe, the convex array probe, and/or the phased array probe can be arranged in a straight line segment (as shown in Figure 4a), an arc segment (as shown in Figure 4b), and/or a square array (not shown), respectively.
  • the array elements of the scanning probe may include piezoelectric materials, for example, the array elements of the ultrasound probe and/or radar probe may be barium titanate, lead titanate, lead zirconate titanate, and the like.
  • the scanning probe can include array elements of various frequencies and a control circuit corresponding to each array element, and the scanning probe can excite the array elements at different positions through pulse signals to generate scanning signals of different frequencies .
  • the ultrasonic scanning probe can convert electrical pulse signals into ultrasonic signals through the array elements and transmit them to the target object or a part thereof, and can also convert the ultrasonic signals reflected by the target object or a part thereof into electrical signals (i.e., image data).
  • each control circuit can activate one array element.
  • the scanning probe can send each pulse signal to the corresponding control circuit, and each control circuit excites the corresponding array element based on the pulse signal, so as to transmit scanning signals of different or the same frequency at different or the same time.
  • the image data acquisition module 210 can acquire image data by using full array element (also known as full aperture) emission.
  • the full-aperture transmission may be a transmission mode in which all array elements of the scanning probe are used to transmit scanning signals. It can be understood that the image data acquisition module 210 may acquire an image frame corresponding to all detection areas (ie, the target object or a part thereof) based on one full-aperture emission. In some embodiments, the image data acquisition module 210 may acquire an image sequence (or video) corresponding to the detection area (ie, the target object or a part thereof) based on multiple image frames corresponding to multiple full-aperture shots.
  • full aperture transmission may include full aperture transmission in an unfocused wave transmission mode, such as a divergent wave transmission mode and/or a plane wave transmission mode.
  • the diverging wave transmission mode may be a transmission mode in which the focal point is above the scanning probe when transmitting.
  • when the divergent wave is emitted, the focal point A is above the scanning probe, and all array elements a-b on the scanning probe can emit scanning signals.
  • the plane wave emission mode may be an emission mode in which the focal point is at infinity at the time of emission.
  • when the plane wave is emitted, the focal point is at infinity, and all array elements c-d on the scanning probe can emit scanning signals.
  • the image data acquiring module 210 may divide multiple full-aperture shots in the diverging wave shot mode into multiple shot groups, and each shot group may include at least two adjacent full-aperture shots. In some embodiments, the focal positions corresponding to the full aperture shots in each shot group may be the same.
  • For example, the image data acquisition module 210 may divide 40 full-aperture emissions in the divergent wave mode into 20 emission groups, each emission group including two adjacent full-aperture emissions: the first emission group includes the first full-aperture emission and the second full-aperture emission, and the focus positions corresponding to the first and second full-aperture emissions are both the first focal point; the second emission group includes the third full-aperture emission and the fourth full-aperture emission, and the focus positions corresponding to the third and fourth full-aperture emissions are both the second focal point; ...; the 20th emission group includes the 39th full-aperture emission and the 40th full-aperture emission, and the focus positions corresponding to the 39th and 40th full-aperture emissions are both the twentieth focal point.
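  • The 40-emission / 20-group example can be sketched as follows (the helper `group_emissions` is illustrative, not part of the described system):

```python
# Illustrative grouping of full-aperture emissions into emission groups that
# share a focal position, mirroring the 40-shot / 20-group example above.
def group_emissions(n_shots, shots_per_group):
    """Map 1-based shot indices to emission groups with a shared focus index."""
    assert n_shots % shots_per_group == 0
    groups = []
    for g in range(n_shots // shots_per_group):
        shots = list(range(g * shots_per_group + 1, (g + 1) * shots_per_group + 1))
        # All shots in one group use the same focal point (focus index g + 1).
        groups.append({"focus": g + 1, "shots": shots})
    return groups

groups = group_emissions(40, 2)
```
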
  • the image data acquisition module 210 can group the array elements of the scanning probe to obtain multiple array element groups, and each array element group may include one or more array elements.
  • each group of array elements may include the same number of array elements.
  • the image data acquisition module 210 may divide the 128 array elements of the scanning probe into 16 array element groups, and each array element group may include 8 array elements.
  • any two groups of array elements may include different numbers of array elements.
  • the image data acquisition module 210 may reduce the number of array elements in the array element groups corresponding to the non-interest regions, and increase the number of array elements in the array element groups corresponding to the region of interest.
  • the center of the detection area is the region of interest, and the positions on both sides are non-interest regions.
  • the image data acquisition module 210 can divide the 128 array elements of the scanning probe into 12 groups of array elements, and the numbers of array elements included in the 12 groups may be 8, 8, 8, 8, 16, 16, 16, 16, 8, 8, 8, 8 in sequence.
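  • The non-uniform grouping above (denser groups near the region of interest at the center) can be sketched as follows (the helper `split_elements` is illustrative; the group sizes are those of the example):

```python
# Sketch of non-uniform array-element grouping: the listed group sizes place
# larger groups at the center of the aperture, as in the 128-element example.
def split_elements(n_elements, group_sizes):
    """Partition element indices 0..n_elements-1 into consecutive groups."""
    assert sum(group_sizes) == n_elements
    groups, start = [], 0
    for size in group_sizes:
        groups.append(list(range(start, start + size)))
        start += size
    return groups

sizes = [8, 8, 8, 8, 16, 16, 16, 16, 8, 8, 8, 8]  # sums to 128
element_groups = split_elements(128, sizes)
```
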
  • the image data may include multiple sets of image data. Each set of image data in the multiple sets of image data may correspond to a set of array elements in the multiple sets of array elements.
  • the array element group corresponding to each set of image data may be the array element group receiving the reflection signal corresponding to the image data set.
  • For example, the first group of array elements can receive the corresponding reflected signals, and the image data acquisition module 210 can generate a group of image data (for example, the first group of image data) based on the reflected signals received by the first group of array elements; the first group of array elements may be the array element group corresponding to that group of image data.
  • Similarly, the second group of array elements, the third group of array elements, ..., the 16th group of array elements may correspond to the second group of image data, the third group of image data, ..., the 16th group of image data, respectively.
  • each group of image data corresponding to each array element group may correspond to an image area (for example, a part of the image).
  • the image data acquisition module 210 may generate a first image area (for example, an image of a detection area within the range of CK and CE) based on the first group of image data.
  • the image data acquisition module 210 can generate a second image area, a third image area, ... a sixteenth image area based on the second set of image data, the third set of image data, ... the sixteenth set of image data, respectively.
  • the first image area, the second image area, the third image area, ..., the sixteenth image area may differ from each other and together constitute the detection area. It can be understood that the image data acquisition module 210 can spatially divide the image data corresponding to the detection area into multiple groups of image data corresponding to multiple image areas based on the array element grouping.
  • each set of image data may be obtained through demodulation and/or beam synthesis based on reflection signals received by a corresponding set of array elements.
  • Demodulation can be the process of restoring a digital passband signal to a digital baseband signal.
  • Beam synthesis may be a process of weighting and combining multiple reflected signals.
  • the image data acquisition module 210 may perform weighted synthesis on the reflected signals received by two or more array elements in each array element group, so as to determine the group of image data corresponding to the multiple reflected signals received by that array element group.
  • the image data acquisition module 210 can generate the first group of image data, the second group of image data, ..., the 16th group of image data based on the reflected signals received by the first group of array elements, the second group of array elements, ..., the 16th group of array elements, respectively.
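  • Beam synthesis is described above as weighting and combining multiple reflected signals; a minimal sketch with uniform weights assumed (real beamforming also applies per-element delays, which are omitted here):

```python
# Minimal sketch of per-group "beam synthesis" as a weighted combination of
# the reflected signals received by the elements of one group. Uniform
# weights are assumed; per-element focusing delays are omitted.
def synthesize_group(signals, weights=None):
    """signals: list of equal-length sample lists, one per array element."""
    n = len(signals)
    if weights is None:
        weights = [1.0 / n] * n
    length = len(signals[0])
    return [sum(w * s[i] for w, s in zip(weights, signals)) for i in range(length)]

group_data = synthesize_group([[1.0, 2.0], [3.0, 6.0]])
```
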
  • a parameter related to a phase change of at least one detection point is determined based on the image data.
  • operation 320 may be performed by parameter determination module 220 .
  • the detection point may be a spatial point on the detection area (ie, the target object or a part thereof). As shown in Fig. 5a, the detection point may be a spatial point D on the detection area.
  • the parameter related to the phase change may be a parameter characterizing the time-dependent change of the phase of the reflected signal returned from the detection point.
  • parameters related to phase change may include a rate of phase change.
  • the phase change rate may be the change in phase of the reflected signal returned from the detection point per unit time. It can be understood that the reflected signal can be affected by both the direction of the scanning signal and the movement of the detection point; therefore, in order to obtain the moving speed of the detection point based on the phase change rate, the phase change rate should be the change per unit time in the phase of the reflected signals corresponding to scanning signals in the same direction.
  • the direction of the scan signal can be determined based on the location of the transmit focus.
  • the focal points corresponding to the same scan signal may be the same.
  • the phase change rate in the plane wave mode with the focal point at infinity may be the change in phase of the reflected signal corresponding to any two adjacent transmitted scan signals within a unit time.
  • the phase change rate in the divergent wave mode may be a phase change per unit time of the reflected signal corresponding to any two adjacent transmit scan signals in the same transmit group.
  • For example, the phase change rate may be the phase change per unit time of the reflected signals corresponding to the first full-aperture launch and the second full-aperture launch in the first launch group, wherein the focus positions corresponding to the first and second full-aperture launches may both be point C.
  • the parameter determination module 220 may determine at least two temporally adjacent image data segments received by each group of array elements in the image data.
  • the image data segment may be a part of the image data corresponding to each image frame.
  • the processing device may generate an image frame based on the image data corresponding to the scanning signal emitted by the scanning probe, and then generate an image based on multiple image frames.
  • each image frame can be acquired based on the image data corresponding to one scan signal, and each image can be acquired based on 40 image frames.
  • For example, based on the full-aperture transmissions of the first scanning signal, the second scanning signal, ..., the 40th scanning signal, the first image frame, the second image frame, ..., the 40th image frame can be generated respectively; further, based on the transmission sequence of the scanning signals corresponding to the image frames, the first image frame, the second image frame, ..., the 40th image frame can be composited to obtain an image.
  • each group of image data corresponding to each group of array elements can correspond to an image area; further, based on the multiple image areas corresponding to the multiple array element groups, the parameter determination module 220 can divide the image data corresponding to each image frame into a plurality of image data segments.
  • For example, the image data corresponding to the first image frame can be divided, based on the first image area, the second image area, ..., the 16th image area, into the corresponding 1-1st image data segment (i.e., the first image data segment of the first frame image), the 1-2nd image data segment (i.e., the second image data segment of the first frame image), ..., the 1-16th image data segment (i.e., the sixteenth image data segment of the first frame image); ...; the image data corresponding to the 40th image frame can be divided, based on the first image area, the second image area, ..., the 16th image area, into the corresponding 40-1st image data segment (i.e., the first image data segment of the 40th frame image), the 40-2nd image data segment (i.e., the second image data segment of the 40th frame image), ..., the 40-16th image data segment (i.e., the sixteenth image data segment of the 40th frame image).
  • Spatially, each image data segment may correspond to a part of the image data corresponding to each image frame; temporally, it may correspond to a part of each group of image data.
  • the multiple image data segments may first be combined based on their spatial relationship to obtain the multiple image frames, and then the image data of the detection area may be obtained based on the time sequence.
  • For example, based on the 16 image data segments corresponding to the 16 image areas of the first image frame, the image data of the entire detection area corresponding to the first image frame can be obtained; similarly, based on the 16 image data segments corresponding to the 16 image areas of the 40th image frame, the image data of the entire detection area corresponding to the 40th image frame can be obtained; further, based on the 40 image frames (each image frame corresponding to the image data of the entire detection area), the image data of the entire detection area can be obtained.
  • As another example, the first group of image data corresponding to the first image area can be obtained based on the first image data segment of each of the 40 image frames; similarly, the 16th group of image data corresponding to the 16th image area can be obtained based on the 16th image data segment of each of the 40 image frames; further, based on the 16 groups of image data of the 16 image areas, the image data of the entire detection area can be obtained.
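  • The two equivalent reassembly orders above (spatial-first per frame, or temporal-first per image area) can be sketched as follows (illustrative helpers; the segment labels follow the text's frame-area numbering such as "40-16"):

```python
# Segments indexed as (frame, area) can be combined per frame (spatial
# first) or per area (temporal first); both orders cover the same segments.
def by_frame(segments, n_frames, n_areas):
    return [[segments[(f, a)] for a in range(n_areas)] for f in range(n_frames)]

def by_area(segments, n_frames, n_areas):
    return [[segments[(f, a)] for f in range(n_frames)] for a in range(n_areas)]

# Toy data: segment (f, a) holds the label "f-a" used in the text.
segs = {(f, a): f"{f + 1}-{a + 1}" for f in range(40) for a in range(16)}
frames = by_frame(segs, 40, 16)
areas = by_area(segs, 40, 16)
```
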
  • the at least two temporally adjacent image data segments received by each group of array elements may be at least two image data segments received by that group that correspond to sequentially adjacent transmitted scanning signals having the same focus position.
  • the at least two image data segments received by each group of array elements that are adjacent in time and have the same focus position may be two consecutive image data segments corresponding to the scanning signals in the same emission group in the diverging wave emission mode, such as the first image data segment and the second image data segment received by the first group of array elements when the emission focus is C.
  • the at least two image data segments received by each group of array elements that are temporally adjacent and have the same focus position may be multiple consecutive image data segments received by the first group of array elements in the plane wave transmission mode.
  • the parameter determination module 220 may determine the phase change rate of at least one detection point corresponding to each group of array elements based on at least two temporally adjacent image data segments.
  • each set of image data may correspond to a part of the image of the detection area (ie, an image area), and each segment of image data may include image data of at least one detection point.
  • the image data segment (or image data group) corresponding to the first group of array elements may include image data of detection points within the range of CK and CE.
  • the parameter determination module 220 can determine a phase change rate of at least one detection point of each group of array elements based on two temporally adjacent image data segments by formula (1):
  • For example, suppose the image data segments received by the first group of array elements in the first frame and the second frame, based on scanning signals corresponding to the same focal point C, are known; then, over the transmission time interval of the scanning signals corresponding to these two frames, the phase change rate of at least one detection point received by the first group of array elements can be obtained from the two segments.
  • the parameter determination module 220 may determine multiple phase change rates of at least one detection point of each group of array elements based on multiple temporally adjacent image data segments. Specifically, the parameter determination module 220 can first obtain multiple phase change rates through formula (1), based on every two adjacent image data segments among the temporally adjacent image data segments, and then obtain the final phase change rate based on the multiple phase change rates. In some embodiments, the parameter determination module 220 may calculate the average value, weighted average value, variance, etc. of the multiple phase change rates to obtain the final phase change rate.
  • For example, given the second image data segment, the third image data segment, and the fourth image data segment received by the first group of array elements, the parameter determination module 220 can obtain one phase change rate based on the second and third image data segments, obtain another phase change rate based on the third and fourth image data segments, and then calculate the average of the two phase change rates to obtain the phase change rate of at least one detection point of the first group of array elements.
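  • Formula (1) is not reproduced in this text. A common estimator for the phase change rate between two temporally adjacent complex (IQ) data segments is the angle of their lag-one autocorrelation divided by the transmit interval; it is shown here as an assumed stand-in for formula (1), not as the patent's own expression:

```python
import cmath

# Hedged stand-in for formula (1): estimate the phase change rate between two
# temporally adjacent complex (IQ) image data segments via the angle of their
# lag-one autocorrelation, divided by the transmit interval dt.
def phase_change_rate(seg1, seg2, dt):
    acc = sum(s2 * s1.conjugate() for s1, s2 in zip(seg1, seg2))
    return cmath.phase(acc) / dt

# A scatterer whose echo phase advances by 0.2 rad between emissions:
seg1 = [cmath.exp(1j * 0.0), cmath.exp(1j * 0.1)]
seg2 = [cmath.exp(1j * 0.2), cmath.exp(1j * 0.3)]
rate = phase_change_rate(seg1, seg2, dt=1e-4)  # rad/s
```
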
  • the first flow velocity of the at least one detection point may be determined based on the parameters related to the phase change and the positional relationship between the at least one detection point and the at least one transmitting point and multiple receiving points.
  • operation 330 may be performed by the first flow rate determination module 230 .
  • the transmitting point may be any array element in the array element group that transmits the scanning signal to the detection point.
  • For example, the second array element in the third group of array elements that transmits the scanning signal to detection point D may be the emission point 3-2 corresponding to detection point D.
  • the receiving point may be any array element in the array element group that receives the reflected signal from the detection point. As shown in FIG. 5a, the fifth array element in the first group of array elements that receives the reflected signal from detection point D may be the receiving point 1-5 corresponding to detection point D.
  • the first flow velocity may be the flow velocity at the detection point determined based on the phase change of the reflected signal.
  • the first flow velocity determination module 230 can perform characteristic matrix calculation for the multiple groups of array elements based on the parameters related to the phase change and the positional relationship between at least one detection point, at least one emission point, and multiple receiving points, and integrate the calculation results of the multiple groups of array elements to obtain the first flow velocity of at least one detection point.
  • Fig. 7 is an exemplary flowchart of a method for determining the first flow velocity of at least one detection point based on parameters related to the phase change and the positional relationship between at least one detection point, at least one emission point, and multiple receiving points, according to some embodiments of the present specification.
  • process 700 may be performed by scanning device 110 and/or processing device 120 .
  • the process 700 can be stored in a storage device (such as the storage device 150 ) in the form of programs or instructions, and the process 700 can be implemented when the flow rate detection system 100 (such as the processing device 120 ) executes the programs or instructions.
  • process 700 may be performed by first flow rate determination module 230 .
  • process 700 may include one or more of the following operations.
  • a resultant spatial displacement vector corresponding to each group of array elements may be determined based on the positional relationship between at least one detection point, at least one transmitting point, and multiple receiving points.
  • each transmission point can transmit a scanning signal to at least one detection point.
  • each emission point can emit a scanning signal to a detection point in a corresponding emission direction in the detection area.
  • the emission direction of each emission point can be determined based on the corresponding focus position when the emission point emits the scanning signal.
  • the emission direction can be directed from the focus position to the emission point position.
  • For example, the emission direction of emission point 3-2 (the second array element in the third group of array elements) in the divergent wave mode can point from the emission focus C to the detection point D; emission point 3-2 can transmit scanning signals to the detection points in that direction in the detection area (that is, the detection points on the line segment FG).
  • As another example, the emission direction of the emission point in the plane wave mode can be the vertical direction.
  • each receiving point may receive a reflection signal obtained from at least one detection point based on the scanning signal of at least one detection point. Specifically, each receiving point can receive reflected signals from detection points in the corresponding receiving direction in the detection area. In some embodiments, the receiving direction of each receiving point can be determined based on the location of the receiving point and the location of the detection point. For example, the receiving direction may point from the receiving point position to the detection point position.
  • For example, the receiving direction of receiving point 1-5 (the fifth array element in the first group of array elements) can point from the position J of receiving point 1-5 to the detection point D; receiving point 1-5 can receive reflected signals from the detection points in that direction in the detection area (that is, the detection points on the line segment HI).
  • the total spatial displacement vector corresponding to each group of array elements may be the unit flow velocity of at least one detection point detected by each group of array elements, and may represent the flow velocity direction of at least one detection point detected by each group of array elements.
  • the resultant spatial displacement vector corresponding to each group of array elements may be a resultant vector of spatial displacement vectors corresponding to all array elements (ie receiving points) in each group of array elements.
  • the first flow velocity determination module 230 may determine the spatial displacement vector corresponding to each receiving point based on the positional relationship between at least one detection point, at least one transmitting point, and/or each receiving point among the plurality of receiving points.
  • the spatial displacement vector of a receiving point corresponding to a certain detection point may be a combined vector of a unit vector of the detection point in the receiving direction of the receiving point and a unit vector of the detection point in the transmitting direction.
  • For example, the spatial displacement vector of receiving point 1-5 corresponding to detection point D can be the resultant of the unit vector of detection point D in the receiving direction of receiving point 1-5 and the unit vector of detection point D in the emission direction.
  • the first flow velocity determining module 230 may determine a spatial displacement vector corresponding to each receiving point corresponding to a detection point based on formula (2):
  • where x is the position of the detection point, and x_T is the position of the transmitting focus.
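  • Per the textual definition, the spatial displacement vector of a receiving point is the resultant of two unit vectors at the detection point; the following sketch applies that definition under assumed sign conventions (formula (2) itself is not reproduced in this text, and the receiver position name `x_r` is introduced here for illustration):

```python
import math

# Hedged sketch of the per-receiving-point spatial displacement vector: the
# resultant of the unit vector in the emission direction (from the transmit
# focus x_t toward the detection point x) and the unit vector in the
# receiving direction (from the receiving point x_r toward x, per the text's
# definition of receiving direction). These conventions are assumptions.
def unit(vec):
    norm = math.hypot(vec[0], vec[1])
    return (vec[0] / norm, vec[1] / norm)

def displacement_vector(x, x_t, x_r):
    e_tx = unit((x[0] - x_t[0], x[1] - x_t[1]))  # emission direction at x
    e_rx = unit((x[0] - x_r[0], x[1] - x_r[1]))  # receiving direction at x
    return (e_tx[0] + e_rx[0], e_tx[1] + e_rx[1])

# Transmit focus above the probe, receiver on the probe face, point below:
p = displacement_vector(x=(0.0, 30.0), x_t=(0.0, -10.0), x_r=(0.0, 0.0))
```
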
  • the first flow velocity determination module 230 may determine the total spatial displacement vector corresponding to each group of array elements based on the spatial displacement vector corresponding to each receiving point and the weight corresponding to each receiving point .
  • the weight of each receiving point may represent the importance of each receiving point to the at least one detection point. In some embodiments, for different detection points, the weight of each receiving point may be different.
  • the weight of each receiving point may be determined based on the distance between each receiving point and at least one detection point.
  • the weight of each receiving point may be positively correlated with the distance between each receiving point and at least one detection point.
  • the closer a receiving point is to a certain detection point, the smaller the weight of that receiving point relative to the detection point.
  • For example, if the distances from receiving point 1-1, receiving point 1-5, ..., receiving point 2-5 to detection point D decrease in sequence, then the weights of receiving point 1-1, receiving point 1-5, ..., receiving point 2-5 relative to detection point D decrease in sequence.
  • the weight of each receiving point is set to be positively correlated with the distance between each receiving point and at least one detection point, which can improve the resolution of the image.
  • the weight of each receiving point may be negatively correlated with the distance between each receiving point and the at least one detection point.
  • the closer a receiving point is to a certain detection point, the greater the weight of that receiving point relative to the detection point.
  • For example, if the distances from receiving point 1-1, receiving point 1-5, ..., receiving point 2-5 to detection point D decrease in sequence, then the weights of receiving point 1-1, receiving point 1-5, ..., receiving point 2-5 relative to detection point D increase in sequence.
  • the first flow rate determination module 230 may determine the weight of each receiving point based on the reciprocal value of the distance between each receiving point and the detection point.
  • the first flow velocity determining module 230 may use the ratio of the reciprocal value of the distance between each receiving point and the detection point to the sum of the reciprocal values of the distances between all receiving points and the detection point as the weight of each receiving point.
  • For example, if the distances from receiving point 1-1, receiving point 1-2, ..., receiving point 16-8 to detection point D are 20 mm, 25 mm, ..., 50 mm, the reciprocals of the corresponding distances are 0.05, 0.04, ..., 0.02; if the sum of the reciprocals of all the distances is 3.2, the weight of receiving point 1-1 relative to detection point D is 0.05/3.2, the weight of receiving point 1-2 is 0.04/3.2, and so on.
  • the weight of each receiving point is negatively correlated with the distance between each receiving point and at least one detection point, which can reduce image artifacts.
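  • The reciprocal-distance weighting can be sketched as follows (using only the three listed distances rather than all receiving points, so the normalizer here is not the 3.2 of the example):

```python
# Reciprocal-distance weighting as described above: each receiving point's
# weight is the reciprocal of its distance to the detection point, normalized
# by the sum of all reciprocals, so that closer points get larger weights.
def reciprocal_weights(distances):
    inv = [1.0 / d for d in distances]
    total = sum(inv)
    return [v / total for v in inv]

weights = reciprocal_weights([20.0, 25.0, 50.0])  # distances in mm
```
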
  • the first flow velocity determination module 230 can use the weight of each receiving point in each group of array elements to perform a weighted summation of the spatial displacement vectors of the receiving points in that group, so as to obtain the resultant spatial displacement vector corresponding to that group of array elements.
  • the first flow velocity determination module 230 can determine the resultant spatial displacement vector corresponding to each group of array elements through formula (3): p_k = Σ_m w_{k,m}·p_{k,m}, where p_k is the resultant spatial displacement vector of the kth group of array elements, w_{k,m} is the weight of the mth receiving point of the kth group, and p_{k,m} is the spatial displacement vector of the mth receiving point of the kth group.
  • For example, the first flow velocity determination module 230 may determine the resultant spatial displacement vector p_1 corresponding to detection point D of the first group of array elements based on the spatial displacement vectors and weights, corresponding to detection point D, of receiving point 1-1, receiving point 1-2, ..., receiving point 1-16 in the first group of array elements.
  • Similarly, the first flow velocity determination module 230 can determine the resultant spatial displacement vectors of the second group of array elements, the third group of array elements, ..., the 16th group of array elements corresponding to detection point D: p_2, p_3, ..., p_16, respectively.
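  • The weighted summation of formula (3) can be sketched as follows (illustrative two-point input):

```python
# Weighted resultant spatial displacement vector of one element group: a
# weighted sum of the per-receiving-point vectors, per the verbal description
# of formula (3) above.
def group_resultant(vectors, weights):
    px = sum(w * v[0] for w, v in zip(weights, vectors))
    pz = sum(w * v[1] for w, v in zip(weights, vectors))
    return (px, pz)

p1 = group_resultant([(0.0, 2.0), (1.0, 1.0)], [0.5, 0.5])
```
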
  • a first feature matrix corresponding to each group of array elements may be determined based on the resultant spatial displacement vector corresponding to each group of array elements.
  • the first characteristic matrix of each group of array elements can be a matrix obtained based on the components, on the horizontal axis and the vertical axis of the coordinate system of the reflected-signal waveform diagram, of the phase change of the reflected signals corresponding to the two adjacent scanning signals received by that group of array elements.
  • the first flow velocity determination module 230 can determine the components of the unit phase change rate on the X-axis and the Z-axis respectively, based on the resultant spatial displacement vector corresponding to each group of array elements.
  • the X axis and the Z axis may be parallel to the horizontal direction and the vertical direction of the detection area respectively.
  • the first flow velocity determination module 230 may obtain the first characteristic matrix of each group of array elements based on formula (4):
  • ⁇ 0 is the angular frequency of the transmitted pulse
  • c is the speed of the scanning signal
  • where p_kx^T and p_kz^T are respectively the X-axis component and the Z-axis component of the transpose p_k^T of the resultant spatial displacement vector of the kth group of array elements, which can be determined based on the positional relationship between at least one detection point, at least one transmitting point, and multiple receiving points.
  • the first flow velocity of at least one detection point may be determined based on the phase change rate of at least one detection point corresponding to each group of array elements, and/or the first characteristic matrix corresponding to each group of array elements.
  • the relationship between the first flow velocity of at least one detection point and the phase change rate can be expressed by formula (5):
  • v is the first flow velocity of at least one detection point
  • p^T is the transpose of the resultant spatial displacement vector p
  • the first flow velocity v can be decomposed into a component v_x on the horizontal axis and a component v_z on the vertical axis, and the right-hand side of formula (5) can be decomposed, based on formula (6), into the product of the first characteristic matrix and the first flow velocity components.
  • the first flow velocity determination module 230 can determine, based on the phase change rate of at least one detection point corresponding to each group of array elements and/or the first characteristic matrix corresponding to each group of array elements, the first auxiliary calculation matrix of each group of array elements.
  • the first auxiliary calculation matrix may be the product of the transpose of the first characteristic matrix of each group of array elements and the phase change rate: a_k^T b_k .
  • the first flow velocity determination module 230 may determine a second auxiliary calculation matrix corresponding to each group of array elements based on the first characteristic matrix corresponding to each group of array elements.
  • the second auxiliary calculation matrix may be the product of the transpose of the first characteristic matrix of each group of array elements and the first characteristic matrix itself: a_k^T a_k .
  • the first flow velocity determination module 230 may accumulate the first auxiliary calculation matrices corresponding to each group of array elements to obtain a third auxiliary calculation matrix.
  • the third auxiliary calculation matrix can be determined based on formula (8):
  • A^T B = Σ_k a_k^T b_k (8)
  • the first flow velocity determination module 230 may accumulate the second auxiliary calculation matrix corresponding to each group of array elements to obtain a fourth auxiliary calculation matrix.
  • the fourth auxiliary calculation matrix can be determined based on formula (9): A^T A = Σ_k a_k^T a_k (9)
  • the first flow velocity determining module 230 may determine the first flow velocity of at least one detection point based on the third auxiliary calculation matrix and the fourth auxiliary calculation matrix.
  • the first flow velocity can be determined based on formula (10): v = (A^T A)^{-1} A^T B (10)
  • the first flow velocity determining module 230 may determine the first flow velocity of at least one detection point by means of GPU parallel computing.
  • the GPU can calculate in parallel the phase change rate, the combined spatial displacement vector, the first feature matrix, and the first auxiliary calculation matrix corresponding to each group of array elements, thereby improving calculation efficiency.
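The accumulation of formulas (8)-(10) can be sketched as follows in Python with NumPy. The array shapes and variable names are illustrative assumptions, not part of the patent; each per-group pair (a_k, b_k) contributes one normal-equation term, and the final solve recovers the two-dimensional first flow velocity:

```python
import numpy as np

def solve_first_velocity(feature_mats, phase_rates):
    """Accumulate per-group normal equations and solve for the 2-D velocity.

    feature_mats: iterable of (1, 2) first characteristic matrices a_k
    phase_rates:  iterable of scalar phase change rates b_k
    Returns v = (sum a_k^T a_k)^{-1} (sum a_k^T b_k), shape (2,).
    """
    AtA = np.zeros((2, 2))  # fourth auxiliary calculation matrix, formula (9)
    AtB = np.zeros(2)       # third auxiliary calculation matrix, formula (8)
    for a_k, b_k in zip(feature_mats, phase_rates):
        a_k = np.asarray(a_k, dtype=float).reshape(1, 2)
        AtA += a_k.T @ a_k
        AtB += (a_k.T * b_k).ravel()
    return np.linalg.solve(AtA, AtB)  # formula (10)
```

Because each group's a_k^T a_k and a_k^T b_k terms are independent, the loop body is the part that a GPU implementation could evaluate in parallel before the single final solve.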
  • Fig. 8 is an exemplary flowchart of a method for determining a second flow velocity of at least one detection point according to some embodiments of the present specification.
  • process 800 may be performed by scanning device 110 and/or processing device 120 .
  • the process 800 may be stored in a storage device (such as the storage device 150 ) in the form of programs or instructions, and the process 800 may be implemented when the flow rate detection system 100 (such as the processing device 120 ) executes the programs or instructions.
  • process 800 may be performed by second flow rate determination module 240 . As shown in FIG. 8, process 800 may include one or more of the following operations:
  • a temporal intensity gradient and/or a spatial intensity gradient of at least one detection point may be determined based on the image data.
  • the optical flow field can be a projected image of the motion field on a two-dimensional plane. It can be understood that the motion field can be used to describe motion, and the optical flow field can reflect the gray-level distribution of the different projected images in a projection image sequence, thereby transferring the motion field in three-dimensional space onto a two-dimensional plane. Therefore, in an ideal state, the optical flow field can correspond to the motion field.
  • the optical flow may be the instantaneous moving speed of the projection point corresponding to the detection point on the projection image.
  • the optical flow can be represented by the change trend of the gray value of the pixel in the optical flow field.
  • the length and direction of the arrows in the optical flow field can represent the magnitude and direction of the optical flow at each point, respectively.
  • the second flow velocity determination module 240 can obtain the optical flow field based on the image data. Specifically, the second flow velocity determination module 240 may determine all optical flows based on changes in the gray values of all pixels in multiple consecutive image frames, thereby obtaining an optical flow field.
  • the second flow velocity determination module 240 can obtain the temporal intensity gradient and/or the spatial intensity gradient based on the optical flow field.
  • the temporal intensity gradient can be the rate at which the gray value of a pixel in the optical flow field changes with time, and can be represented by the partial derivative of the pixel in the projected image with respect to time (t).
  • the second flow velocity determination module 240 can determine the temporal intensity gradient of at least one detection point based on formula (11): ∂I/∂t ≈ (I(x, z, t_{i+1}) − I(x, z, t_i)) / t (11)
  • t is the inter-frame interval time between the consecutive frames that make up the optical flow field.
  • the spatial intensity gradient can be the gradient of the gray value of a pixel in the optical flow field with respect to position, which can be expressed by the partial derivatives of the pixel in the projected image along the X axis and the Z axis.
  • the second flow velocity determination module 240 may determine the spatial intensity gradient of at least one detection point based on formula (12): ∇I = (∂I/∂x, ∂I/∂z) (12)
  • where I is the optical flow field, x is the unit distance of the optical flow field on the horizontal axis, and z is the unit distance of the optical flow field on the vertical axis.
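A minimal sketch of how the temporal and spatial intensity gradients of formulas (11)-(12) might be computed from a stack of image frames. The frame layout (time, Z, X) and the unit step sizes are illustrative assumptions:

```python
import numpy as np

def intensity_gradients(frames, dt=1.0, dx=1.0, dz=1.0):
    """Finite-difference gradients of a frame stack of shape (T, Z, X).

    Returns (I_t, I_x, I_z): the temporal intensity gradient (formula (11))
    and the spatial intensity gradients along the X and Z axes (formula (12)).
    """
    frames = np.asarray(frames, dtype=float)
    I_t = np.gradient(frames, dt, axis=0)  # change of gray value over time
    I_z = np.gradient(frames, dz, axis=1)  # change along the vertical axis
    I_x = np.gradient(frames, dx, axis=2)  # change along the horizontal axis
    return I_t, I_x, I_z
```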
  • the second flow velocity of at least one detection point may be determined based on the temporal intensity gradient and/or the spatial intensity gradient of the at least one detection point.
  • the second flow velocity may be the flow velocity of the detection point determined based on the time variation and/or spatial variation of the pixel point intensity of the image, and may be represented by the instantaneous velocity of the optical flow.
  • ε represents the second-order infinitesimal term of the Taylor expansion, which can be ignored.
  • the second flow velocity determination module 240 can subtract I(x, z, t) from both sides of formula (14) and then divide by dt to obtain formula (15):
  • the relationship between the second flow velocity, the spatial intensity gradient, and the temporal intensity gradient of at least one detection point can be expressed by formula (16): (∂I/∂x)·v_x + (∂I/∂z)·v_z + ∂I/∂t = 0 (16)
  • formula (16) can be transformed into formula (17):
  • i represents the i-th image frame; ∂I_i/∂x and ∂I_i/∂z respectively represent the components along the X-axis and the Z-axis of the change rate of the optical flow corresponding to the detection point in the i-th image frame, and ∂I_i/∂t represents the temporal rate of change of the optical flow corresponding to the detection point.
  • M may represent the spatial intensity gradient matrix corresponding to the multiple consecutive image frames
  • v may represent the second flow velocity at the detection point
  • N may represent the temporal intensity gradient matrix corresponding to the multiple consecutive image frames.
  • the second flow velocity determination module 240 can solve formula (18) in the least-squares sense to obtain the second flow velocity (19) of the detection point: M v = −N (18), v = −(M^T M)^{-1} M^T N (19)
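The least-squares step above can be sketched as follows. The sign convention, with N holding the stacked temporal intensity gradients, is an assumption kept consistent with the brightness-constancy constraint stated earlier:

```python
import numpy as np

def second_velocity(M, N):
    """Least-squares optical-flow velocity at a detection point.

    M: (n, 2) matrix of spatial intensity gradients [dI/dx, dI/dz]
    N: (n,)  vector of temporal intensity gradients dI/dt
    Solves M v = -N in the least-squares sense:
    v = -(M^T M)^{-1} M^T N.
    """
    M = np.asarray(M, dtype=float)
    N = np.asarray(N, dtype=float)
    return -np.linalg.solve(M.T @ M, M.T @ N)
```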
  • Some embodiments of the present specification use the optical flow method to calculate the second flow velocity of at least one detection point based on the image data, so that the flow velocity of a detection point in the three-dimensional motion field can be converted into the two-dimensional motion field for calculation.
  • Fig. 9 is an exemplary flowchart of a flow rate correction method according to some embodiments of the present specification.
  • process 900 may be performed by scanning device 110 and/or processing device 120 .
  • the process 900 may be stored in a storage device (such as the storage device 150 ) in the form of programs or instructions, and the process 900 may be implemented when the flow rate detection system 100 (such as the processing device 120 ) executes the programs or instructions.
  • process 900 may be performed by flow rate correction module 250 .
  • process 900 may include one or more of the following operations.
  • the first flow velocity can be the flow velocity of the detection point determined based on the phase change of the reflected signal of the detection point;
  • the second flow velocity can be the flow velocity of the detection point determined based on the temporal variation and spatial variation of the pixel intensity of the image.
  • the first flow velocity and the second flow velocity use the same image data and are acquired based on the temporal resolution (phase change) and the spatial resolution (pixel intensity) of the system, respectively, so they can be used for mutual verification and/or correction.
  • a difference between the first flow velocity and the second flow velocity at at least one detection point may be determined.
  • the difference between the first flow velocity and the second flow velocity at the at least one detection point may characterize the difference in speed and direction between the first flow velocity and the second flow velocity at the at least one detection point.
  • the flow velocity correction module 250 may obtain the difference between the first flow velocity and the second flow velocity based on the modulus of the vector difference between them at at least one detection point. For example, if the first flow velocity at detection point Q is v_1 and the second flow velocity is v_2, the speed difference between the first flow velocity and the second flow velocity can be |v_1 − v_2|.
  • a target flow rate of at least one detection point may be determined based on the first flow rate and the second flow rate of the at least one detection point.
  • the threshold value may be a value used to evaluate the magnitude of the difference between the first flow rate and the second flow rate at at least one detection point.
  • the threshold can be manually set in advance.
  • the threshold may include a first threshold and a second threshold.
  • the first threshold may be a value for evaluating the magnitude of the speed difference between the first flow velocity and the second flow velocity at at least one detection point.
  • the second threshold value may be a value for evaluating the magnitude of a difference in direction between the first flow velocity and the second flow velocity at at least one detection point.
  • for example, suppose the first threshold is 5 and the second threshold is 20°. If the speed difference between the first flow velocity and the second flow velocity at detection point Q is 2, and the direction difference between them is 10°, then both differences are within the thresholds, and the target flow velocity of detection point Q can be determined based on the first flow velocity and the second flow velocity.
  • the threshold may also include a third threshold.
  • the third threshold may be a value for evaluating the magnitude of the combined difference in speed and direction between the first flow velocity and the second flow velocity at at least one detection point. Continuing the above example, suppose the third threshold is 6 and the difference between the first flow velocity v_1 and the second flow velocity v_2 at detection point Q is within it; then the magnitude of the target flow velocity of detection point Q can be determined as the average of the magnitudes of the first flow velocity and the second flow velocity, and its direction as that of the resultant vector of the first flow velocity direction and the second flow velocity direction.
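The threshold check and averaging described above can be sketched as follows. The specific thresholds, the averaging rule, and the fall-back behaviour (returning None to signal that neighbour interpolation is needed) are illustrative assumptions:

```python
import numpy as np

def target_velocity(v1, v2, speed_thr=5.0, angle_thr_deg=20.0):
    """Pick the target velocity from the first and second flow velocity estimates.

    If both the speed difference and the angular difference are within
    their thresholds, return the mean of the two vectors; otherwise
    return None, meaning the point should be filled by interpolation.
    """
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    speed_diff = abs(np.linalg.norm(v1) - np.linalg.norm(v2))
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle_diff = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    if speed_diff <= speed_thr and angle_diff <= angle_thr_deg:
        return (v1 + v2) / 2
    return None
```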
  • Operation 930: in response to the difference being greater than a threshold, determine the target flow velocity of at least one adjacent detection point adjacent to the at least one detection point, and interpolate the target flow velocities of the at least one adjacent detection point to obtain the target flow velocity of the at least one detection point.
  • the flow velocity correction module 250 can obtain the target flow velocity of at least one detection point based on the target flow velocities of other adjacent detection points.
  • for example, the target flow velocity at detection point D can be obtained through interpolation.
  • the interpolation may include, but is not limited to, at least one of adaptive interpolation algorithms such as nearest-neighbor interpolation, quadratic interpolation, and cubic interpolation.
  • the flow velocity correction module 250 may select at least one adjacent detection point adjacent to at least one detection point based on different interpolation algorithms.
  • the flow velocity correction module 250 may use the target flow velocity of the adjacent detection point closest to the detection point as the target flow velocity of the detection point based on the nearest neighbor interpolation algorithm.
  • based on the quadratic interpolation algorithm, the flow velocity correction module 250 may select the nearest left and right detection points in the horizontal direction of the detection point, and the nearest upper and lower detection points in the vertical direction, as the adjacent detection points of the detection point. Further, the average of the lateral components of the target flow velocities of the nearest left and right detection points is taken as the horizontal target flow velocity of the detection point; the average of the vertical components of the target flow velocities of the nearest upper and lower detection points is taken as the vertical target flow velocity of the detection point; the target flow velocity of the detection point is then obtained from the horizontal target flow velocity and the vertical target flow velocity.
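The quadratic interpolation scheme above can be sketched as follows. Each argument is assumed to be a 2-vector (vx, vz) holding a neighbour's already-determined target flow velocity:

```python
import numpy as np

def interpolate_target_velocity(left, right, up, down):
    """Neighbour interpolation of a detection point's target flow velocity.

    Horizontal component: mean of the lateral components of the nearest
    left/right neighbours. Vertical component: mean of the vertical
    components of the nearest upper/lower neighbours.
    """
    left, right = np.asarray(left, float), np.asarray(right, float)
    up, down = np.asarray(up, float), np.asarray(down, float)
    vx = (left[0] + right[0]) / 2.0
    vz = (up[1] + down[1]) / 2.0
    return np.array([vx, vz])
```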
  • the possible beneficial effects of the embodiments of this specification include, but are not limited to: (1) grouping the array elements and using the positional relationship among the transmitting point (transmit focus), the receiving points (array elements), and/or the detection point, combined with the phase change, to determine the two-dimensional flow velocity of each detection point; compared with a multi-angle transmission mode, the phase changes of the reflected signals at multiple angles can be obtained in a single transmission, the flow velocity of the target object perpendicular to the transmit direction can be detected, and the system's utilization of the data is improved, which is conducive to improving the system frame rate and the accuracy of velocity estimation; (2) using the full-aperture transmission mode can improve imaging efficiency, and a divergent-wave mode can be adopted so that the transmitted scanning signals point to the same focus position, which can improve the signal-to-noise ratio and increase the frame rate of the system, thereby improving the temporal resolution of the system; (3) based on the image data, the optical flow method is used to calculate the second flow velocity of at least one detection point.
  • aspects of this specification can be illustrated and described through several patentable classes or contexts, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof.
  • various aspects of this specification may be entirely executed by hardware, may be entirely executed by software (including firmware, resident software, microcode, etc.), or may be executed by a combination of hardware and software.
  • the above hardware or software may be referred to as “block”, “module”, “engine”, “unit”, “component” or “system”.
  • aspects of this specification may be embodied as a computer product comprising computer readable program code on one or more computer readable media.
  • a computer storage medium may contain a propagated data signal embodying a computer program code, for example, in baseband or as part of a carrier wave.
  • the propagated signal may have various manifestations, including electromagnetic form, optical form, etc., or a suitable combination.
  • a computer storage medium may be any computer-readable medium, other than a computer-readable storage medium, that can be used to communicate, propagate, or transfer a program for use by, or in connection with, an instruction execution system, apparatus, or device.
  • Program code residing on a computer storage medium may be transmitted over any suitable medium, including radio, electrical cable, fiber optic cable, RF, or the like, or combinations of any of the foregoing.
  • the computer program code required for the operation of each part of this specification can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
  • the program code may run entirely on the user's computer, or as a stand-alone software package, or run partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device.
  • the remote computer can be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, through the Internet), or used in a cloud computing environment, or as a service, such as software as a service (SaaS).
  • numbers describing the quantities of components and attributes are used. It should be understood that such numbers used in the description of the embodiments are, in some examples, qualified by the modifiers "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that the stated figure allows for a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that can vary depending upon the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and adopt the general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as precisely as practicable.


Abstract

A flow velocity detection method, comprising: acquiring image data; determining, based on the image data, a parameter related to a phase change of at least one detection point; and determining a first flow velocity of the at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point and at least one transmitting point and multiple receiving points.

Description

Flow velocity detection method, system, and storage medium. Technical Field
This specification relates to the field of flow velocity detection, and in particular to a flow velocity detection method and system.
Background
Flow velocity detection can detect the motion velocity of a target object based on its image data. The image data of the target object can be acquired based on the phase change of echo data from the same position of the target object at different times. However, due to limitations of the scanning mode, the system can only detect the phase change along the transmit direction; to obtain an accurate flow velocity of the target object, the transmit deflection angle of the scanning signal has to be adjusted, or the position of the scanning probe has to be adjusted manually.
Therefore, it is desirable to provide a flow velocity detection method and system that can automatically detect the flow velocity of a target object in any direction while increasing the imaging frame rate.
Summary
One aspect of this specification provides a flow velocity detection method, the method comprising: acquiring image data; determining, based on the image data, a parameter related to a phase change of at least one detection point; and determining a first flow velocity of the at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point and at least one transmitting point and multiple receiving points.
One aspect of this specification provides a flow velocity detection system, the system comprising: at least one storage medium storing at least one set of instructions; and at least one processor configured to communicate with the at least one storage medium, wherein when the at least one set of instructions is executed, the at least one processor is directed to cause the system to: acquire image data; determine, based on the image data, a parameter related to a phase change of at least one detection point; and determine a first flow velocity of the at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point and at least one transmitting point and multiple receiving points.
One aspect of this specification provides a flow velocity detection system, the system comprising: an image data acquisition module configured to acquire image data; a parameter determination module configured to determine, based on the image data, a parameter related to a phase change of at least one detection point; and a first flow velocity determination module configured to determine a first flow velocity of the at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point and at least one transmitting point and multiple receiving points.
Another aspect of this specification provides a computer-readable storage medium storing computer instructions; after a computer reads the computer instructions in the storage medium, the computer executes the flow velocity detection method.
In some embodiments of this specification, the array elements are grouped, and the positional relationship among the transmitting point (transmit focus), the receiving points (array elements), and/or the detection point is combined with the phase change to determine the two-dimensional flow velocity of each detection point. Compared with a multi-angle transmission mode, the phase changes of the reflected signals at multiple angles can be obtained in a single transmission, so the flow velocity of the target object perpendicular to the transmit direction can be detected, the system's utilization of the data is improved, and the system frame rate and the accuracy of velocity estimation are improved. Using the full-aperture transmission mode can improve imaging efficiency; meanwhile, an unfocused-wave mode can be adopted so that the transmitted scanning signals point to the same focus position, which can improve the signal-to-noise ratio and increase the system frame rate, thereby improving the temporal resolution of the system. Based on the image data, the optical flow method is used to calculate the second flow velocity of at least one detection point, so that the flow velocity of a detection point in the three-dimensional motion field can be converted into the two-dimensional motion field for calculation. The first flow velocity and the second flow velocity are acquired based on the temporal resolution (phase change) and the spatial resolution (pixel intensity) of the system, respectively, so they can verify and/or correct each other. A pixel beamforming method is used, and GPU parallel computing based on the element-grouping mode can improve computational efficiency and reduce hardware and time costs.
Brief Description of the Drawings
This specification is further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not restrictive; in these embodiments, the same reference numeral denotes the same structure, wherein:
Fig. 1 is a schematic diagram of an application scenario of a flow velocity detection system according to some embodiments of this specification;
Fig. 2 is an exemplary block diagram of a flow velocity detection system according to some embodiments of this specification;
Fig. 3 is an exemplary flowchart of a method for determining a first flow velocity of at least one detection point according to some embodiments of this specification;
Fig. 4a is an exemplary schematic diagram of a divergent wave according to some embodiments of this specification;
Fig. 4b is an exemplary schematic diagram of a plane wave according to some embodiments of this specification;
Fig. 5a is an exemplary schematic diagram of full-aperture transmission in divergent-wave mode according to some embodiments of this specification;
Fig. 5b is an exemplary schematic diagram of full-aperture transmission in plane-wave mode according to some embodiments of this specification;
Fig. 6 is an exemplary schematic diagram of image data according to some embodiments of this specification;
Fig. 7 is an exemplary flowchart of a method for determining a first flow velocity of at least one detection point based on a parameter related to a phase change and the positional relationship between at least one detection point and at least one transmitting point and multiple receiving points, according to some embodiments of this specification;
Fig. 8 is an exemplary flowchart of a method for determining a second flow velocity of at least one detection point according to some embodiments of this specification;
Fig. 9 is an exemplary flowchart of a flow velocity correction method according to some embodiments of this specification.
Detailed Description
To illustrate the technical solutions of the embodiments of this specification more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of this specification; for those of ordinary skill in the art, this specification can also be applied to other similar scenarios based on these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numeral in the figures denotes the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used in this specification is a way of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these words may be replaced by other expressions that achieve the same purpose.
As shown in this specification and the claims, unless the context clearly suggests otherwise, the words "a", "an", "one" and/or "the" do not specifically refer to the singular and may also include the plural. In general, the terms "comprise" and "include" only indicate the inclusion of the explicitly identified steps and elements, which do not constitute an exclusive list; the method or device may also include other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by the system according to the embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed precisely in order. Instead, the steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more operations may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of a flow velocity detection system according to some embodiments of this specification.
The flow velocity detection system 100 can determine a two-dimensional flow velocity by implementing the methods and/or processes disclosed in this specification.
As shown in Fig. 1, the flow velocity detection system 100 may include a scanning device 110, a processing device 120, a terminal device 130, a network 140, and/or a storage device 150.
The components of the flow velocity detection system 100 may be connected in one or more of various ways. Merely by way of example, as shown in Fig. 1, the scanning device 110 may be connected to the processing device 120 through the network 140. As another example, the scanning device 110 may be directly connected to the processing device 120 (as indicated by the dashed bidirectional arrow connecting the scanning device 110 and the processing device 120). As a further example, the storage device 150 may be connected to the processing device 120 directly or through the network 140. As a further example, the terminal device 130 may be connected to the processing device 120 directly (as indicated by the dashed bidirectional arrow connecting the terminal device 130 and the processing device 120) and/or through the network 140.
The scanning device 110 may scan a target object to acquire scan data. In some embodiments, the scanning device 110 may transmit a signal (e.g., an ultrasonic wave) to the target object or a part thereof, and receive a reflected signal (e.g., a reflected ultrasonic wave) from the target object or the part thereof. In some embodiments, the scanning device 110 may include a scanning probe. The scanning probe may be used to transmit and/or receive signals, and may include, but is not limited to, an ultrasonic probe, a radar probe, or the like.
The processing device 120 may process data and/or information obtained from the scanning device 110, the terminal device 130, and/or the storage device 150. For example, the processing device 120 may determine a first flow velocity of at least one detection point based on the image data. As another example, the processing device 120 may determine a second flow velocity of the at least one detection point based on the image data. As yet another example, the processing device 120 may determine a target flow velocity of the at least one detection point based on the first flow velocity and the second flow velocity of the at least one detection point. In some embodiments, the processing device 120 may include a central processing unit (CPU), a digital signal processor (DSP), a system on chip (SoC), a microcontroller unit (MCU), etc., and/or any combination thereof. In some embodiments, the processing device 120 may include a computer, a user console, a single server or a server group, etc. The server group may be centralized or distributed. In some embodiments, the processing device 120 may be local or remote. For example, the processing device 120 may access information and/or data stored in the scanning device 110, the terminal device 130, and/or the storage device 150 via the network 140. As another example, the processing device 120 may be directly connected to the scanning device 110, the terminal device 130, and/or the storage device 150 to access the stored information and/or data. In some embodiments, the processing device 120 may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, etc., or any combination thereof. In some embodiments, the processing device 120 or a part thereof may be integrated into the scanning device 110.
The terminal device 130 may receive instructions (e.g., an ultrasonic examination mode) and/or display flow velocity detection results and/or images to a user. The terminal device 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, etc., or any combination thereof. In some embodiments, the terminal device 130 may be a part of the processing device 120.
The network 140 may include any suitable network that facilitates the exchange of information and/or data of the flow velocity detection system 100. In some embodiments, one or more components of the flow velocity detection system 100 (e.g., the scanning device 110, the processing device 120, the storage device 150, the terminal device 130) may communicate information and/or data with one or more other components of the flow velocity detection system 100 through the network 140. For example, the processing device 120 may receive user instructions from the terminal device via the network. As another example, the scanning device 110 may obtain ultrasonic transmission parameters from the processing device 120 via the network 140. The network 140 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN)), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network), a cellular network (e.g., a long term evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof. Merely by way of example, the network 140 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, etc., or any combination thereof. In some embodiments, the network 140 may include one or more network access points. For example, the network 140 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the flow velocity detection system 100 may connect to the network 140 to exchange data and/or information.
The storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the scanning device 110, the terminal device 130, and/or the processing device 120. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 120 may execute or use to perform the exemplary methods/systems described in this application. In some embodiments, the storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. Exemplary mass storage may include magnetic disks, optical disks, solid-state drives, etc. Exemplary removable storage may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tapes, etc. Exemplary volatile read-write memory may include random access memory (RAM). Exemplary RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), etc. Exemplary ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM), digital versatile disk read-only memory, etc. In some embodiments, the storage device 150 may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tier cloud, etc., or any combination thereof.
In some embodiments, the storage device 150 may be connected to the network 140 to communicate with one or more other components of the flow velocity detection system 100 (e.g., the scanning device 110, the processing device 120, the terminal device 130). One or more components of the flow velocity detection system 100 may access the data or instructions stored in the storage device 150 through the network 140. In some embodiments, the storage device 150 may be directly connected to or communicate with one or more other components of the flow velocity detection system 100 (e.g., the scanning device 110, the processing device 120, the terminal device 130). In some embodiments, the storage device 150 may be a part of the processing device 120.
Fig. 2 is an exemplary block diagram of a flow velocity detection system according to some embodiments of this specification.
In some embodiments, the processing device 120 may include an image data acquisition module 210, a parameter determination module 220, a first flow velocity determination module 230, a second flow velocity determination module 240, and/or a flow velocity correction module 250.
The image data acquisition module 210 may be configured to acquire image data. In some embodiments, the image data may be data obtained by scanning in B-mode.
In some embodiments, the image data acquisition module 210 may acquire the image data using full-aperture transmission. In some embodiments, the full-aperture transmission may include full-aperture transmission in an unfocused-wave transmission mode.
In some embodiments, the image data acquisition module 210 may group the array elements of the scanning probe to obtain multiple groups of array elements, each of which may include one or more array elements. In some embodiments, the scanning probe may include any one of a linear array probe, a convex array probe, and/or a phased array probe. In some embodiments, the image data may include multiple groups of image data. In some embodiments, each group of image data among the multiple groups may correspond to one group of array elements among the multiple groups of array elements. In some embodiments, a group of image data may be obtained by demodulating and/or beamforming the reflected signals received by the corresponding group of array elements.
The parameter determination module 220 may be configured to determine, based on the image data, a parameter related to a phase change of at least one detection point. In some embodiments, the parameter related to the phase change may include a phase change rate. In some embodiments, the parameter determination module 220 may be configured to perform one or more of the following: determining at least two temporally adjacent image data segments received by each group of array elements in the image data; and determining, based on the at least two temporally adjacent image data segments, the phase change rate of at least one detection point corresponding to each group of array elements.
The first flow velocity determination module 230 may be configured to determine the first flow velocity of one or more detection points. In some embodiments, the first flow velocity determination module 230 may determine the first flow velocity of at least one detection point based on the parameter related to the phase change and the positional relationship between the at least one detection point and at least one transmitting point and multiple receiving points. In some embodiments, the first flow velocity determination module 230 may determine the first flow velocity of the at least one detection point by means of GPU parallel computing.
In some embodiments, the first flow velocity determination module 230 may, based on the parameter related to the phase change and the positional relationship between the at least one detection point and at least one transmitting point and multiple receiving points, calculate characteristic matrices for the multiple groups of array elements respectively, and integrate the calculation results of the multiple groups of array elements to obtain the first flow velocity of the at least one detection point.
In some embodiments, the first flow velocity determination module 230 may be configured to perform one or more of the following: determining, based on the positional relationship between the at least one detection point and at least one transmitting point and multiple receiving points, the resultant spatial displacement vector corresponding to each group of array elements; determining, based on the resultant spatial displacement vector corresponding to each group of array elements, the first characteristic matrix corresponding to each group of array elements; and determining the first flow velocity of the at least one detection point based on the phase change rate of the at least one detection point corresponding to each group of array elements and the first characteristic matrix corresponding to each group of array elements.
In some embodiments, the first flow velocity determination module 230 may be configured to perform one or more of the following: determining, based on the phase change rate of the at least one detection point corresponding to each group of array elements and the resultant spatial displacement vector corresponding to each group of array elements, the first auxiliary calculation matrix corresponding to each group of array elements; determining, based on the first characteristic matrix corresponding to each group of array elements, the second auxiliary calculation matrix corresponding to each group of array elements; accumulating the first auxiliary calculation matrices corresponding to the groups of array elements to obtain a third auxiliary calculation matrix; accumulating the second auxiliary calculation matrices corresponding to the groups of array elements to obtain a fourth auxiliary calculation matrix; and determining the first flow velocity of the at least one detection point based on the third auxiliary calculation matrix and the fourth auxiliary calculation matrix.
In some embodiments, the first flow velocity determination module 230 may be configured to perform one or more of the following: determining, based on the positional relationship among the at least one detection point, the at least one transmitting point, and each of the multiple receiving points, the spatial displacement vector corresponding to each receiving point; and determining, based on the spatial displacement vector corresponding to each receiving point and the weight corresponding to each receiving point, the resultant spatial displacement vector corresponding to each group of array elements. In some implementations, the first flow velocity determination module 230 may use the weight of each receiving point in each group of array elements to perform a weighted summation of the spatial displacement vectors of the receiving points in the group, so as to obtain the resultant spatial displacement vector corresponding to the group of array elements. In some embodiments, the weight corresponding to each receiving point may be determined based on the distance between the receiving point and the at least one detection point.
The second flow velocity determination module 240 may be configured to determine the second flow velocity of one or more detection points. In some embodiments, the second flow velocity determination module 240 may determine a temporal intensity gradient and/or a spatial intensity gradient of at least one detection point based on the image data, and/or determine the second flow velocity of the at least one detection point based on the temporal intensity gradient and/or spatial intensity gradient of the at least one detection point.
The flow velocity correction module 250 may be configured to correct the flow velocity of one or more detection points to obtain a target flow velocity. In some embodiments, the flow velocity correction module 250 may perform velocity correction based on the first flow velocity and/or the second flow velocity of at least one detection point to obtain the target flow velocity of the at least one detection point. In some embodiments, the flow velocity correction module 250 may be configured to perform one or more of the following: determining the difference between the first flow velocity and the second flow velocity of at least one detection point; in response to the difference being not greater than a threshold, determining the target flow velocity of the at least one detection point based on its first flow velocity and second flow velocity; and in response to the difference being greater than the threshold, determining the target flow velocities of at least one adjacent detection point adjacent to the at least one detection point, and interpolating the target flow velocities of the adjacent detection points to obtain the target flow velocity of the at least one detection point.
Fig. 3 is an exemplary flowchart of a method for determining a first flow velocity of at least one detection point according to some embodiments of this specification.
In some embodiments, the process 300 may be performed by the scanning device 110 and/or the processing device 120. In some embodiments, the process 300 may be stored in a storage apparatus (e.g., the storage device 150) in the form of programs or instructions, and the process 300 may be implemented when the flow velocity detection system 100 (e.g., the processing device 120) executes the programs or instructions. In some embodiments, the process 300 may be performed by one or more of the modules in Fig. 2. As shown in Fig. 3, the process 300 may include:
Operation 310: acquire image data (e.g., image data of a target object). In some embodiments, operation 310 may be performed by the image data acquisition module 210.
An image may be a carrier describing the visual information of a target object. Image data may be data used to generate an image. In some embodiments, the image data acquisition module 210 may acquire the image data through a scanning probe. Specifically, the scanning probe may transmit a scanning signal to a target object or a part thereof, receive a reflected signal from the target object or the part thereof, and then acquire the image data based on the reflected signal.
In some embodiments, the target object may be a human body, an organ, an injured part, a tumor, an organism, an object, etc. For example, the target object may be one or more diseased tissues of the heart, and the image may be a medical image of the one or more diseased tissues of the heart. As another example, the target object may be an obstacle during the flight of an aircraft, and the image may be a flight radar map. As yet another example, the target object may be a fluid, and the image may be a flow pattern map.
In some embodiments, the format of the image may include, but is not limited to, the Joint Photographic Experts Group (JPEG) image format, the Tagged Image File Format (TIFF), the Graphics Interchange Format (GIF), the Kodak Flash PiX (FPX) image format, the Digital Imaging and Communications in Medicine (DICOM) image format, etc.
In some embodiments, the type of the image may include, but is not limited to, an ultrasonic image and/or an electromagnetic wave image, etc. Correspondingly, the scanning probe may include, but is not limited to, one or a combination of an ultrasonic probe and/or a radar probe, etc. In some embodiments, the scanning signal may include, but is not limited to, one or a combination of an ultrasonic wave and/or an electromagnetic wave, etc. In some embodiments, the reflected signal may include, but is not limited to, one or a combination of a reflected ultrasonic signal and/or a reflected electromagnetic signal, etc.
In some embodiments, the image data corresponding to an ultrasonic image may be data obtained by scanning in B-mode. In some embodiments, the ultrasonic image acquired by B-mode scanning may be a two-dimensional ultrasonic image in which the amplitude of the reflected ultrasonic signal corresponding to one ultrasonic transmission is represented by brightness.
In some embodiments, the scanning probe may include array elements. An array element may be a component on the scanning probe used to transmit scanning signals and/or receive reflected signals.
In some embodiments, based on the arrangement of the array elements on the scanning probe, the scanning probe may include any one or a combination of a linear array probe, a convex array probe, and/or a phased array probe. The array elements of a linear array probe, a convex array probe, and/or a phased array probe may be arranged in a straight line segment (Fig. 4a), an arc segment (Fig. 4b), and/or a square matrix (not shown), respectively.
In some embodiments, the array elements of the scanning probe may include piezoelectric materials; for example, the array elements of an ultrasonic probe and/or a radar probe may be barium titanate, lead titanate, lead zirconate titanate, etc.
In some embodiments, the scanning probe may include array elements of multiple frequencies and a control circuit corresponding to each array element; the scanning probe may excite array elements at different positions with pulse signals to generate scanning signals of different frequencies.
For example, an ultrasonic scanning probe may convert electrical signal pulses into ultrasonic signals through the array elements for transmission to a target object or a part thereof, and may also convert reflected ultrasonic signals from the target object or the part thereof into electrical signals (i.e., image data).
In some embodiments, each control circuit may excite one array element. Specifically, the scanning probe may send each pulse signal to the corresponding control circuit, and each control circuit excites the corresponding array element based on the pulse signal, thereby transmitting scanning signals of different or the same frequencies at different or the same times.
In some embodiments, the image data acquisition module 210 may acquire the image data using full-element (also called full-aperture) transmission.
Full-aperture transmission may be a transmission mode in which all array elements of the scanning probe transmit the scanning signal. It can be understood that the image data acquisition module 210 may acquire one image frame corresponding to the entire detection area (i.e., the target object or a part thereof) based on one full-aperture transmission. In some embodiments, the image data acquisition module 210 may acquire an image sequence (or video) corresponding to the detection area (i.e., the target object or a part thereof) based on multiple image frames corresponding to multiple full-aperture transmissions.
In some embodiments, the full-aperture transmission may include full-aperture transmission in an unfocused-wave transmission mode (e.g., a divergent-wave transmission mode and/or a plane-wave transmission mode).
The divergent-wave transmission mode may be a transmission mode in which the focus is above the scanning probe during transmission. As shown in Fig. 4a, when a divergent wave is transmitted, the focus A is above the scanning probe, and all array elements a-b on the scanning probe can transmit the scanning signal. The plane-wave transmission mode may be a transmission mode in which the focus is at infinity during transmission. As shown in Fig. 4b, when a plane wave is transmitted, the focus is at infinity, and all array elements c-d on the scanning probe can transmit the scanning signal.
In some embodiments, the image data acquisition module 210 may divide multiple full-aperture transmissions in the divergent-wave transmission mode into multiple transmission groups, and each transmission group may include at least two adjacent full-aperture transmissions. In some embodiments, the focus positions corresponding to the full-aperture transmissions in each transmission group may be the same.
For example, the image data acquisition module 210 may divide 40 full-aperture transmissions in the divergent-wave mode into 20 transmission groups, each containing two adjacent full-aperture transmissions: the first transmission group includes the 1st and 2nd full-aperture transmissions, and the focus positions corresponding to the 1st and 2nd full-aperture transmissions are both the first focus; the second transmission group includes the 3rd and 4th full-aperture transmissions, and the focus positions corresponding to the 3rd and 4th full-aperture transmissions are both the second focus; ...; the twentieth transmission group includes the 39th and 40th full-aperture transmissions, and the focus positions corresponding to the 39th and 40th full-aperture transmissions are both the twentieth focus.
In some embodiments, the image data acquisition module 210 may group the array elements of the scanning probe to obtain multiple groups of array elements, each of which may include one or more array elements.
In some embodiments, each group of array elements may include the same number of array elements. For example, as shown in Fig. 5a, the image data acquisition module 210 may divide the 128 array elements of the scanning probe into 16 groups, each including 8 array elements.
In some embodiments, any two groups of array elements may include different numbers of array elements. For example, the image data acquisition module 210 may reduce the number of array elements in the groups corresponding to regions of non-interest, and increase the number of array elements in the groups corresponding to the region of interest. For example, if the center of the detection area is the region of interest and the two sides are regions of non-interest, the image data acquisition module 210 may divide the 128 array elements of the scanning probe into 12 groups whose numbers of array elements are, in order, 8, 8, 8, 8, 16, 16, 16, 16, 8, 8, 8, 8.
In some embodiments, the image data may include multiple groups of image data, each of which may correspond to one group of array elements among the multiple groups of array elements.
In some embodiments, the group of array elements corresponding to each group of image data may be the group that receives the reflected signals corresponding to that group of image data. As shown in Fig. 5a, based on the scanning signal transmitted by the 3rd group of array elements, the 1st group of array elements may receive the corresponding reflected signals, and the image data acquisition module 210 may generate a group of image data (e.g., the 1st group of image data) based on the reflected signals received by the 1st group of array elements; the 1st group of array elements is then the group corresponding to that group of image data. Similarly, the 2nd group, the 3rd group, ..., and the 16th group of array elements may correspond to the 2nd group, the 3rd group, ..., and the 16th group of image data, respectively.
In some embodiments, each group of image data corresponding to each group of array elements may correspond to one image region (e.g., a part of the image). As shown in Fig. 5a, the image data acquisition module 210 may generate the 1st image region (e.g., the image of the detection area within the range of CK and CE) based on the 1st group of image data. Similarly, the image data acquisition module 210 may generate the 2nd, 3rd, ..., and 16th image regions based on the 2nd, 3rd, ..., and 16th groups of image data, respectively. In some embodiments, the 1st, 2nd, 3rd, ..., and 16th image regions may be different from one another and together make up the detection area. It can be understood that, based on the grouping of array elements, the image data acquisition module 210 can spatially divide the image data corresponding to the detection area into multiple groups of image data corresponding to multiple image regions.
In some embodiments, each group of image data may be obtained by demodulating and/or beamforming the reflected signals received by the corresponding group of array elements.
Demodulation may be the process of restoring a digital band-pass signal to a digital baseband signal.
Beamforming may be the process of weighted synthesis of multiple reflected signals. In some embodiments, the image data acquisition module 210 may perform weighted synthesis on the reflected signals received by two or more array elements in each group, thereby determining the image data group corresponding to the multiple reflected signals received by that group of array elements.
For example, the image data acquisition module 210 may generate the 1st group, the 2nd group, ..., and the 16th group of image data based on the reflected signals received by the 1st group, the 2nd group, ..., and the 16th group of array elements, respectively.
Operation 320: determine, based on the image data, a parameter related to a phase change of at least one detection point. In some embodiments, operation 320 may be performed by the parameter determination module 220.
A detection point may be a spatial point on the detection area (i.e., the target object or a part thereof). As shown in Fig. 5a, a detection point may be the spatial point D on the detection area.
The parameter related to the phase change may be a parameter characterizing how the phase of the reflected signal returned from the detection point changes with time. In some embodiments, the parameter related to the phase change may include a phase change rate.
The phase change rate may be the change per unit time of the phase of the reflected signal returned from the detection point. It can be understood that the reflected signal can be affected by the direction of the scanning signal and/or the motion of the detection point; therefore, to obtain the motion velocity of the detection point based on the phase change rate, the phase change rate may be the change per unit time of the phase of the reflected signals corresponding to scanning signals in the same direction. In some embodiments, the direction of the scanning signal may be determined based on the position of the transmit focus. In some embodiments, identical scanning signals may correspond to the same focus.
For example, the phase change rate in the plane-wave mode, where the focus is at infinity, may be the change per unit time of the phase of the reflected signals corresponding to any two adjacent transmitted scanning signals.
As another example, the phase change rate in the divergent-wave mode may be the change per unit time of the phase of the reflected signals corresponding to any two adjacent transmitted scanning signals in the same transmission group. For instance, the phase change rate may be the change per unit time of the phase of the reflected signals corresponding to the 1st and 2nd full-aperture transmissions in the first transmission group, where the focus positions corresponding to the 1st and 2nd full-aperture transmissions may both be point C.
In some embodiments, the parameter determination module 220 may determine at least two temporally adjacent image data segments received by each group of array elements in the image data.
An image data segment may be a part of the image data corresponding to each image frame.
In some embodiments, the processing device may generate one image frame based on the image data corresponding to a scanning signal transmitted by the scanning probe, and then generate an image based on multiple image frames. For example, in the full-aperture transmission mode of the scanning probe, each image frame may be acquired based on the image data corresponding to one scanning signal, and each image may be acquired based on 40 image frames. For example, in the full-aperture transmission mode, the 1st, 2nd, ..., and 40th image frames may be generated based on the 1st, 2nd, ..., and 40th scanning signals, respectively; further, the image may be acquired by compounding the 1st, 2nd, ..., and 40th image frames based on the transmission order of the scanning signals corresponding to the image frames.
As mentioned above, each group of image data corresponding to each group of array elements may correspond to one image region; further, based on the multiple image regions corresponding to the multiple groups of array elements, the parameter determination module 220 may divide the image data corresponding to each image frame into multiple image data segments. As shown in Fig. 6, the image data corresponding to the 1st image frame may be divided, based on the 1st, 2nd, ..., and 16th image regions, into the corresponding image data segments 1-1 (i.e., the first image data segment of the 1st frame), 1-2 (i.e., the second image data segment of the 1st frame), ..., and 1-16 (i.e., the sixteenth image data segment of the 1st frame); ...; the image data corresponding to the 40th image frame may be divided, based on the 1st, 2nd, ..., and 16th image regions, into the corresponding image data segments 40-1 (i.e., the first image data segment of the 40th frame), 40-2 (i.e., the second image data segment of the 40th frame), ..., and 40-16 (i.e., the sixteenth image data segment of the 40th frame).
It can be understood that each image data segment may spatially correspond to a part of the image data of each image frame, and temporally correspond to a part of each image data group.
In some embodiments, multiple image data segments may first be compounded based on their spatial relationship to obtain multiple image frames, and the image data of the detection area may then be obtained based on the temporal order. As shown in Fig. 6, the image data of the detection area corresponding to the 1st image frame may be obtained based on the image data of the 16 image regions of the 1st image frame; similarly, the image data of the entire detection area corresponding to the 40th image frame may be obtained based on the image data of the 16 image regions of the 40th image frame; further, the image data of the entire detection area may be obtained based on the 40 image frames (each corresponding to the image data of the entire detection area).
In some embodiments, multiple frames of image data segments may alternatively first be compounded based on their temporal relationship to obtain multiple groups of image data, and the image data of the detection area may then be obtained based on the spatial relationship. As shown in Fig. 6, the 1st group of image data corresponding to the 1st image region may be obtained based on the first image data segment of each of the 40 image frames corresponding to the 1st image region; similarly, the 16th group of image data corresponding to the 16th image region may be obtained based on the sixteenth image data segment of each of the 40 image frames corresponding to the 16th image region; further, the image data of the entire detection area may be obtained based on the 16 groups of image data of the 16 image regions.
在一些实施例中,每组阵元接收的在时间上相邻的至少两个图像数据段可以是每组阵元接收的发射扫查信号的顺序相邻且焦点位置相同的至少两个图像数据段。例如,每组阵元接收的在时间上相邻且焦点位置相同的至少两个图像数据段可以是发散波发射模式下同一发射组内的扫查信号对应的连续两个图像数据段,如,发射焦点为C时的第1组阵元接收的第1个图像数据段 $s_1^{(1)}$ 和第2个图像数据段 $s_1^{(2)}$。又例如,每组阵元接收的在时间上相邻且焦点位置相同的至少两个图像数据段可以是平面波发射模式下连续的多个图像数据段,如第1组阵元接收的第2个图像数据段 $s_1^{(2)}$、第3个图像数据段 $s_1^{(3)}$ 和第4个图像数据段 $s_1^{(4)}$。
在一些实施例中,参数确定模块220可以基于在时间上相邻的至少两个图像数据段,确定对应于每组阵元的至少一个检测点的相位变化速率。
如前所述,每组图像数据可以对应检测区域的图像的一部分(即一个图像区域),则每个图像数据段可以包括至少一个检测点的图像数据。如图5a所示,第1组阵元对应的图像数据段(或图像数据组)可以包括CK和CE范围内的检测点的图像数据。
在一些实施例中,参数确定模块220可以基于在时间上相邻的两个图像数据段,通过公式(1)确定每组阵元的至少一个检测点的一个相位变化速率:

$$\dot{\varphi}_k=\frac{\varphi\!\left(s_k^{(i+1)}\right)-\varphi\!\left(s_k^{(i)}\right)}{t_k^{(i+1)}-t_k^{(i)}}\tag{1}$$

其中,$\varphi(\cdot)$ 表示取相位,$\dot{\varphi}_k$ 表示第k组阵元接收的至少一个检测点的相位变化速率,$s_k^{(i+1)}$ 和 $s_k^{(i)}$ 分别表示第k组阵元基于相同焦点对应的扫查信号接收的第i+1个和第i个图像数据段,$t_k^{(i+1)}$ 和 $t_k^{(i)}$ 分别表示第k组阵元接收的第i+1个和第i个图像数据段对应的扫查信号的发射时间。

例如,第1组阵元基于相同焦点C对应的扫查信号接收的第2个和第1个图像数据段分别为 $s_1^{(2)}$ 和 $s_1^{(1)}$,则在第1组阵元接收的第2个和第1个图像数据段对应的扫查信号的发射时间间隔期间,第1组阵元接收的至少一个检测点的相位变化速率为 $\dot{\varphi}_1=\left(\varphi\!\left(s_1^{(2)}\right)-\varphi\!\left(s_1^{(1)}\right)\right)/\left(t_1^{(2)}-t_1^{(1)}\right)$。
类似地,参数确定模块220可以基于在时间上相邻的多个图像数据段,确定每组阵元的至少一个检测点的多个相位变化速率。具体地,参数确定模块220可以先基于时间上相邻的多个图像数据段中任意相邻的两个图像数据段,通过公式(1)获取对应的多个相位变化速率,再基于多个相位变化速率,获取最终的一个相位变化速率。在一些实施例中,参数确定模块220可以计算多个相位变化速率的平均值、加权平均值和方差等,以获取最终的一个相位变化速率。

例如,对于第1组阵元基于相同发射焦点的扫查信号接收的第2个图像数据段 $s_1^{(2)}$、第3个图像数据段 $s_1^{(3)}$ 和第4个图像数据段 $s_1^{(4)}$,参数确定模块220可以基于第2个图像数据段 $s_1^{(2)}$ 和第3个图像数据段 $s_1^{(3)}$ 获取相位变化速率 $\dot{\varphi}_1^{(2,3)}$,基于第3个图像数据段 $s_1^{(3)}$ 和第4个图像数据段 $s_1^{(4)}$ 获取相位变化速率 $\dot{\varphi}_1^{(3,4)}$,再计算相位变化速率 $\dot{\varphi}_1^{(2,3)}$ 和相位变化速率 $\dot{\varphi}_1^{(3,4)}$ 的平均值,获取第1组阵元的至少一个检测点的相位变化速率 $\dot{\varphi}_1$。
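下面给出一个基于复数IQ数据段计算并平均相位变化速率的示意代码。其中发射间隔 prt、数据段长度与多普勒速率均为假设的演示数值;用 angle(s_next·conj(s_prev)) 求相邻数据段的相位差只是一种常见的实现方式,并非本说明书限定的计算形式:

```python
import numpy as np

prt = 1e-4                                 # 假设的相邻发射时间间隔(秒)
doppler_rate = 2 * np.pi * 500.0           # 构造的真实相位变化速率(rad/s),仅作演示

# 构造第1组阵元接收的第2、3、4个复数图像数据段(每段16个采样点)
s2 = np.exp(1j * 0.3) * np.ones(16)
s3 = s2 * np.exp(1j * doppler_rate * prt)
s4 = s3 * np.exp(1j * doppler_rate * prt)

def phase_rate(s_next, s_prev, dt):
    # 相邻两个图像数据段逐点的相位变化速率,见公式(1)
    return np.angle(s_next * np.conj(s_prev)) / dt

r23 = phase_rate(s3, s2, prt)              # 第2、3段之间的相位变化速率
r34 = phase_rate(s4, s3, prt)              # 第3、4段之间的相位变化速率
rate = (r23 + r34) / 2                     # 取平均得到最终的一个相位变化速率

assert np.allclose(rate, doppler_rate)
```

只要相邻发射间的相位变化小于 π(不发生相位缠绕),该平均值即可恢复构造时的相位变化速率。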
操作330,可以基于与相位变化有关的参数,以及至少一个检测点与至少一个发射点、多个接收点的位置关系,确定至少一个检测点的第一流速。
在一些实施例中,操作330可以由第一流速确定模块230执行。
发射点可以是向检测点发射扫查信号的阵元组中的任意阵元。如图5a所示,向检测点D发射扫查信号的第3组阵元中的第2个阵元可以是检测点D对应的发射点3-2。
接收点可以是从检测点接收反射信号的阵元组中的任意阵元。如图5a所示,从检测点D接收反射信号的第1组阵元中的第5个阵元可以是检测点D对应的接收点1-5。
第一流速可以是基于反射信号的相位变化确定的检测点的流动速度。
在一些实施例中,第一流速确定模块230可以基于与相位变化有关的参数,以及至少一个检测点与至少一个发射点、多个接收点的位置关系,针对多组阵元分别进行特征矩阵的计算,并将多组阵元的计算结果进行整合,以获得至少一个检测点的第一流速。
关于获取第一流速的详细描述可以参见图7及其相关描述,在此不再赘述。
图7是根据本说明书的一些实施例所示的基于与相位变化有关的参数,以及至少一个检测点与至少一个发射点、多个接收点的位置关系,确定至少一个检测点的第一流速方法的示例性流程图。在一些实施例中,过程700可以由扫查设备110和/或处理设备120来执行。在一些实施例中,过程700可以以程序或指令的形式存储在存储装置(如存储设备150)中,当流速检测系统100(如处理设备120)执行该程序或指令时,可以实现过程700。在一些实施例中,过程700可以由第一流速确定模块230执行。如图7所示,过程700可以包括以下操作中的一个或多个。
操作710,可以基于至少一个检测点与至少一个发射点、多个接收点的位置关系,确定对应于每组阵元的合空间位移向量。
在一些实施例中,每个发射点可以向至少一个检测点发射扫查信号。具体地,每个发射点可以向检测区域中相应的发射方向上的检测点发射扫查信号。
在一些实施例中,每个发射点的发射方向可以基于发射点发射扫查信号时对应的焦点位置确定。例如,发射方向可以由焦点位置指向发射点位置。如图5a所示,发散波模式下发射点3-2(第3组阵元中的第2个阵元)的发射方向可以由发射焦点C指向检测点D(即 $\overrightarrow{CD}$ 方向),发射点3-2可以向检测区域中 $\overrightarrow{CD}$ 方向上的检测点(即线段FG上的检测点)发射扫查信号。如图5b所示,平面波模式下发射点的发射方向可以是垂直方向。
在一些实施例中,每个接收点可以从至少一个检测点接收基于至少一个检测点的扫查信号获取的反射信号。具体地,每个接收点可以从检测区域中对应的接收方向上的检测点接收反射信号。在一些实施例中,每个接收点的接收方向可以基于接收点位置和检测点位置确定。例如,接收方向可以由接收点位置指向检测点位置。如图5a所示,接收点1-5(第1组阵元中的第5个阵元)的接收方向可以由接收点1-5的位置J指向检测点D(即 $\overrightarrow{JD}$ 方向),接收点1-5可以从检测区域中 $\overrightarrow{JD}$ 方向上的检测点(即线段HI上的检测点)接收反射信号。
对应于每组阵元的合空间位移向量可以是每组阵元检测到的至少一个检测点的单位流速,可以表征每组阵元检测到的至少一个检测点的流速方向。在一些实施例中,对应于每组阵元的合空间位移向量可以是每组阵元中的所有阵元(即接收点)对应的空间位移向量的合向量。
在一些实施例中,第一流速确定模块230可以基于至少一个检测点、至少一个发射点和/或多个接收点中每个接收点的位置关系,确定对应于每个接收点的空间位移向量。
接收点对应于某个检测点的空间位移向量可以是该检测点在该接收点的接收方向的单位向量和该检测点在发射方向的单位向量的合向量。如图5a所示,接收点1-5对应于检测点D的空间位移向量可以是检测点D在接收方向 $\overrightarrow{JD}$ 上的单位向量和在发射方向 $\overrightarrow{CD}$ 上的单位向量的合向量 $e_{1,5}$。
在一些实施例中,第一流速确定模块230可以基于公式(2)确定每个接收点对应于一个检测点的空间位移向量:

$$e_{k,m}=\frac{x-x_T}{\left|x-x_T\right|}+\frac{x-x_R^{(k,m)}}{\left|x-x_R^{(k,m)}\right|}\tag{2}$$

其中,$e_{k,m}$ 是第k组接收阵元中第m个阵元(即第m个接收点)对应于某个检测点的空间位移向量,$x$ 是检测点的位置,$x_T$ 是发射焦点的位置,$x_R^{(k,m)}$ 是第k组接收阵元中第m个阵元的位置。
进一步地,在一些实施例中,第一流速确定模块230可以基于对应于每个接收点的空间位移向量,以及对应于每个接收点的权重,确定对应于每组阵元的合空间位移向量。
每个接收点的权重可以是表征每个接收点对于所述至少一个检测点的重要程度。在一些实施例中,对于不同检测点,每个接收点的权重可以不同。
在一些实施例中,每个接收点的权重可以基于每个接收点与至少一个检测点的距离确定。
在一些实施例中,每个接收点的权重可以与每个接收点与至少一个检测点的距离正相关。示例性地,接收点距离某一检测点的距离越近,则接收点相对于该检测点的权重越小。如图5a所示,接收点1-1、接收点1-5、…接收点2-5距离检测点D的距离越来越近,则接收点1-1、接收点1-5、…接收点2-5对应的权重越来越小。
在一些实施例中,第一流速确定模块230可以基于每个接收点和检测点之间的距离确定每个接收点的权重。示例性地,第一流速确定模块230可以将每个接收点和检测点之间距离与所有接收点和检测点之间距离之和的比值作为每个接收点的权重。例如,接收点1-1、接收点1-2、…接收点16-8距离检测点D的距离分别为20mm、25mm、…..50mm,距离的和为3840mm,则接收点1-1、接收点1-2、…接收点16-8的权重分别为20/3840=0.0052,25/3840=0.0065,…,50/3840=0.0130。
本说明书的一些实施例设置每个接收点的权重与每个接收点与至少一个检测点的距离正相关,可以提高图像的分辨率。
在一些实施例中,每个接收点的权重可以与每个接收点与所述至少一个检测点的距离负相关。示例性地,接收点距离某一检测点的距离越近,则接收点相对于该检测点的权重越大。如图5a所示,接收点1-1、接收点1-5、…接收点2-5距离检测点D的距离越来越近,则接收点1-1、接收点1-5、…接收点2-5对应的权重越来越大。
在一些实施例中,第一流速确定模块230可以基于每个接收点和检测点之间的距离的倒数值确定每个接收点的权重。示例性地,第一流速确定模块230可以将每个接收点和检测点之间距离的倒数值与所有接收点和检测点之间距离的倒数值之和的比值作为每个接收点的权重。例如,接收点1-1、接收点1-2、…接收点16-8距离检测点D的距离分别为20mm、25mm、…..50mm,对应的距离的倒数分别为0.05、0.04、…0.02,距离的倒数的和为3.2,则接收点1-1、接收点1-2、…接收点16-8的权重分别为0.05/3.2=0.015625,0.04/3.2=0.0125,…,0.02/3.2=0.00625。
本说明书的一些实施例设置每个接收点的权重与每个接收点与至少一个检测点的距离负相关,可以减少图像的伪影。
在一些实施例中,第一流速确定模块230可以利用每组阵元中每个接收点的权重,对每组阵元中每个接收点的空间位移向量进行加权求和,以获得对应于每组阵元的合空间位移向量。
在一些实施例中,第一流速确定模块230可以通过公式(3)确定对应于每组阵元的合空间位移向量:

$$p_k=\sum_{m}w_{k,m}\,e_{k,m}\tag{3}$$

其中,$p_k$ 是第k组阵元的合空间位移向量,$w_{k,m}$ 是第k组阵元第m个接收点的权重,$e_{k,m}$ 是第k组阵元第m个接收点的空间位移向量。

继续上述示例,第一流速确定模块230可以基于第1组阵元中的接收点1-1、接收点1-2、……、接收点1-16对应于检测点D的空间位移向量 $e_{1,1},e_{1,2},\ldots,e_{1,16}$ 和权重 $w_{1,1},w_{1,2},\ldots,w_{1,16}$,确定第1组阵元对应于检测点D的合空间位移向量 $p_1$。类似地,第一流速确定模块230可以确定第2组阵元、第3组阵元、…第16组阵元分别对应于检测点D的合空间位移向量:$p_2$、$p_3$、…$p_{16}$。
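公式(2)与公式(3)可以用如下示意代码实现。其中检测点、发射焦点与阵元位置均为假设的演示坐标,权重按"与距离负相关"(取距离倒数并归一化)的一种方式构造,并非本说明书限定的权重定义:

```python
import numpy as np

def unit(v):
    # 返回向量 v 的单位向量
    return v / np.linalg.norm(v)

# 假设的几何位置(单位 mm,仅作演示):检测点 x、发射焦点 x_T、一组接收阵元位置
x = np.array([0.0, 30.0])
x_T = np.array([0.0, -10.0])
elems = [np.array([xm, 0.0]) for xm in (-6.0, -2.0, 2.0, 6.0)]

# 公式(2):每个接收点的空间位移向量 = 发射方向单位向量 + 接收方向单位向量
e = [unit(x - x_T) + unit(x - xm) for xm in elems]

# 权重与接收点到检测点的距离负相关:取距离倒数并归一化
d = np.array([np.linalg.norm(x - xm) for xm in elems])
w = (1 / d) / (1 / d).sum()

# 公式(3):加权求和得到该组阵元的合空间位移向量 p_k
p_k = sum(wi * ei for wi, ei in zip(w, e))
assert np.isclose(w.sum(), 1.0)
```

距离检测点更近的阵元在该构造下获得更大的权重,对应正文中"负相关"的实施方式。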
操作720,可以基于对应于每组阵元的合空间位移向量,确定对应于每组阵元的第一特征矩阵。
每组阵元的第一特征矩阵可以是基于每组阵元接收的相邻两次扫查信号对应的反射信号的相位变化在反射信号的波形图坐标系中横轴和纵轴方向上的分量获取的矩阵。
在一些实施例中,第一流速确定模块230可以基于每组阵元对应的合空间位移向量确定单位相位变化速率分别在X轴和Z轴的分量。其中,X轴和Z轴可以分别平行于检测区域的水平方向和垂直方向。

具体地,第一流速确定模块230可以基于公式(4)获取每组阵元的第一特征矩阵:

$$a_k=\left[a_{kx},\ a_{kz}\right]=\left[\frac{\omega_0}{c}p_{kx}^{T},\ \frac{\omega_0}{c}p_{kz}^{T}\right]\tag{4}$$

其中,$a_{kx}$ 和 $a_{kz}$ 可以分别表示第k组阵元对应的单位相位变化速率在X轴的分量和Z轴的分量,$\omega_0$ 是发射脉冲的角频率,$c$ 为扫查信号的速度,$p_{kx}^{T}$ 和 $p_{kz}^{T}$ 分别为第k组阵元的合空间位移向量的转置 $p_k^{T}$ 在X轴的分量和Z轴的分量,可以基于至少一个检测点与至少一个发射点、多个接收点的位置关系确定。
操作730,可以基于对应于每组阵元的至少一个检测点的相位变化速率,和/或对应于每组阵元的第一特征矩阵,确定至少一个检测点的第一流速。
在一些实施例中,至少一个检测点的第一流速和相位变化速率的关系可以由公式(5)表示:

$$\dot{\varphi}=\frac{\omega_0}{c}\,p^{T}v\tag{5}$$

其中,$\dot{\varphi}$ 是相位变化速率,$v$ 为至少一个检测点的第一流速,$p^{T}$ 为合空间位移向量的转置。

在一些实施例中,第一流速 $v$ 可以分解为在横轴上的分量 $v_x$ 和纵轴上的分量 $v_z$,并基于公式(6)将(5)的右端分解为第一特征矩阵和第一流速分量向量的乘积:

$$\dot{\varphi}_k=\left[\frac{\omega_0}{c}p_{kx}^{T},\ \frac{\omega_0}{c}p_{kz}^{T}\right]\begin{bmatrix}v_x\\ v_z\end{bmatrix}\tag{6}$$

为了便于计算,可以令 $a_k=\left[\frac{\omega_0}{c}p_{kx}^{T},\ \frac{\omega_0}{c}p_{kz}^{T}\right]$,$b_k=\dot{\varphi}_k$,从而将公式(6)简化为公式(7):

$$a_kv=b_k\tag{7}$$
在一些实施例中,第一流速确定模块230可以基于对应于每组阵元的至少一个检测点的相位变化速率,和/或对应于每组阵元的第一特征矩阵,确定对应于每组阵元的第一辅助计算矩阵。
在一些实施例中,第一辅助计算矩阵可以是每组阵元的第一特征矩阵的转置矩阵和相位变化速率的乘积:$a_k^{T}b_k$。

在一些实施例中,第一流速确定模块230可以基于对应于所述每组阵元的第一特征矩阵,确定对应于所述每组阵元的第二辅助计算矩阵。在一些实施例中,第二辅助计算矩阵可以是每组阵元的第一特征矩阵的转置和第一特征矩阵的乘积:$a_k^{T}a_k$。

在一些实施例中,第一流速确定模块230可以将对应于所述每组阵元的第一辅助计算矩阵进行累加,以获得第三辅助计算矩阵。具体地,第三辅助计算矩阵可以基于公式(8)确定:

$$A^{T}B=\sum_{k}a_k^{T}b_k\tag{8}$$

在一些实施例中,第一流速确定模块230可以将对应于所述每组阵元的第二辅助计算矩阵进行累加,获得第四辅助计算矩阵。具体地,第四辅助计算矩阵可以基于公式(9)确定:

$$A^{T}A=\sum_{k}a_k^{T}a_k\tag{9}$$

在一些实施例中,第一流速确定模块230可以基于第三辅助计算矩阵和第四辅助计算矩阵,确定至少一个检测点的第一流速。

在一些实施例中,第一流速可以基于公式(10)确定:

$$v=\left(A^{T}A\right)^{-1}\left(A^{T}B\right)\tag{10}$$
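公式(7)至公式(10)的逐组累加与最小二乘求解过程可以用如下示意代码表示。其中角频率、声速、真实流速与各组合空间位移向量均为构造的演示数值,并非本说明书限定的参数:

```python
import numpy as np

rng = np.random.default_rng(1)
omega0, c = 2 * np.pi * 5e6, 1540.0        # 假设的发射角频率(rad/s)与声速(m/s)
v_true = np.array([0.12, -0.05])           # 构造的真实流速,仅用于演示

# 假设16组阵元,各组的合空间位移向量 p_k(随机构造)
p = rng.normal(size=(16, 2))
A = (omega0 / c) * p                       # 每行即第一特征矩阵 a_k,见公式(4)
b = A @ v_true                             # 相位变化速率 b_k = a_k v,见公式(7)

# 公式(8)(9):逐组累加第一、第二辅助计算矩阵
AtB = sum(A[k][:, None] * b[k] for k in range(16))
AtA = sum(np.outer(A[k], A[k]) for k in range(16))

# 公式(10):v = (A^T A)^{-1} (A^T B)
v = np.linalg.solve(AtA, AtB.ravel())
assert np.allclose(v, v_true)
```

各组的 $a_k^{T}b_k$ 与 $a_k^{T}a_k$ 互相独立,因此这一累加步骤也适于按阵元组并行计算(如正文所述的GPU并行方式)。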
在一些实施例中,第一流速确定模块230可以采用GPU并行计算的方式确定至少一个检测点的第一流速。
具体地,GPU可以并行计算各组阵元对应的相位变化速率、合空间位移向量、第一特征矩阵和第一辅助计算矩阵等,从而提高计算效率。
图8是根据本说明书的一些实施例所示的确定至少一个检测点的第二流速方法的示例性流程图。在一些实施例中,过程800可以由扫查设备110和/或处理设备120来执行。在一些实施例中,过程800可以以程序或指令的形式存储在存储装置(如存储设备150)中,当流速检测系统100(如处理设备120)执行该程序或指令时,可以实现过程800。在一些实施例中,过程800可以由第二流速确定模块240执行。如图8所示,过程800可以包括以下操作中的一个或多个:
操作810,可以基于图像数据,确定至少一个检测点的时间强度梯度和/或空间强度梯度。
光流场(Optical Flow)可以是运动场在二维平面上的投影图像。可以理解,运动场可以用于描述运动,光流场可以体现投影图像序列中不同投影图像的灰度分布,从而将三维空间的运动场转移到二维平面上,因此,光流场在理想状态下,可以对应于运动场。
光流可以是检测点在投影图像上对应的投影点的瞬时运动速度。在一些实施例中,光流可 以用光流场中像素点的灰度值的变化趋势表示。光流场中箭头的长度和指向可以分别表征各点光流的大小和方向。
在一些实施例中,第二流速确定模块240可以基于图像数据获取光流场。具体地,第二流速确定模块240可以基于多幅连续图像帧中所有像素点灰度值的变化确定所有光流,从而获取光流场。
在一些实施例中,第二流速确定模块240可以基于光流场获取时间强度梯度和/或空间强度梯度。
时间强度梯度可以是光流场中像素点的灰度值基于时间变化的速率,可以用投影图像中像素点相对于时间(t)方向的偏导数表示。
在一些实施例中,第二流速确定模块240可以基于公式(11)确定至少一个检测点的时间强度梯度:

$$I_t=\frac{\partial I}{\partial t}\tag{11}$$

其中,$I_t$ 是至少一个检测点的时间强度梯度,$I$ 是光流场,$t$ 是组成光流场的多个光流场帧对应的帧间间隔时间。

空间强度梯度可以是光流场中像素点的灰度值基于位置变化的梯度,可以用投影图像中像素点沿X轴和Z轴的偏导数表示。在一些实施例中,第二流速确定模块240可以基于公式(12)确定至少一个检测点的空间强度梯度:

$$\nabla I=\left(I_x,\ I_z\right)=\left(\frac{\partial I}{\partial x},\ \frac{\partial I}{\partial z}\right)\tag{12}$$

其中,$\nabla I$ 是至少一个检测点的空间强度梯度,$I$ 是光流场,$x$ 是光流场在横轴上的单位距离,$z$ 是光流场在纵轴上的单位距离。
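时间强度梯度与空间强度梯度可以按如下示意代码用差分近似计算。其中帧间间隔、像素间距与图像内容均为构造的演示数值:

```python
import numpy as np

# 假设:连续灰度图像帧,帧间间隔 dt,像素间距 dx、dz(数值仅作演示)
dt, dx, dz = 0.01, 1.0, 1.0
zz, xx = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
# 构造一个沿 X 方向平移的灰度图案,模拟连续两幅图像帧
frames = np.stack([np.sin(0.2 * (xx - 5 * t) + 0.1 * zz) for t in range(2)])

# 时间强度梯度:灰度值对 t 的偏导数,对应公式(11)的差分近似
I_t = (frames[1] - frames[0]) / dt

# 空间强度梯度:灰度值沿 Z 轴与 X 轴的偏导数,对应公式(12)
I_z, I_x = np.gradient(frames[0], dz, dx)

assert I_t.shape == I_x.shape == I_z.shape == (32, 32)
```

np.gradient 按数组轴的顺序返回各方向的偏导数,此处第0轴对应Z方向、第1轴对应X方向。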
操作820,可以基于至少一个检测点的时间强度梯度和/或空间强度梯度,确定至少一个检测点的第二流速。
第二流速可以是基于图像的像素点强度的时间变化和/或空间变化确定的检测点的流动速度,可以用光流的瞬时速度表示。
具体地,假设检测点对应的像素点 $I(x,z,t)$ 在相邻两幅图像帧上用了 $dt$ 时间移动了 $(dx,dz)$ 的距离,基于同一像素点在运动前后的灰度值不变的假设条件,可以获取基本约束方程(13):

$$I(x,z,t)=I(x+dx,\ z+dz,\ t+dt)\tag{13}$$

进一步地,可以基于泰勒公式将(13)右端进行展开,获取公式(14):

$$I(x+dx,\ z+dz,\ t+dt)=I(x,z,t)+\frac{\partial I}{\partial x}dx+\frac{\partial I}{\partial z}dz+\frac{\partial I}{\partial t}dt+\varepsilon\tag{14}$$

其中,$\varepsilon$ 表示二阶无穷小项,可以忽略不计。

更进一步地,第二流速确定模块240可以将(14)两端同时减去 $I(x,z,t)$,然后同时除以 $dt$,获取公式(15):

$$\frac{\partial I}{\partial x}v_x+\frac{\partial I}{\partial z}v_z+\frac{\partial I}{\partial t}=0\tag{15}$$

其中,$v_x=\frac{dx}{dt}$ 和 $v_z=\frac{dz}{dt}$ 分别为第二流速 $v$ 沿X轴的速度矢量和沿Z轴的速度矢量;$I_x=\frac{\partial I}{\partial x}$ 和 $I_z=\frac{\partial I}{\partial z}$ 分别为空间强度梯度 $\nabla I$ 沿X轴和Z轴的分量;$I_t=\frac{\partial I}{\partial t}$ 为时间强度梯度。

在一些实施例中,至少一个检测点的第二流速、空间强度梯度和时间强度梯度的关系可以用公式(16)表示:

$$\left[I_x\ \ I_z\right]\begin{bmatrix}v_x\\ v_z\end{bmatrix}=-I_t\tag{16}$$
在一些实施例中,可以将公式(16)转化为公式(17):

$$\begin{bmatrix}I_x^{(1)}&I_z^{(1)}\\ I_x^{(2)}&I_z^{(2)}\\ \vdots&\vdots\\ I_x^{(n)}&I_z^{(n)}\end{bmatrix}\begin{bmatrix}v_x\\ v_z\end{bmatrix}=-\begin{bmatrix}I_t^{(1)}\\ I_t^{(2)}\\ \vdots\\ I_t^{(n)}\end{bmatrix}\tag{17}$$

其中,$i$ 表示第 $i$ 幅图像帧,$I_x^{(i)}$ 和 $I_z^{(i)}$ 分别表示第 $i$ 幅图像帧的检测点对应的光流的位置变化率沿X轴和Z轴的分量,$I_t^{(i)}$ 表示检测点对应的光流的时间变化率。

在一些实施例中,可以令 $M=\begin{bmatrix}I_x^{(1)}&I_z^{(1)}\\ \vdots&\vdots\\ I_x^{(n)}&I_z^{(n)}\end{bmatrix}$,$N=-\begin{bmatrix}I_t^{(1)}\\ \vdots\\ I_t^{(n)}\end{bmatrix}$,将公式(17)转化为公式(18):

$$Mv=N\tag{18}$$

其中,$M$ 可以表示至少连续多幅图像帧对应的空间强度梯度矩阵,$v$ 可以表示检测点的第二流速,$N$ 可以表示至少连续多幅图像帧对应的时间强度梯度矩阵。

进一步地,第二流速确定模块240可以基于公式(18)两端同时乘以矩阵 $M$ 的转置 $M^{T}$,获取 $M^{T}Mv=M^{T}N$。

在一些实施例中,第二流速确定模块240可以仅基于数据的实部进行简化和/或计算:$\mathrm{real}\left(M^{T}Mv\right)=\mathrm{real}\left(M^{T}N\right)$。

更进一步地,第二流速确定模块240可以基于公式(18)获取检测点的第二流速,如公式(19)所示:

$$v=\left(M^{T}M\right)^{-1}\left(M^{T}N\right)\tag{19}$$

其中,$M^{T}M$ 为 $2\times 2$ 矩阵,$M^{T}N$ 为 $2\times 1$ 矩阵。
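公式(18)与公式(19)的最小二乘求解可以用如下示意代码表示。其中梯度样本数与真实光流均为构造的演示数值,时间强度梯度由约束方程(15)反向构造,以便验证解的正确性:

```python
import numpy as np

rng = np.random.default_rng(2)
v_true = np.array([0.8, -0.3])             # 构造的真实光流(第二流速),仅作演示

# 假设在某检测点邻域内收集了64个像素的空间强度梯度,每行为 [I_x, I_z]
I_xz = rng.normal(size=(64, 2))
# 由约束 I_x*v_x + I_z*v_z + I_t = 0 构造对应的时间强度梯度
I_t = -(I_xz @ v_true)

M, N = I_xz, -I_t                          # 公式(18):M v = N
v = np.linalg.solve(M.T @ M, M.T @ N)      # 公式(19):v = (M^T M)^{-1} (M^T N)
assert np.allclose(v, v_true)
```

只要邻域内的空间强度梯度方向不全平行(即 $M^{T}M$ 可逆),该方程组即有唯一解。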
本说明书的一些实施例基于图像数据,使用光流法计算至少一个检测点的第二流速,可以将三维运动场中的检测点的流速转换到二维运动场中计算获取。
图9是根据本说明书的一些实施例所示的流速校正方法的示例性流程图。在一些实施例中,过程900可以由扫查设备110和/或处理设备120来执行。在一些实施例中,过程900可以以程序或指令的形式存储在存储装置(如存储设备150)中,当流速检测系统100(如处理设备120)执行该程序或指令时,可以实现过程900。在一些实施例中,过程900可以由流速校正模块250执行。如图9所示,过程900可以包括以下操作中的一个或多个。
如前所述,第一流速可以是基于检测点反射信号的相位变化确定的检测点的流动速度;第二流速可以是基于图像的像素点强度的时间变化和空间变化确定的检测点的流动速度。其中,第一流速和第二流速使用相同的图像数据,分别基于系统的时间分辨能力(相位变化)和空间分辨能力(像素点强度)获取,从而可以相互进行验证和/或校正。
操作910,可以确定至少一个检测点的第一流速和第二流速的差异。
至少一个检测点的第一流速和第二流速的差异可以表征至少一个检测点的第一流速和第二流速之间速率和方向的差距大小。
在一些实施例中,流速校正模块250可以基于至少一个检测点的第一流速速率和第二流速速率的差值、差值百分比等获取第一流速和第二流速的速率差异。例如,检测点Q的第一流速速率为20,第二流速速率为22,则第一流速和第二流速的速率差异可以是22-20=2。又例如,检测点D的第一流速速率为20,第二流速速率为30,则第一流速和第二流速的速率差异可以是(30-20)/20×100%=50%。
在一些实施例中,流速校正模块250可以基于至少一个检测点的第一流速和第二流速之间夹角的大小获取第一流速和第二流速的方向差异。例如,检测点Q的第一流速和X轴的夹角为30°,第二流速和X轴的夹角为40°,则第一流速和第二流速的方向差异可以是40-30=10。
在一些实施例中,流速校正模块250可以基于至少一个检测点的第一流速和第二流速的向量差的模的大小获取第一流速和第二流速的差异。例如,检测点Q的第一流速为 $\vec{v}_1$,第二流速为 $\vec{v}_2$,则第一流速和第二流速的差异可以是向量差的模 $\left|\vec{v}_1-\vec{v}_2\right|$。
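上述几种差异度量(速率差值、差值百分比、方向夹角差、向量差的模)可以按如下示意代码计算。其中两种流速估计值为假设的演示数值:

```python
import numpy as np

# 假设的同一检测点的两种流速估计结果(数值仅作演示)
v1 = np.array([20.0, 0.0])   # 第一流速(基于相位变化)
v2 = np.array([21.0, 3.0])   # 第二流速(基于光流)

rate_diff = abs(np.linalg.norm(v2) - np.linalg.norm(v1))      # 速率差值
rate_diff_pct = rate_diff / np.linalg.norm(v1) * 100          # 速率差值百分比
angle_diff = np.degrees(
    np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
)                                                             # 方向差异(度)
vec_diff = np.linalg.norm(v2 - v1)                            # 向量差的模

# 向量差的模同时包含速率和方向的差异,因此不小于单纯的速率差值
assert vec_diff >= rate_diff
```

实际系统中可以任选其一(或组合)与相应阈值比较,以决定是否需要对该检测点进行流速校正。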
操作920,可以响应于差异不大于阈值,基于至少一个检测点的第一流速和第二流速,确定至少一个检测点的目标流速。
阈值可以是用于评价至少一个检测点的第一流速和第二流速的差异大小的值。在一些实施例中,阈值可以预先人工设定。
在一些实施例中,阈值可以包括第一阈值和第二阈值。第一阈值可以是评价至少一个检测点的第一流速速率和第二流速速率的差异大小的值。第二阈值可以是评价至少一个检测点的第一流速和第二流速的方向差异大小的值。继续上述示例,第一阈值为5,第二阈值为20°,检测点Q的第一流速速率和第二流速速率的差异为2,检测点Q的第一流速和第二流速的方向差异为10,则可以确定检测点Q的目标流速为检测点Q的第一流速速率20和第二流速速率22的平均值21,方向为和X轴的夹角为35°。
在一些实施例中,阈值还可以包括第三阈值。第三阈值可以是评价至少一个检测点的第一流速和第二流速的速率和方向差异大小的值。继续上述示例,第三阈值为6,检测点Q的第一流速 $\vec{v}_1$ 和第二流速 $\vec{v}_2$ 的差异为 $\left|\vec{v}_1-\vec{v}_2\right|$,若该差异不大于第三阈值,则可以确定检测点Q的目标流速的速率为第一流速速率和第二流速速率的平均值 $\left(\left|\vec{v}_1\right|+\left|\vec{v}_2\right|\right)/2$,方向为第一流速和第二流速的合向量 $\vec{v}_1+\vec{v}_2$ 的方向。
操作930,可以响应于差异大于阈值,确定与至少一个检测点相邻的至少一个相邻检测点的目标流速,并对至少一个相邻检测点的目标流速进行插值,以得到至少一个检测点的目标流速。
可以理解,当至少一个检测点的第一流速和第二流速的差异较大时,该至少一个检测点对应的图像数据可能存在误差,流速校正模块可以基于其他相邻检测点的目标流速获取该至少一个检测点的目标流速。
继续上述示例,第一阈值为10%,检测点D的第一流速和第二流速速率的差异为50%,则可以通过插值获取检测点D的目标流速。
在一些实施例中,插值可以包括但不限于最邻近插值、二次插值、三次插值等自适应插值算法中的至少一种。在一些实施例中,流速校正模块250可以基于不同的插值算法选取与至少一个检测点相邻的至少一个相邻检测点。
示例性地,流速校正模块250可以基于最邻近插值算法将距离检测点最近的相邻检测点的目标流速作为该检测点的目标流速。
又一示例性地,流速校正模块250可以基于二次插值算法,选取检测点横向上最相邻的左侧检测点和右侧检测点,以及纵向上最邻近的上侧检测点和下侧检测点作为该检测点的邻近检测点。进一步地,获取最邻近的左侧检测点和右侧检测点的目标流速在横向上分量的平均值,作为该检测点的横向目标流速;获取最邻近的上侧检测点和下侧检测点的目标流速在纵向上分量的平均值,作为该检测点的纵向目标流速;基于该检测点的横向目标流速和纵向目标流速,获取该检测点的目标流速。
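上述按二次插值思路、由上下左右四个相邻检测点的目标流速取分量平均的做法,可以用如下示意代码表示。其中各相邻检测点的目标流速为假设的演示数值:

```python
import numpy as np

# 假设:检测点 D 的流速估计被判定不可靠,用其横向最邻近的左、右检测点
# 和纵向最邻近的上、下检测点的目标流速进行插值(示意)
left  = np.array([10.0, 2.0])   # 左侧相邻检测点的目标流速 [横向, 纵向]
right = np.array([14.0, 2.0])   # 右侧相邻检测点的目标流速
up    = np.array([12.0, 1.0])   # 上侧相邻检测点的目标流速
down  = np.array([12.0, 5.0])   # 下侧相邻检测点的目标流速

vx = (left[0] + right[0]) / 2   # 横向分量取左右邻点目标流速横向分量的平均
vz = (up[1] + down[1]) / 2      # 纵向分量取上下邻点目标流速纵向分量的平均
v_target = np.array([vx, vz])   # 检测点 D 的目标流速
```

最邻近插值则更简单:直接取距离最近的一个相邻检测点的目标流速作为该检测点的目标流速。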
本说明书实施例可能带来的有益效果包括但不限于:(1)将阵元进行分组,并利用发射点(发射焦点)、接收点(阵元)和/或检测点之间的位置关系,结合相位变化确定每一个检测点的二维流速,相比多角度发射的模式,一次发射便可以得到多个角度下反射信号的相位变化情况,可以检测出垂直于发射方向的目标物体流速,提高了系统对数据的利用率,有利于提高系统帧频与速度评估的准确性;(2)使用全孔径发射模式,可以提高成像效率,同时可以采用发散波模式,使得发射的扫查信号可以指向同一焦点位置,不仅可以提高信噪比,还可以增加系统的帧频,从而提高系统的时间分辨能力;(3)基于图像数据,使用光流法计算至少一个检测点的第二流速,可以将三维运动场中的检测点的流速转换到二维运动场中计算获取;(4)分别基于系统的时间分辨能力(相位变化)和空间分辨能力(像素点强度)获取第一流速和第二流速,从而可以相互进行验证和/或校正;(5)利用了像素波束合成方法,基于阵元分组的模式使用GPU并行计算,可以提高运算效率,降低硬件及时间成本。需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获得的有益效果。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本说明书的各方面可以通过若干具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本说明书的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外,本说明书的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本说明书各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran2003、Perl、COBOL2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或处理设备上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的处理设备或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外,对本说明书权利要求最广范围有限制的文件(当前或之后附加于本说明书中的)也除外。需要说明的是,如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方,以本说明书的描述、定义和/或术语的使用为准。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (21)

  1. 一种流速检测方法,包括:
    获取图像数据;
    基于所述图像数据确定至少一个检测点与相位变化有关的参数;
    基于所述与相位变化有关的参数,以及所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定所述至少一个检测点的第一流速。
  2. 如权利要求1所述的方法,所述获取图像数据,包括:
    利用全孔径发射获取所述图像数据。
  3. 如权利要求2所述的方法,所述全孔径发射包括非聚焦波发射模式下的全孔径发射。
  4. 如权利要求1所述的方法,所述方法进一步包括:
    对扫查探头的阵元进行分组,以得到多组阵元,所述多组阵元中的每组阵元包括一个或多个阵元。
  5. 如权利要求4所述的方法,所述扫查探头包括线阵探头、凸阵探头、相控阵探头中的任意一种。
  6. 如权利要求4所述的方法,所述图像数据包括多组图像数据,所述多组图像数据中的每组图像数据与所述多组阵元中的一组阵元对应,所述每组图像数据是基于对应的一组阵元接收的反射信号进行解调及波束合成得到的。
  7. 如权利要求6所述的方法,所述与相位变化有关的参数包括相位变化速率,所述基于所述图像数据确定至少一个检测点与相位变化有关的参数,包括:
    确定所述图像数据中所述每组阵元接收的在时间上相邻的至少两个图像数据段;
    基于所述在时间上相邻的至少两个图像数据段,确定对应于所述每组阵元的所述至少一个检测点的相位变化速率。
  8. 如权利要求7所述的方法,所述基于所述与相位变化有关的参数,以及所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定所述至少一个检测点的第一流速,包括:
    基于所述与相位变化有关的参数,以及所述至少一个检测点与至少一个发射点、多个接收点的位置关系,针对所述多组阵元分别进行特征矩阵的计算,并将多组阵元的计算结果进行整合,以获得所述至少一个检测点的第一流速。
  9. 如权利要求7所述的方法,所述基于所述与相位变化有关的参数,以及所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定所述至少一个检测点的第一流速,包括:
    基于所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定对应于所述每组阵元的合空间位移向量;
    基于所述对应于每组阵元的合空间位移向量,确定对应于所述每组阵元的第一特征矩阵;
    基于对应于所述每组阵元的所述至少一个检测点的相位变化速率,以及对应于所述每组阵元的第一特征矩阵,确定所述至少一个检测点的第一流速。
  10. 如权利要求9所述的方法,所述基于对应于所述每组阵元的所述至少一个检测点的相位变化速率,以及对应于所述每组阵元的第一特征矩阵,确定所述至少一个检测点的第一流速,包括:
    基于对应于所述每组阵元的所述至少一个检测点的相位变化速率,以及对应于所述每组阵元的第一特征矩阵,确定对应于所述每组阵元的第一辅助计算矩阵;
    基于对应于所述每组阵元的第一特征矩阵,确定对应于所述每组阵元的第二辅助计算矩阵;
    将对应于所述每组阵元的第一辅助计算矩阵进行累加,获得第三辅助计算矩阵;
    将对应于所述每组阵元的第二辅助计算矩阵进行累加,获得第四辅助计算矩阵;
    基于所述第三辅助计算矩阵和所述第四辅助计算矩阵,确定所述至少一个检测点的第一流速。
  11. 如权利要求9所述的方法,所述基于所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定对应于所述每组阵元的合空间位移向量,包括:
    基于所述至少一个检测点、所述至少一个发射点、所述多个接收点中每个接收点的位置关系,确定对应于所述每个接收点的空间位移向量;
    基于对应于所述每个接收点的空间位移向量,以及对应于所述每个接收点的权重,确定对应于所述每组阵元的合空间位移向量。
  12. 如权利要求11所述的方法,所述基于对应于所述每个接收点的空间位移向量,以及对应于所述每个接收点的权重,确定对应于所述每组阵元的合空间位移向量,包括:
    利用所述每组阵元中每个接收点的权重,对所述每组阵元中每个接收点的空间位移向量进行加权求和,以获得对应于所述每组阵元的合空间位移向量。
  13. 如权利要求11所述的方法,对应于所述每个接收点的权重,基于所述每个接收点与所述至少一个检测点的距离确定。
  14. 如权利要求1所述的方法,进一步包括:
    基于所述图像数据,确定所述至少一个检测点的时间强度梯度与空间强度梯度;
    基于所述至少一个检测点的时间强度梯度与空间强度梯度,确定所述至少一个检测点的第二流速。
  15. 如权利要求14所述的方法,进一步包括:
    基于所述至少一个检测点的第一流速和第二流速,进行速度校正,以获得所述至少一个检测点的目标流速。
  16. 如权利要求15所述的方法,所述速度校正包括:
    确定所述至少一个检测点的第一流速和第二流速的差异;
    响应于所述差异不大于阈值,基于所述至少一个检测点的第一流速和第二流速,确定所述至少一个检测点的目标流速;
    响应于所述差异大于阈值,确定与所述至少一个检测点相邻的至少一个相邻检测点的目标流速,并对所述至少一个相邻检测点的目标流速进行插值,以得到所述至少一个检测点的目标流速。
  17. 如权利要求1所述的方法,所述图像数据为B模式下扫查得到的数据。
  18. 如权利要求1所述的方法,所述确定所述至少一个检测点的第一流速包括:
    采用GPU并行计算的方式确定所述至少一个检测点的第一流速。
  19. 一种流速检测系统,包括:
    至少一个存储介质,其存储有至少一组指令;以及
    至少一个处理器,被配置为与所述至少一个存储介质通信,其中,当执行所述至少一组指令时,所述至少一个处理器被指示为使所述系统:
    获取图像数据;
    基于所述图像数据确定至少一个检测点与相位变化有关的参数;
    基于所述与相位变化有关的参数,以及所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定所述至少一个检测点的第一流速。
  20. 一种流速检测系统,包括:
    图像数据获取模块,用于获取图像数据;
    参数确定模块,用于基于所述图像数据确定至少一个检测点与相位变化有关的参数;
    第一流速确定模块,用于基于所述与相位变化有关的参数,以及所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定所述至少一个检测点的第一流速。
  21. 一种非暂时性计算机可读存储介质,包括至少一组指令,其中,当由计算机设备的至少一个处理器执行时,所述至少一组指令指示所述至少一个处理器:
    获取图像数据;
    基于所述图像数据确定至少一个检测点与相位变化有关的参数;
    基于所述与相位变化有关的参数,以及所述至少一个检测点与至少一个发射点、多个接收点的位置关系,确定所述至少一个检测点的第一流速。
PCT/CN2021/137949 2021-12-14 2021-12-14 一种流速检测方法、系统和存储介质 WO2023108421A1 (zh)


Publications (1)

Publication Number Publication Date
WO2023108421A1 true WO2023108421A1 (zh) 2023-06-22


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4265126A (en) * 1979-06-15 1981-05-05 General Electric Company Measurement of true blood velocity by an ultrasound system
CN102697524A (zh) * 2012-05-04 2012-10-03 成都优途科技有限公司 全聚焦超声成像方法及其在血流成像中的运用
CN105615919A (zh) * 2014-11-21 2016-06-01 �田睦 快速二维多普勒速度和方向成像
CN108186050A (zh) * 2018-01-03 2018-06-22 声泰特(成都)科技有限公司 一种基于超声通道数据的多普勒血流速度成像方法和系统
CN111652849A (zh) * 2020-05-08 2020-09-11 武汉联影医疗科技有限公司 血流参数计算结果获取方法、装置、设备和系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109414245B (zh) * 2016-09-30 2022-04-08 深圳迈瑞生物医疗电子股份有限公司 超声血流运动谱的显示方法及其超声成像系统
CN110327077B (zh) * 2019-07-09 2022-04-15 深圳开立生物医疗科技股份有限公司 一种血流显示方法、装置及超声设备和存储介质


Also Published As

Publication number Publication date
US20230186491A1 (en) 2023-06-15
EP4424240A1 (en) 2024-09-04
CN114938660A (zh) 2022-08-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21967561; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2021967561; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2021967561; Country of ref document: EP; Effective date: 20240531)
NENP Non-entry into the national phase (Ref country code: DE)