CN113536964B - Classification extraction method for ultrasonic video - Google Patents


Info

Publication number
CN113536964B
Authority
CN
China
Prior art keywords
ultrasonic
image
state
frame image
video
Prior art date
Legal status
Active
Application number
CN202110710640.1A
Other languages
Chinese (zh)
Other versions
CN113536964A (en)
Inventor
程栋梁
谢蠡
何年安
刘振
赵文佳
Current Assignee
Hefei Hebin Intelligent Robot Co ltd
Original Assignee
Hefei Hebin Intelligent Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Hefei Hebin Intelligent Robot Co ltd filed Critical Hefei Hebin Intelligent Robot Co ltd
Priority to CN202110710640.1A
Publication of CN113536964A
Application granted
Publication of CN113536964B


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/243: Classification techniques relating to the number of classes
    • G06F 18/2431: Multiple classes

Abstract

The invention discloses a classification extraction method for ultrasound video. Different state categories of the ultrasound video are defined: a scanning state, a measuring state, a probe conversion prompt state and a user information display state. Each frame of the ultrasound video is examined to determine its state category, and the video is then segmented and stored by category. By segmenting the ultrasound video according to state category, the complete original video captured from the ultrasound device is converted into short clips that are convenient to read and review, so that a clip can be located by state category and time. This helps staff quickly locate and analyse the video data of a specific state, improving both the user experience and the efficiency of ultrasound video analysis.

Description

Classification extraction method for ultrasonic video
Technical Field
The invention relates to the technical field of video processing, and in particular to a classification extraction method for ultrasound video.
Background
In recent years, owing to the uneven distribution of medical resources, medical disputes have become increasingly common, and doctors often cannot substantiate their side against patient complaints. Sonographers in particular retain no dynamic video of an ultrasound examination, only static screenshots, so when a dispute arises they often struggle to provide complete evidence of the examination. By storing the full video of an ultrasound scan, complete examination evidence can be produced quickly if a dispute arises later, reducing the doctor's medical risk.
the ultrasonic video stored on the existing ultrasonic recorder is a whole video and comprises mixed image data of various modes and various organs, such as a thyroid gland scanning state, a mammary gland Doppler ultrasound state, an ultrasonic instrument dormant state and the like, however, when the stored whole ultrasonic video is read and analyzed, only image data information of a certain state is needed, and the image data information in a certain state really needed is quickly screened out from the whole ultrasonic video, a long time is needed for massive watching and manual screening, so that waste in time resource is caused, and the workload of personnel is increased.
Improving the reading efficiency of ultrasound video and precisely locating the playback position of a particular state are therefore important problems to be solved in the field of ultrasound video analysis.
Disclosure of Invention
To overcome the above shortcomings of the prior art, the invention provides a classification extraction method for ultrasound video that segments the video by state category, converts the original recording into several short clips convenient for reading and review, and allows the relevant clip to be located and retrieved on demand.
To this end, the invention adopts the following technical scheme:
a classification extraction method of ultrasonic video comprises the following steps:
s1, defining different state categories of an ultrasonic video, including: scanning state, measuring state, probe conversion prompting state and information display state;
the scanning state refers to: when a part is scanned, scanning pictures of the part in the ultrasonic video;
the measurement state refers to: a picture for confirming and marking the azimuth and the size of the focus in the ultrasonic video;
the probe conversion prompt state refers to: a probe in an ultrasonic video converts a prompt picture;
the user information display state refers to: user information display pictures in ultrasonic videos;
s2, judging each frame of image in the ultrasonic video respectively, and judging the state category of each frame of image;
and S3, dividing and storing the ultrasonic video in a classified manner according to the belonging state category of each frame of image.
In step S2, each frame of the ultrasound video is judged as follows:
S21. Perform dynamic/static recognition, character recognition and colour recognition on the frame.
Dynamic/static recognition: if the pixel difference between the frame and the preceding frame exceeds a set value, the frame is a dynamic image; otherwise it is a static image; the first frame is treated as a static image.
Character recognition: the text information read from the frame includes the scanned-part text, the measuring-scale text, the probe conversion prompt text and the user information text.
S22. Determine the frame's state category from the dynamic/static and character recognition results:
if the frame is a dynamic image containing only scanned-part text, it is in the scanning state;
if the frame is a dynamic image containing scanned-part and measuring-scale text, it is in the measuring state;
if the frame is a static image containing probe conversion prompt text, it is in the probe conversion prompt state;
if the frame is a static image containing user information text, it is in the user information display state.
Before step S2, each video frame is first divided into regions, delimiting a recognition region for each kind of text information;
in step S21, the corresponding text is read from each text-information recognition region.
The scanning state comprises a B-mode scanning state, a Doppler colour scanning state and a contrast-enhanced scanning state; the measuring state comprises a B-mode measuring state, a Doppler colour measuring state and a contrast-enhanced measuring state.
Before step S2, the video frame is first divided to delimit an ultrasound-image recognition region;
in step S21, colour recognition is performed on that region, i.e. its colour is judged from the pixel values of the pixels within it;
in step S22, if the frame is in the scanning state and its ultrasound-image region is grey, it is in the B-mode scanning state; if the region is coloured, the Doppler colour scanning state; if the region is orange-yellow, the contrast-enhanced scanning state;
similarly, if the frame is in the measuring state and its ultrasound-image region is grey, coloured or orange-yellow, it is in the B-mode, Doppler colour or contrast-enhanced measuring state respectively.
The state categories of the ultrasound video further include a black screen state and a power on/off state.
Before step S2, the video frame is first divided to delimit a recognition region for the power on/off text;
in step S21, the power on/off text is read from that region;
colour recognition is also performed on the whole frame, i.e. the colour of the whole frame is judged from the pixel values of all its pixels;
in step S22, if the frame is a static image, the whole frame is black and the frame contains no text, it is in the black screen state;
if the frame is a dynamic image, the whole frame is coloured and the frame contains power on/off text, it is in the power on/off state.
In step S3, each run of consecutive frames in the same state is stored as a separate video and labelled with that state, yielding the short video for that state extracted from the ultrasound video.
The advantages of the invention are:
(1) The ultrasound video is segmented by state category, converting the complete original video captured from the ultrasound device into short clips convenient to read and review, so that the relevant clip can be looked up by state category and time.
(2) The original complete video captured from the ultrasound device carries a large amount of information, much of it irrelevant to diagnosis, such as device sleep, power on/off, probe conversion prompts and user information displays. The invention splits a long original video into short clips each containing a single state, giving staff a fast way to search and retrieve footage without changing the total length of the recording, helping them quickly locate and analyse the video data of a specific state and improving both the user experience and the efficiency of ultrasound video analysis.
Drawings
Fig. 1 is a flow chart of a method for classifying and extracting ultrasonic video.
Detailed Description
The following description of the embodiments of the invention is made clearly and completely with reference to the accompanying drawings. The embodiments described are only some, not all, embodiments of the invention; all other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Ultrasound image data are collected from the ultrasound device and stored to obtain the ultrasound video; specifically, the video shown on the device's display is captured and stored, i.e. the ultrasound video is the video presented on the display.
As shown in fig. 1, the classification extraction method for ultrasound video comprises the following steps.
S1. Define the different state categories of the ultrasound video: a black screen state, a scanning state, a measuring state, a probe conversion prompt state, a user information display state and a power on/off state.
The black screen state refers to black-screen frames in the ultrasound video.
The scanning state refers to frames showing a body part being scanned. In this embodiment the scanning state comprises a B-mode scanning state, a Doppler colour scanning state and a contrast-enhanced scanning state.
The measuring state refers to frames in which the position and size of a lesion are confirmed and marked. In this embodiment the measuring state comprises a B-mode measuring state, a Doppler colour measuring state and a contrast-enhanced measuring state.
The probe conversion prompt state refers to probe conversion prompt frames.
The user information display state refers to frames displaying user information.
The power on/off state refers to power-on or power-off frames.
S2. Divide each video frame into regions, delimiting the recognition regions for the various kinds of text information and the ultrasound-image recognition region.
The text-information recognition regions include: the scanned-part text region, the measuring-scale text region, the probe conversion prompt text region, the user information text region and the power on/off text region.
S3. Judge the state of each frame of the ultrasound video in turn, using a state-machine style of decision logic, as follows.
S31. Perform dynamic/static recognition, character recognition and colour recognition on the frame.
Dynamic/static recognition: if the pixel difference between the frame and the preceding frame exceeds a set value, the frame is a dynamic image; otherwise it is a static image; the first frame is treated as a static image.
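The dynamic/static test amounts to simple frame differencing. A minimal sketch, assuming frames arrive as 8-bit NumPy arrays; the threshold value is a hypothetical placeholder, since the text only speaks of "a set value":

```python
import numpy as np

DIFF_THRESHOLD = 10.0  # mean absolute pixel difference; assumed value

def is_dynamic(frame, prev_frame):
    """Return True if the frame differs enough from the previous frame.

    The first frame (prev_frame is None) is treated as a static image,
    as specified in the method.
    """
    if prev_frame is None:
        return False  # first frame -> static image
    # widen dtype before subtracting so uint8 arithmetic cannot wrap around
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > DIFF_THRESHOLD
```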
Character recognition: and reading the text information in the identification area of various text information, and respectively reading the text information of the scanning part, the text information of the measuring scale, the text information of the probe conversion prompt, the text information of the user information and the text information of the on-off machine.
In this embodiment, text information in a single frame image of an ultrasonic video is read by using a text recognition library such as chinese, english, latin and the like in an easycr library. For example: when the text information is read in the text information identification area of the scanned part, judging that the text information of the scanned part is contained, wherein the read text information is thyroid, namely the current scanned part is thyroid; when the text information is read in the text information identification area of the measuring scale, the text information containing the measuring scale is judged, and the read text information is 'Measure'; when the character information is read in the character information identification area of the on/off machine, the character information of the on/off machine is judged to be included, and the read character information is Aplio.
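A sketch of this step, assuming the easyocr library named in the embodiment (the `Reader`/`readtext` calls follow easyocr's public API). The keyword lists are illustrative assumptions, with "Measure" and "Aplio" taken from the examples given; the `text_flags` helper is hypothetical glue, not part of the patent:

```python
def recognize_region(reader, region_image):
    """OCR one text-information region of a frame.

    `reader` would be an easyocr.Reader(['ch_sim', 'en']) instance;
    detail=0 makes readtext return plain strings rather than
    (box, text, confidence) tuples.
    """
    return reader.readtext(region_image, detail=0)

def text_flags(texts):
    """Turn recognized strings into the flags used by the state decision."""
    joined = " ".join(texts)
    return {
        "site": any(k in joined for k in ("thyroid", "甲状腺")),  # assumed site keywords
        "measure": "Measure" in joined,  # measuring-scale keyword from the example
        "power": "Aplio" in joined,      # power on/off keyword from the example
    }
```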
Colour recognition: judge the colour of the whole frame from the pixel values of all its pixels, and judge the colour of the ultrasound-image recognition region from the pixel values within that region.
In this embodiment, the image is converted from the BGR colour space to the HSV colour space using the COLOR_BGR2HSV conversion in the OpenCV library, and the colour of each area is determined by thresholding. For example:
if a pixel's HSV value lies within the set orange-yellow range, it is judged an orange-yellow pixel; within the set black range, a black pixel; within the set red range, a red pixel; within the set blue range, a blue pixel;
if the number of black pixels in the whole frame exceeds 20000, the whole frame is judged to be black.
S32. Judge the state of the frame from the results of dynamic/static, character and colour recognition.
If the frame is a static image, the whole frame is black and it contains no text, it is in the black screen state.
If the frame is a dynamic image, the ultrasound-image region is grey and the frame contains only scanned-part text, it is in the B-mode scanning state.
If the frame is a dynamic image, the ultrasound-image region is coloured and the frame contains only scanned-part text, it is in the Doppler colour scanning state.
If the frame is a dynamic image, the ultrasound-image region is orange-yellow and the frame contains only scanned-part text, it is in the contrast-enhanced scanning state.
If the frame is a dynamic image, the ultrasound-image region is grey and the frame contains scanned-part and measuring-scale text, it is in the B-mode measuring state.
If the frame is a dynamic image, the ultrasound-image region is coloured and the frame contains scanned-part and measuring-scale text, it is in the Doppler colour measuring state.
If the frame is a dynamic image, the ultrasound-image region is orange-yellow and the frame contains scanned-part and measuring-scale text, it is in the contrast-enhanced measuring state.
If the frame is a static image, the whole frame is not black and the frame contains probe conversion prompt text, it is in the probe conversion prompt state.
If the frame is a static image, the whole frame is not black and the frame contains user information text, it is in the user information display state.
If the frame is a dynamic image, the whole frame is coloured and the frame contains power on/off text, it is in the power on/off state.
In this embodiment, if a frame's state cannot be determined directly, its approximate state is inferred from its own dynamic/static, character and colour recognition results together with the states of one or two adjacent frames, and the frame is classified into that approximate state:
if two of the three recognition results (dynamic/static, character, colour) satisfy the conditions of a given state, that state is taken as the frame's approximate state;
if only one recognition result satisfies the conditions of a given state but an adjacent frame is already in that state, that state is taken as the frame's approximate state.
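Putting the S32 rules together, the per-frame decision can be sketched as a single function. The state names, flag keys and colour labels are illustrative; the branches follow the rules listed above, and the approximate-state fallback for ambiguous frames is omitted for brevity:

```python
def classify_frame(dynamic, color, texts):
    """Map one frame's recognition results to a state label.

    dynamic: bool from the frame-difference test;
    color:   'gray', 'color', 'orange' for the ultrasound-image region,
             or 'black' when the whole frame is black;
    texts:   set of text flags found in the recognition regions.
    """
    if not dynamic and color == "black" and not texts:
        return "black screen"
    if dynamic and texts == {"site"}:  # scanning: only scanned-part text
        return {"gray": "B-mode scanning",
                "color": "Doppler scanning",
                "orange": "contrast scanning"}.get(color, "unknown")
    if dynamic and texts == {"site", "measure"}:  # measuring: scale text too
        return {"gray": "B-mode measuring",
                "color": "Doppler measuring",
                "orange": "contrast measuring"}.get(color, "unknown")
    if not dynamic and "probe" in texts:
        return "probe switch prompt"
    if not dynamic and "user" in texts:
        return "user info display"
    if dynamic and "power" in texts:
        return "power on/off"
    return "unknown"  # candidate for the approximate-state fallback
```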
S4. Segment the ultrasound video and store it by category according to the state of each frame: each run of consecutive frames in the same state is stored as a separate video and labelled with that state, yielding the short video for that state extracted from the ultrasound video.
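Step S4 reduces to grouping consecutive frames with the same state label into runs, each of which would be written out as one labelled short video. A minimal sketch (the run representation is an assumption; the patent does not prescribe a data structure):

```python
from itertools import groupby

def split_by_state(frame_states):
    """Turn a per-frame state list into (state, start, end) runs, end exclusive."""
    runs, idx = [], 0
    for state, group in groupby(frame_states):
        n = len(list(group))          # length of this run of equal states
        runs.append((state, idx, idx + n))
        idx += n
    return runs
```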
The above embodiments are merely preferred embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (5)

1. A classification extraction method for ultrasound video, characterised by comprising the following steps:
S1. defining the different state categories of the ultrasound video: a scanning state, a measuring state, a probe conversion prompt state and a user information display state;
the scanning state referring to frames in the ultrasound video showing a body part being scanned;
the measuring state referring to frames in which the position and size of a lesion are confirmed and marked;
the probe conversion prompt state referring to probe conversion prompt frames;
the user information display state referring to frames displaying user information;
S2. judging each frame of the ultrasound video and determining its state category;
S3. segmenting the ultrasound video and storing it by category according to the state category of each frame;
wherein in step S2 each frame is judged as follows:
S21. performing dynamic/static recognition, character recognition and colour recognition on the frame;
dynamic/static recognition: if the pixel difference between the frame and the preceding frame exceeds a set value, the frame is a dynamic image; otherwise it is a static image; the first frame is treated as a static image;
character recognition: the text information read from the frame includes the scanned-part text, the measuring-scale text, the probe conversion prompt text and the user information text;
S22. determining the frame's state category from the dynamic/static and character recognition results:
if the frame is a dynamic image containing only scanned-part text, it is in the scanning state;
if the frame is a dynamic image containing scanned-part and measuring-scale text, it is in the measuring state;
if the frame is a static image containing probe conversion prompt text, it is in the probe conversion prompt state;
if the frame is a static image containing user information text, it is in the user information display state.
2. The method of claim 1, wherein before step S2 each video frame is divided into regions, delimiting a recognition region for each kind of text information;
in step S21, the corresponding text is read from each text-information recognition region.
3. The method of claim 1, wherein the scanning state comprises a B-mode scanning state, a Doppler colour scanning state and a contrast-enhanced scanning state, and the measuring state comprises a B-mode measuring state, a Doppler colour measuring state and a contrast-enhanced measuring state;
before step S2, the video frame is divided to delimit an ultrasound-image recognition region;
in step S21, colour recognition is performed on that region, i.e. its colour is judged from the pixel values of the pixels within it;
in step S22, if the frame is in the scanning state and its ultrasound-image region is grey, it is in the B-mode scanning state; if the region is coloured, the Doppler colour scanning state; if the region is orange-yellow, the contrast-enhanced scanning state;
if the frame is in the measuring state and its ultrasound-image region is grey, coloured or orange-yellow, it is in the B-mode, Doppler colour or contrast-enhanced measuring state respectively.
4. The method of claim 1, wherein the state categories of the ultrasound video further comprise a black screen state and a power on/off state;
before step S2, the video frame is divided to delimit a recognition region for the power on/off text;
in step S21, the power on/off text is read from that region, and colour recognition is performed on the whole frame, i.e. its colour is judged from the pixel values of all its pixels;
in step S22, if the frame is a static image, the whole frame is black and the frame contains no text, it is in the black screen state;
if the frame is a dynamic image, the whole frame is coloured and the frame contains power on/off text, it is in the power on/off state.
5. The method of claim 1, wherein in step S3 each run of consecutive frames in the same state is stored as a separate video and labelled with that state, yielding the short video for that state extracted from the ultrasound video.
CN202110710640.1A 2021-06-25 2021-06-25 Classification extraction method for ultrasonic video Active CN113536964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110710640.1A CN113536964B (en) 2021-06-25 2021-06-25 Classification extraction method for ultrasonic video


Publications (2)

Publication Number Publication Date
CN113536964A CN113536964A (en) 2021-10-22
CN113536964B (en) 2023-09-26

Family

ID=78125927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110710640.1A Active CN113536964B (en) 2021-06-25 2021-06-25 Classification extraction method for ultrasonic video

Country Status (1)

Country Link
CN (1) CN113536964B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937219B (en) * 2023-03-14 2023-05-12 合肥合滨智能机器人有限公司 Ultrasonic image part identification method and system based on video classification

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9306533D0 (en) * 1992-04-01 1993-05-19 Jones Kenneth S Multiple visual display from motion classifications
CA2178774A1 (en) * 1996-06-11 1997-12-12 Omid Mcdonald System and method for storing and displaying ultrasound images
US5920317A (en) * 1996-06-11 1999-07-06 Vmi Technologies Incorporated System and method for storing and displaying ultrasound images
CN102499715A (en) * 2011-11-23 2012-06-20 东南大学 Identical-trajectory ultrasonic image dynamic contrast system and contrast method thereof
CN105447872A (en) * 2015-12-03 2016-03-30 中山大学 Method for automatically identifying liver tumor type in ultrasonic image
CN107811652A (en) * 2017-10-18 2018-03-20 飞依诺科技(苏州)有限公司 The ultrasonic imaging method and system of adjust automatically parameter
CN109589141A (en) * 2018-12-28 2019-04-09 深圳开立生物医疗科技股份有限公司 Ultrasound diagnosis assisting method, system, and ultrasonic diagnostic equipment
RU2017143028A (ru) * 2017-12-11 2019-06-11 Federal State Budgetary Educational Institution of Higher Education "Southwest State University" (SWSU) Information-logical measuring system for decision support in the diagnosis of the prostate gland
CN110009640A (en) * 2018-11-20 2019-07-12 腾讯科技(深圳)有限公司 Method, device, and readable medium for processing cardiac video
CN110371108A (en) * 2019-06-14 2019-10-25 浙江零跑科技有限公司 Vehicle-mounted ultrasonic radar and vehicle-mounted viewing system fusion method
CN112002407A (en) * 2020-07-17 2020-11-27 上海大学 Breast cancer diagnosis device and method based on ultrasonic video
CN112580613A (en) * 2021-02-24 2021-03-30 深圳华声医疗技术股份有限公司 Ultrasonic video image processing method, system, equipment and storage medium
WO2021061257A1 (en) * 2019-09-27 2021-04-01 Google Llc Automated maternal and prenatal health diagnostics from ultrasound blind sweep video sequences
CN112641466A (en) * 2020-12-31 2021-04-13 北京小白世纪网络科技有限公司 Ultrasonic artificial intelligence auxiliary diagnosis method and device
CN112686165A (en) * 2020-12-31 2021-04-20 百果园技术(新加坡)有限公司 Method and device for identifying target object in video, electronic equipment and storage medium
CN112786163A (en) * 2020-12-31 2021-05-11 北京小白世纪网络科技有限公司 Ultrasonic image processing and displaying method and system and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Conventional ultrasound and contrast-enhanced ultrasound in the diagnosis of intrahepatic cholangiocarcinoma; Chen Jiaojiao et al.; Journal of Practical Hepatology (《实用肝脏病杂志》); Vol. 24, No. 2; 272-275 *
Automatic ultrasound image diagnosis method using heterogeneous multi-branch networks; Li Xinxin et al.; Journal of University of Electronic Science and Technology of China (《电子科技大学学报》); Vol. 50, No. 2; 214-224 *

Also Published As

Publication number Publication date
CN113536964A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
US8591414B2 (en) Skin state analyzing method, skin state analyzing apparatus, and computer-readable medium storing skin state analyzing program
Matzen et al. Data visualization saliency model: A tool for evaluating abstract data visualizations
CN109389129B (en) Image processing method, electronic device and storage medium
CN108229485B (en) Method and apparatus for testing user interface
US8094935B2 (en) Representative color extracting method and apparatus based on human color sense and data histogram distributions
US7949157B2 (en) Interpreting sign language gestures
CN110569874A (en) Garbage classification method and device, intelligent terminal and storage medium
CN108764352B (en) Method and device for detecting repeated page content
WO2017221592A1 (en) Image processing device, image processing method, and image processing program
WO2017009812A1 (en) System and method for structures detection and multi-class image categorization in medical imaging
US20100172576A1 (en) Color Analyzer And Calibration Tool
Hua et al. Automatic performance evaluation for video text detection
CN113536964B (en) Classification extraction method for ultrasonic video
Lee et al. Image analysis using machine learning for automated detection of hemoglobin H inclusions in blood smears-a method for morphologic detection of rare cells
US20230096719A1 (en) Scalable and high precision context-guided segmentation of histological structures including ducts/glands and lumen, cluster of ducts/glands, and individual nuclei in whole slide images of tissue samples from spatial multi-parameter cellular and sub-cellular imaging platforms
Vajravelu et al. Machine learning techniques to detect bleeding frame and area in wireless capsule endoscopy video
CN113554022A (en) Automatic acquisition method and device for detection test data of power instrument
US11430130B2 (en) Image processing method and computer-readable recording medium having recorded thereon image processing program
US11315251B2 (en) Method of operation of an artificial intelligence-equipped specimen scanning and analysis unit to digitally scan and analyze pathological specimen slides
CN112232390B (en) High-pixel large image identification method and system
CN115564750A (en) Intraoperative frozen slice image identification method, intraoperative frozen slice image identification device, intraoperative frozen slice image identification equipment and intraoperative frozen slice image storage medium
CN111083468B (en) Short video quality evaluation method and system based on image gradient
CN109919924B (en) Method suitable for cell digital processing of large-batch HE staining pictures
CN112861861A (en) Method and device for identifying nixie tube text and electronic equipment
CN111493829A (en) Method, system and equipment for determining mild cognitive impairment recognition parameters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant