CN117156182A - Medical remote intelligent monitoring method and system - Google Patents

Medical remote intelligent monitoring method and system

Info

Publication number
CN117156182A
CN117156182A (application CN202311403577.2A)
Authority
CN
China
Prior art keywords
ultrasonic image
image video
video frame
target
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311403577.2A
Other languages
Chinese (zh)
Other versions
CN117156182B (en)
Inventor
邢虹
王彬
徐伟云
高冉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jianbin Pharmaceutical Technology Co ltd
Original Assignee
Beijing Jianbin Pharmaceutical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jianbin Pharmaceutical Technology Co ltd filed Critical Beijing Jianbin Pharmaceutical Technology Co ltd
Priority to CN202311403577.2A priority Critical patent/CN117156182B/en
Publication of CN117156182A publication Critical patent/CN117156182A/en
Application granted granted Critical
Publication of CN117156182B publication Critical patent/CN117156182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2347Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving video stream encryption
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4408Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Epidemiology (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to the technical field of video encryption transmission, and in particular to a medical remote intelligent monitoring method and system. The method first obtains the target B-ultrasonic image video frames corresponding to a B-ultrasonic image video to be encrypted, and obtains a correction importance degree for each texture feature window according to the similarity of texture distribution between each texture feature window and its corresponding neighborhood positions in each B-ultrasonic image video frame, together with the edge texture complexity in the target B-ultrasonic image video frame. The pixel level importance degree is then obtained by combining the pixel points of each pixel level with the correction importance degree, and the target B-ultrasonic image video frame is encrypted according to the Huffman coding algorithm together with the pixel level importance degree to obtain the encrypted B-ultrasonic image video. The encrypted B-ultrasonic image video obtained in this way has a better encryption effect, so medical remote monitoring based on the encrypted and transmitted B-ultrasonic image video is more secure.

Description

Medical remote intelligent monitoring method and system
Technical Field
The invention relates to the technical field of video encryption transmission, in particular to a medical remote intelligent monitoring method and system.
Background
Medical remote monitoring is a method of remotely monitoring and managing a patient's health condition on the basis of image video. For example, remote monitoring of a fetus in a pregnant woman's abdomen requires the corresponding B-ultrasonic image video, and in order to protect personal privacy and data security, the B-ultrasonic image video generally needs to be encrypted before being transmitted to a monitoring platform.
In the prior art, when the B-ultrasonic image video is encrypted, a Huffman coding algorithm that depends on frequency information is generally adopted. However, the Huffman coding algorithm encrypts according to the frequency information of each pixel level in each B-ultrasonic image video frame; the corresponding encryption mode is relatively simple and carries a greater risk of being cracked, so the effect of encrypting the B-ultrasonic image video is poor and the security of medical remote monitoring based on the encrypted and transmitted B-ultrasonic image video is low.
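For illustration, the following is a minimal sketch (in Python, with illustrative names) of the frequency-only Huffman coding that the prior art described above relies on: code lengths are driven solely by how often each pixel level occurs, which is the limitation addressed later in this description.

```python
# Minimal sketch of frequency-only Huffman coding over the pixel levels of one
# frame (the baseline criticised above). Names are illustrative.
import heapq
from collections import Counter
import numpy as np

def huffman_code_table(weights):
    """Build a Huffman code table {symbol: bitstring} from {symbol: weight}."""
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        return {heap[0][2][0]: "0"}
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:                  # extend codes of the lighter subtree with "0"
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:                  # and of the heavier subtree with "1"
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak] + lo[2:] + hi[2:])
        tiebreak += 1
    return {symbol: code for symbol, code in heap[0][2:]}

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in B-ultrasonic frame
freq = Counter(int(v) for v in frame.ravel())                    # pixel-level frequencies
codes = huffman_code_table(freq)                                 # code length depends only on frequency
```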
Disclosure of Invention
In order to solve the technical problem that medical remote monitoring based on an encrypted and transmitted B-ultrasonic image video has low security, because encrypting the B-ultrasonic image video with a Huffman coding algorithm in the prior art has a poor effect, the invention aims to provide a medical remote intelligent monitoring method and system. The adopted technical scheme is as follows:
the invention provides a medical remote intelligent monitoring method, which comprises the following steps:
acquiring at least two B-ultrasonic image video frames corresponding to B-ultrasonic image video to be encrypted; sequentially taking each B-mode ultrasonic image video frame as a target B-mode ultrasonic image video frame, and dividing the target B-mode ultrasonic image video frame into at least two texture feature windows according to the texture distribution condition in the target B-mode ultrasonic image video frame;
in time sequence, obtaining the regular change degree of each texture feature window according to the similar situation of the texture distribution of each texture feature window and the corresponding position of the neighborhood of each texture feature window of each B-ultrasonic image video frame after the target B-ultrasonic image video frame; obtaining the reference importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame according to the edge texture complexity degree in the target B-ultrasonic image video frame;
obtaining the pixel level importance degree of each pixel level in the target B-ultrasonic image video frame according to the reference importance degree, the regular change degree and the position distribution characteristics of the pixel points corresponding to each pixel level in the target B-ultrasonic image video frame;
encrypting the target B-ultrasonic image video frame according to the Huffman coding algorithm and combining the pixel level importance degree to obtain an encrypted B-ultrasonic image video; and transmitting the encrypted B-ultrasonic image video to a monitoring platform for medical remote monitoring.
Further, the method for obtaining the degree of regular change comprises the following steps:
for any texture feature window:
taking the area of each B-ultrasonic image video frame at the corresponding position of the texture feature window as the texture feature area of each B-ultrasonic image video frame;
in time sequence, presetting a plurality of B-ultrasonic image video frames after the target B-ultrasonic image video frames, and taking the B-ultrasonic image video frames as contrast B-ultrasonic image video frames; taking the structural similarity coefficient between the texture feature area of each contrast B-ultrasonic image video frame and the texture feature area of the previous B-ultrasonic image video frame as a texture structure similarity feature value corresponding to each contrast B-ultrasonic image video frame under the corresponding position of a texture feature window;
obtaining local texture rule similarity corresponding to each contrast B-ultrasonic image video frame at the corresponding position of a texture feature window according to the similar situation of texture distribution in the neighborhood of the texture feature region of each contrast B-ultrasonic image video frame and the texture feature region of the previous B-ultrasonic image video frame;
and constructing a rule change degree calculation model according to the texture structure similarity characteristic value and the local texture rule similarity, and obtaining the rule change degree of each texture characteristic window corresponding to the target B ultrasonic image video frame through the rule change degree calculation model.
Further, the method for obtaining the local texture rule similarity comprises the following steps:
for any one contrast B-mode ultrasonic image video frame:
taking a pixel point corresponding to the centroid position of the texture feature area of the previous B-ultrasonic image video frame of the contrast B-ultrasonic image video frame as a corresponding central pixel point; taking each pixel point in the preset neighborhood range of the central pixel point as a neighborhood pixel point;
traversing each neighborhood pixel point through a sliding window with the same shape and size as the texture feature area to obtain a sliding window traversing area corresponding to each neighborhood pixel point;
and taking the maximum value of the structural similarity coefficient between all the sliding window traversing areas and the texture feature areas of the contrast B-ultrasonic image video frames as the local texture rule similarity corresponding to each contrast B-ultrasonic image video frame.
Further, the rule change degree calculation model includes:
$R_j=\frac{1}{N}\sum_{i=1}^{N}S_{j,i}\cdot e^{-T_{j,i}}$; wherein $R_j$ is the regular change degree of the $j$-th texture feature window corresponding to the target B-ultrasonic image video frame, $N$ is the number of contrast B-ultrasonic image video frames corresponding to the target B-ultrasonic image video frame, $S_{j,i}$ is the local texture rule similarity corresponding to the $i$-th contrast B-ultrasonic image video frame under the $j$-th texture feature window, $T_{j,i}$ is the texture structure similarity feature value corresponding to the $i$-th contrast B-ultrasonic image video frame under the $j$-th texture feature window, and $e^{(\cdot)}$ is the exponential function with the natural constant as its base.
Further, the method for acquiring the reference importance degree comprises the following steps:
calculating the average frequency of each texture feature window in the target B ultrasonic image video frame through Fourier transformation; and taking the average frequency as the reference importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame.
Further, the method for acquiring the pixel level importance degree comprises the following steps:
sequentially taking each pixel level as a target pixel level in all pixel levels of the target B ultrasonic image video frame;
obtaining the correction importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame according to the reference importance degree and the regular change degree, wherein the reference importance degree and the regular change degree are positively correlated with the correction importance degree;
acquiring the number proportion of pixel points of a target pixel level in each texture feature window corresponding to a target B ultrasonic image video frame; taking the product of the number duty ratio and the correction importance degree as a reference duty degree of a target pixel level in each texture feature window; and taking the accumulated sum of the reference occupation importance degree of the target pixel level in all texture feature windows corresponding to the target B-ultrasonic image video frame as the pixel level importance degree of the target pixel level in the target B-ultrasonic image video frame.
Further, the method for acquiring the correction importance degree includes:
and taking the normalized value of the product between the reference importance level and the regular change level as the correction importance level of each texture feature window corresponding to the target B-ultrasonic image video frame.
Further, the method for acquiring the encrypted B-mode ultrasonic image video comprises the following steps:
weighting the pixel level importance degree of each pixel level in the target B-ultrasonic image video frame and the occurrence frequency of the corresponding pixel point, and then encrypting through a Huffman coding algorithm to obtain an encrypted target B-ultrasonic image video frame; and obtaining the encrypted B-ultrasonic image video according to all the encrypted B-ultrasonic image video frames.
Further, the method for acquiring the texture feature window comprises the following steps:
performing super-pixel segmentation on the target B-ultrasonic image video frame to obtain at least two super-pixel blocks; and taking the window of the corresponding area of each super pixel block as a texture characteristic window.
The invention also provides a medical remote intelligent monitoring system, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes any one of the steps of the medical remote intelligent monitoring method when executing the computer program.
The invention has the following beneficial effects:
Considering that the Huffman coding algorithm encrypts the B-ultrasonic image video on the basis of the pixel level frequency information of each B-ultrasonic image video frame, and that different pixel levels in a B-ultrasonic image video frame have different frequencies of occurrence and different texture characteristics and therefore different importance degrees, the embodiment of the invention encrypts the target B-ultrasonic image video frame according to the Huffman coding algorithm combined with the pixel level importance degree, so that the encrypted B-ultrasonic image video is more secure. In addition, considering that the longer the ciphertext obtained by the Huffman coding algorithm, the more susceptible it is to noise during transmission, the ciphertext length of important texture information needs to be reduced in order to protect it. In a B-ultrasonic image video frame the edge texture information is very important, and important edge texture information usually appears as high-frequency information in the image, so the embodiment of the invention calculates the corresponding reference importance degree from the edge texture complexity in the B-ultrasonic image video frame. However, considering that not all edge texture information is important, and based on the fact that the movement of the B-ultrasonic probe changes the B-ultrasonic image video frames, so that important edge texture information changes across adjacent video frames, the embodiment of the invention obtains the regular change degree of each texture feature window by calculating the similarity of texture distribution between each texture feature window and its corresponding neighborhood positions. The pixel level importance degree of each pixel level is further adjusted through the correction importance degree obtained by combining the reference importance degree and the regular change degree, so that the effect of encrypting the B-ultrasonic image video is better and medical remote monitoring based on the encrypted and transmitted B-ultrasonic image video is more secure.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for remote intelligent monitoring of medical treatment according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended purpose, the specific implementation, structure, features and effects of the medical remote intelligent monitoring method and system according to the invention are described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of a medical remote intelligent monitoring method and a system provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for remote intelligent monitoring of medical treatment according to an embodiment of the present invention is shown, where the method includes:
step S1: acquiring at least two B-ultrasonic image video frames corresponding to B-ultrasonic image video to be encrypted; and sequentially taking each B-mode ultrasonic image video frame as a target B-mode ultrasonic image video frame, and dividing the target B-mode ultrasonic image video frame into at least two texture feature windows according to the texture distribution condition in the target B-mode ultrasonic image video frame.
The embodiment of the invention aims to provide a medical remote intelligent monitoring method which is used for analyzing according to textures and pixel levels of B-ultrasonic image video frames corresponding to B-ultrasonic image videos to be encrypted to obtain the encrypted B-ultrasonic image videos, and transmitting the encrypted B-ultrasonic image videos to a monitoring platform for medical remote intelligent monitoring.
Therefore, the embodiment of the invention first acquires at least two B-ultrasonic image video frames corresponding to the B-ultrasonic image video to be encrypted. In the embodiment of the invention, the B-ultrasonic image video of the uterine cavity is obtained through a B-ultrasonic probe, so that subsequent remote dynamic monitoring of the fetus can be carried out through the encrypted B-ultrasonic image video; an implementer may also obtain B-ultrasonic image video in other environments according to the specific implementation conditions, which is not further described here. In the embodiment of the invention, the frame rate of the B-ultrasonic image video is set to 60 Hz, and an implementer may also change the frame rate according to the specific implementation environment.
The embodiment of the invention aims to encrypt the pixel-level frequency information by a Huffman coding algorithm on the basis of combining the importance degree of each pixel level, wherein the importance degree of the pixel level is influenced by the occurrence frequency and texture characteristics of the pixel level. Further, by combining the characteristic that important texture characteristic information changes along with the movement of the probe, the method further analyzes according to the change condition of the local area in the adjacent B-ultrasonic image video frame, so that the local area needs to be acquired first. According to the embodiment of the invention, each B-ultrasonic image video frame is sequentially taken as a target B-ultrasonic image video frame, and the target B-ultrasonic image video frame is divided into at least two texture feature windows according to the texture distribution condition in the target B-ultrasonic image video frame, wherein the texture feature windows are local areas for separating feature analysis.
Preferably, the method for acquiring the texture feature window comprises the following steps:
considering that the super-pixel segmentation can acquire the regions with similar characteristics and connectivity characteristics, if a window corresponding to the super-pixel blocks obtained by the super-pixel segmentation is used as a texture characteristic window, the subsequent characterization of the texture characteristic change condition in each texture characteristic window can be clearer, so that the embodiment of the invention performs super-pixel segmentation on the target B-ultrasonic image video frame to obtain at least two super-pixel blocks; and taking the window of the corresponding area of each super pixel block as a texture characteristic window. It should be noted that the super-pixel segmentation is a technical means known to those skilled in the art, and further description is omitted herein.
Step S2: in time sequence, obtaining the regular change degree of each texture feature window according to the similar situation of the texture distribution of each texture feature window and the corresponding position of the neighborhood of each texture feature window of each B-ultrasonic image video frame after the target B-ultrasonic image video frame; and obtaining the reference importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame according to the edge texture complexity degree in the target B-ultrasonic image video frame.
Because the B-ultrasonic probe used to acquire the B-ultrasonic image video is continuously moving, the texture features of the texture feature windows that contain important edge texture information differ across different B-ultrasonic image video frames. The analysis is based on the characteristic that probe movement causes the texture information corresponding to a texture feature window to shift spatially, that is, the texture feature window of each B-ultrasonic image video frame is generally similar to a nearby area of the same texture feature window in the adjacent B-ultrasonic image video frame. Therefore, in time sequence, the embodiment of the invention obtains the regular change degree of each texture feature window according to the similarity of texture distribution between each texture feature window and its corresponding neighborhood positions in each B-ultrasonic image video frame after the target B-ultrasonic image video frame.
Preferably, the method for acquiring the degree of regular change includes:
for any texture feature window: and taking the area of each B-mode ultrasonic image video frame at the corresponding position of the texture feature window as the texture feature area of each B-mode ultrasonic image video frame. That is, the position of the texture feature window is fixed in each B-mode video frame, but the texture information in the window changes with time.
In time sequence, a preset number of B-ultrasonic image video frames after the target B-ultrasonic image video frame are taken as contrast B-ultrasonic image video frames; the structural similarity coefficient between the texture feature area of each contrast B-ultrasonic image video frame and the texture feature area of the previous B-ultrasonic image video frame is taken as the texture structure similarity feature value corresponding to each contrast B-ultrasonic image video frame at the corresponding position of the texture feature window. The structural similarity coefficient is based on the principle of human visual perception and computes similarity from the brightness, contrast and structure of the images, so the resulting texture structure similarity feature value is more sensitive to texture feature changes. Because the B-ultrasonic probe is continuously moving, the structural similarity coefficient of adjacent B-ultrasonic image video frames within the same texture feature window is usually small; therefore, the smaller the corresponding texture structure similarity feature value, the higher the importance of the corresponding window, that is, the more important the texture information of the target B-ultrasonic image video frame in the corresponding texture feature window. For edge texture feature information of lower importance in the target B-ultrasonic image video frame, such as the edge texture outside the uterine cavity, the corresponding edge texture does not change, or changes very slowly, when the B-ultrasonic probe moves; the corresponding structural similarity coefficient is then usually large, that is, the importance of the corresponding window is low and the corresponding regular change degree is usually small. In the embodiment of the invention, the preset number is set to 30, which corresponds to half of the frame rate of the B-ultrasonic image video in the embodiment of the invention; an implementer may also adjust the preset number according to the specific implementation environment. It should be noted that the calculation process and acquisition method of the structural similarity coefficient are well known to those skilled in the art and are not further described here.
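A hedged sketch of the texture structure similarity feature value follows, assuming the structural similarity (SSIM) implementation in scikit-image and assuming each window region is cropped to its bounding box (and is larger than the default SSIM window) so that a 2-D patch can be compared.

```python
# Hedged sketch of the texture structure similarity feature value: SSIM between the
# same window region in a contrast frame and in the frame immediately before it.
# Cropping the window to its bounding box is an assumption so SSIM sees a 2-D patch.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def texture_structure_similarity(prev_frame, contrast_frame, mask):
    ys, xs = np.nonzero(mask)                          # window mask from the segmentation sketch
    sy, sx = slice(ys.min(), ys.max() + 1), slice(xs.min(), xs.max() + 1)
    a = prev_frame[sy, sx].astype(np.float64)
    b = contrast_frame[sy, sx].astype(np.float64)
    return ssim(a, b, data_range=255.0)                # assumes the patch exceeds the 7x7 SSIM window
```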
In addition, the structural similarity of a texture feature window whose texture information has higher importance is lower across different B-ultrasonic image video frames. Because the B-ultrasonic probe is continuously moving, when the texture information of the corresponding texture feature window is more important, an area with higher similarity to the texture feature area of each contrast B-ultrasonic image video frame can usually be found in its adjacent B-ultrasonic image video frame, and this area lies near the texture feature window of that adjacent B-ultrasonic image video frame. Therefore, according to the similarity of texture distribution between the texture feature area of each contrast B-ultrasonic image video frame and the neighborhood of the texture feature area of the previous B-ultrasonic image video frame, the local texture rule similarity corresponding to each contrast B-ultrasonic image video frame at the corresponding position of the texture feature window is obtained.
Preferably, the method for obtaining the local texture rule similarity comprises the following steps:
for any one contrast B-mode ultrasonic image video frame:
Taking the pixel point corresponding to the centroid position of the texture feature area of the B-ultrasonic image video frame preceding the contrast B-ultrasonic image video frame as the corresponding central pixel point; taking each pixel point within a preset neighborhood range of the central pixel point as a neighborhood pixel point; traversing each neighborhood pixel point with a sliding window of the same shape and size as the texture feature area to obtain the sliding window traversal area corresponding to each neighborhood pixel point, where each neighborhood pixel point is the pixel point corresponding to the centroid of its sliding window traversal area. Because the local texture rule similarity needs to be acquired through the structural similarity coefficient, the embodiment of the invention traverses with a sliding window of the same shape and size as the texture feature area, in order to avoid the shape of the sliding window traversal area affecting the calculation of the structural similarity coefficient. In the embodiment of the invention, the minimum circumscribed rectangle of the texture feature area is obtained, the central pixel point is taken as the center, twice the length of the minimum circumscribed rectangle is taken as the length of the preset neighborhood range, and twice the width of the minimum circumscribed rectangle is taken as the width of the preset neighborhood range, yielding a rectangular preset neighborhood range; an implementer may also set the preset neighborhood range according to the specific implementation environment, such as a twenty-five neighborhood or a sixteen neighborhood, which is not further described here.
The maximum value of the structural similarity coefficient between all sliding window traversal areas and the texture feature area of the contrast B-ultrasonic image video frame is taken as the local texture rule similarity corresponding to each contrast B-ultrasonic image video frame. For each contrast B-ultrasonic image video frame, when the importance of the texture information corresponding to the texture feature window is higher, the sliding window traversal area corresponding to the maximum structural similarity coefficient is generally identical or highly similar to the texture feature area, that is, the two are areas with the same texture but different spatial positions. Moreover, the higher the importance of the texture information corresponding to the texture feature window, the more likely it is that the sliding window traversal area corresponding to the maximum structural similarity coefficient is the texture feature area after translation, and the higher the corresponding local texture rule similarity.
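A hedged sketch of the local texture rule similarity is given below: the window-sized patch is slid around the centroid of the texture feature region in the previous frame, and the largest SSIM against the contrast frame's texture feature region is kept. The rectangular search radius stands in for the preset neighborhood range and is an assumption, as is the helper name.

```python
# Hedged sketch of the local texture rule similarity: slide a window-sized patch over
# a neighborhood of the region centroid in the previous frame and keep the largest
# SSIM against the contrast frame's texture feature region.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def local_texture_rule_similarity(prev_frame, contrast_frame, mask, radius=8):
    ys, xs = np.nonzero(mask)
    y0w, y1w = ys.min(), ys.max() + 1
    x0w, x1w = xs.min(), xs.max() + 1
    h, w = y1w - y0w, x1w - x0w
    target = contrast_frame[y0w:y1w, x0w:x1w].astype(np.float64)
    cy, cx = (y0w + y1w) // 2, (x0w + x1w) // 2        # centroid of the rectangularised region
    best = -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y0, x0 = cy + dy - h // 2, cx + dx - w // 2
            if y0 < 0 or x0 < 0 or y0 + h > prev_frame.shape[0] or x0 + w > prev_frame.shape[1]:
                continue                               # skip candidates falling outside the frame
            cand = prev_frame[y0:y0 + h, x0:x0 + w].astype(np.float64)
            best = max(best, ssim(cand, target, data_range=255.0))
    return best
```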
By combining the local texture rule similarity and the texture structure similarity characteristic value and the relation of the texture information importance of the texture characteristic window, the embodiment of the invention constructs a rule change degree calculation model according to the texture structure similarity characteristic value and the local texture rule similarity, and obtains the rule change degree of each texture characteristic window corresponding to the target B ultrasonic image video frame through the rule change degree calculation model. As the texture information in the texture feature window with higher texture information importance can show a certain rule along with the movement of the B ultrasonic probe, the embodiment of the invention characterizes the texture information importance of each texture feature window through the degree of rule change.
Preferably, the law change degree calculation model includes:
$R_j=\frac{1}{N}\sum_{i=1}^{N}S_{j,i}\cdot e^{-T_{j,i}}$; wherein $R_j$ is the regular change degree of the $j$-th texture feature window corresponding to the target B-ultrasonic image video frame, $N$ is the number of contrast B-ultrasonic image video frames corresponding to the target B-ultrasonic image video frame, $S_{j,i}$ is the local texture rule similarity corresponding to the $i$-th contrast B-ultrasonic image video frame under the $j$-th texture feature window, $T_{j,i}$ is the texture structure similarity feature value corresponding to the $i$-th contrast B-ultrasonic image video frame under the $j$-th texture feature window, and $e^{(\cdot)}$ is the exponential function with the natural constant as its base. Further, according to the method for obtaining the regular change degree of the $j$-th texture feature window, the regular change degree of each texture feature window corresponding to the target B-ultrasonic image video frame is obtained.
For each contrast B-ultrasonic image video frame under each texture feature window, the larger the corresponding local texture rule similarity and the smaller the texture structure similarity feature value, the higher the importance of the corresponding texture feature window, that is, the more important the texture information in the texture feature window corresponding to the target B-ultrasonic image video frame; the regular change degree characterizes the importance of the texture information of the corresponding texture feature window. Therefore, the embodiment of the invention first calculates the product of the negatively correlated mapping value of the texture structure similarity feature value and the local texture rule similarity and, considering that obtaining the regular change degree needs to combine the texture information of the texture feature window under each contrast B-ultrasonic image video frame, then calculates the average value of these products to obtain the regular change degree of the texture feature window corresponding to the target B-ultrasonic image video frame.
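A short sketch of the regular change degree computed from the per-contrast-frame quantities, following the formula above; the helper name is illustrative.

```python
# Sketch of the regular change degree R_j = (1/N) * sum_i S_{j,i} * exp(-T_{j,i}),
# computed from the per-contrast-frame similarities of one texture feature window.
import numpy as np

def regular_change_degree(local_rule_sims, structure_sims):
    S = np.asarray(local_rule_sims, dtype=np.float64)   # S_{j,i}
    T = np.asarray(structure_sims, dtype=np.float64)    # T_{j,i}
    return float(np.mean(S * np.exp(-T)))
```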
So far, the regular change degree of each texture feature window corresponding to the target B-ultrasonic image video frame has been obtained. Further considering that the amount of edge texture information in each texture feature window can also represent the importance of the corresponding window to a certain degree, the embodiment of the invention obtains the reference importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame according to the edge texture complexity in the target B-ultrasonic image video frame.
Preferably, the method for acquiring the reference importance degree includes:
The average frequency of each texture feature window in the target B-ultrasonic image video frame is calculated through Fourier transformation, and this average frequency is taken as the reference importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame. The higher the image frequency of a window in the target B-ultrasonic image video frame, the more numerous and more complex the corresponding edge texture information, and the higher the importance of the corresponding texture feature window. It should be noted that calculating the frequency of an image window through Fourier transformation is a technical means well known to those skilled in the art and is not further described here.
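A hedged sketch of the reference importance degree follows, reading "average frequency ... through Fourier transformation" as the magnitude-weighted mean radial spatial frequency of the window's 2-D spectrum; this reading and the helper name are assumptions.

```python
# Hedged sketch of the reference importance degree: the magnitude-weighted mean
# radial spatial frequency of the window's 2-D spectrum (one plausible reading of
# "average frequency ... through Fourier transformation").
import numpy as np

def reference_importance(frame, mask):
    ys, xs = np.nonzero(mask)
    patch = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    fy = np.fft.fftshift(np.fft.fftfreq(patch.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(patch.shape[1]))[None, :]
    radial = np.sqrt(fy ** 2 + fx ** 2)                 # radial frequency of each spectral bin
    return float((radial * spectrum).sum() / (spectrum.sum() + 1e-12))
```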
Step S3: and obtaining the pixel level importance degree of each pixel level in the target B-ultrasonic image video frame according to the reference importance degree, the regular change degree and the position distribution characteristics of the pixel points corresponding to each pixel level in the target B-ultrasonic image video frame.
The reference importance degree and the regular change degree can each, to a certain extent, represent the importance of the corresponding texture feature window in the target B-ultrasonic image video frame, so the embodiment of the invention analyses them in combination and, on that basis, further introduces the frequency information of each pixel level in order to obtain the required importance degree of each pixel level in the target B-ultrasonic image video frame. Therefore, the embodiment of the invention obtains the pixel level importance degree of each pixel level in the target B-ultrasonic image video frame according to the reference importance degree, the regular change degree and the position distribution characteristics of the pixel points corresponding to each pixel level in the target B-ultrasonic image video frame.
Preferably, the method for acquiring the importance degree of the pixel level includes:
sequentially taking each pixel level as a target pixel level in all pixel levels of the target B ultrasonic image video frame;
obtaining the correction importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame according to the reference importance degree and the regular change degree, wherein the reference importance degree and the regular change degree are positively correlated with the correction importance degree;
preferably, the method for acquiring the correction importance degree includes:
The normalized value of the product of the reference importance degree and the regular change degree is taken as the correction importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame. In the embodiment of the invention, the importance of the corresponding texture feature window in the target B-ultrasonic image video frame is characterized by the correction importance degree, which is the reference importance degree corrected by the regular change degree; therefore, the reference importance degree and the regular change degree are both positively correlated with the correction importance degree. It should be noted that an implementer may also obtain the correction importance degree by other methods, for example by taking the sum of the normalized value of the reference importance degree and the normalized value of the regular change degree as the correction importance degree, which is not further described here.
In the embodiment of the invention, each texture feature window corresponding to the target B-ultrasonic image video frame is sequentially taken as the $j$-th texture feature window, and the method for obtaining the correction importance degree of the $j$-th texture feature window is expressed by the following formula:
$Q_j=\mathrm{Norm}\left(F_j\cdot R_j\right)$; wherein $Q_j$ is the correction importance degree of the $j$-th texture feature window of the target B-ultrasonic image video frame, $F_j$ is the reference importance degree of the $j$-th texture feature window of the target B-ultrasonic image video frame, $R_j$ is the regular change degree of the $j$-th texture feature window of the target B-ultrasonic image video frame, and $\mathrm{Norm}(\cdot)$ is the normalization function. In the embodiment of the invention, all normalization functions adopt linear normalization; an implementer may select other normalization methods according to the specific implementation environment, which is not further described here.
The number ratio of the pixel points of the target pixel level in each texture feature window corresponding to the target B-ultrasonic image video frame is then acquired. Because the number of pixel points in different texture feature windows can differ greatly across implementation environments, in order to avoid the influence of scale, the embodiment of the invention analyses on the basis of a ratio, namely the ratio of the number of pixel points of the target pixel level in each texture feature window to the total number of pixel points in that texture feature window.
For each pixel level in the target B-ultrasonic image video frame, the correction importance degrees of different texture feature windows are different, so when the number of pixel points of the corresponding pixel level is larger in a texture feature window with a larger correction importance degree, the importance of the corresponding pixel level is higher. The product of the number ratio and the correction importance degree is further taken as the reference occupation importance degree of the target pixel level in each texture feature window.
And further combining the reference occupation importance degree of all texture feature windows in the target B-ultrasonic image video frame, and taking the accumulated sum of the reference occupation importance degree of the target pixel level in all texture feature windows corresponding to the target B-ultrasonic image video frame as the pixel level importance degree of the target pixel level in the target B-ultrasonic image video frame.
In the embodiment of the invention, the target pixel level is taken as the $a$-th pixel level, and the method for obtaining the pixel level importance degree of the target pixel level is expressed by the following formula:
$P_a=\sum_{j=1}^{M}Q_j\cdot\frac{n_{j,a}}{N_j}$; wherein $P_a$ is the pixel level importance degree of the target pixel level, $M$ is the number of texture feature windows in the target B-ultrasonic image video frame, $Q_j$ is the correction importance degree of the $j$-th texture feature window in the target B-ultrasonic image video frame, $n_{j,a}$ is the number of pixel points of the target pixel level in the $j$-th texture feature window of the target B-ultrasonic image video frame, $N_j$ is the total number of pixel points in the $j$-th texture feature window of the target B-ultrasonic image video frame, and $\frac{n_{j,a}}{N_j}$ is the number ratio of the target pixel level in the $j$-th texture feature window of the target B-ultrasonic image video frame.
Step S4: encrypting the target B-ultrasonic image video frame according to the Huffman coding algorithm and combining the pixel level importance degree to obtain an encrypted B-ultrasonic image video; and transmitting the encrypted B-ultrasonic image video to a monitoring platform for medical remote monitoring.
So far, the importance degree of each pixel level in the target B ultrasonic image video frame is obtained. In order to further improve the safety of the encrypted B-mode ultrasonic image video, the embodiment of the invention encrypts the target B-mode ultrasonic image video frame according to the Huffman coding algorithm and the pixel level importance degree to obtain the encrypted B-mode ultrasonic image video.
Preferably, the method for acquiring the encrypted B-mode ultrasonic image video comprises the following steps:
because the traditional Huffman coding algorithm is based on the occurrence frequency of pixel level for encryption, and according to the principle of the Huffman algorithm, the closer to the pixel level of the Huffman tree bottom layer, the longer the corresponding ciphertext length is, the more easily the noise is interfered in the transmission process, so in order to avoid the condition that the more important pixel level is interfered by the noise to cause information loss, the shorter the ciphertext length corresponding to the pixel level with high importance degree is required. Therefore, the pixel level importance degree of each pixel level in the target B-ultrasonic image video frame and the occurrence frequency of the corresponding pixel point are weighted, and then are encrypted through a Huffman coding algorithm, so that the encrypted target B-ultrasonic image video frame is obtained. That is, the frequency of occurrence of the pixel point of each pixel level in the conventional huffman coding algorithm is replaced by the weighted numerical value to perform huffman coding. And further combining according to all the encrypted B-mode ultrasonic image video frames to obtain the encrypted B-mode ultrasonic image video. It should be noted that, the huffman coding algorithm is well known in the art, and is not further limited and described herein.
The embodiment of the invention transmits the encrypted B-ultrasonic image video to the monitoring platform for medical remote monitoring. In the embodiment of the invention, the encrypted B-ultrasonic image video is transmitted to a medical information system, and medical remote intelligent monitoring is performed through the medical information system.
In summary, the method first obtains the target B-ultrasonic image video frames corresponding to the B-ultrasonic image video to be encrypted, and obtains the correction importance degree of each texture feature window according to the similarity of texture distribution between each texture feature window and its corresponding neighborhood positions in each B-ultrasonic image video frame, together with the edge texture complexity in the target B-ultrasonic image video frame; the pixel level importance degree is then obtained by combining the pixel points of each pixel level with the correction importance degree, and the target B-ultrasonic image video frame is encrypted according to the Huffman coding algorithm together with the pixel level importance degree to obtain the encrypted B-ultrasonic image video. The encrypted B-ultrasonic image video obtained by the method has a better encryption effect, so medical remote monitoring based on the encrypted and transmitted B-ultrasonic image video is more secure.
The invention also provides a medical remote intelligent monitoring system, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes any one of the steps of the medical remote intelligent monitoring method when executing the computer program.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. A medical remote intelligent monitoring method, characterized in that the method comprises the following steps:
acquiring at least two B-ultrasonic image video frames corresponding to B-ultrasonic image video to be encrypted; sequentially taking each B-mode ultrasonic image video frame as a target B-mode ultrasonic image video frame, and dividing the target B-mode ultrasonic image video frame into at least two texture feature windows according to the texture distribution condition in the target B-mode ultrasonic image video frame;
in time sequence, obtaining the regular change degree of each texture feature window according to the similar situation of the texture distribution of each texture feature window and the corresponding position of the neighborhood of each texture feature window of each B-ultrasonic image video frame after the target B-ultrasonic image video frame; obtaining the reference importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame according to the edge texture complexity degree in the target B-ultrasonic image video frame;
obtaining the pixel level importance degree of each pixel level in the target B-ultrasonic image video frame according to the reference importance degree, the regular change degree and the position distribution characteristics of the pixel points corresponding to each pixel level in the target B-ultrasonic image video frame;
encrypting the target B-ultrasonic image video frame according to the Huffman coding algorithm and combining the pixel level importance degree to obtain an encrypted B-ultrasonic image video; and transmitting the encrypted B-ultrasonic image video to a monitoring platform for medical remote monitoring.
2. The method for remotely and intelligently monitoring medical treatment according to claim 1, wherein the method for acquiring the degree of regular change comprises the following steps:
for any texture feature window:
taking the area of each B-ultrasonic image video frame at the corresponding position of the texture feature window as the texture feature area of each B-ultrasonic image video frame;
in time sequence, presetting a plurality of B-ultrasonic image video frames after the target B-ultrasonic image video frames, and taking the B-ultrasonic image video frames as contrast B-ultrasonic image video frames; taking the structural similarity coefficient between the texture feature area of each contrast B-ultrasonic image video frame and the texture feature area of the previous B-ultrasonic image video frame as a texture structure similarity feature value corresponding to each contrast B-ultrasonic image video frame under the corresponding position of a texture feature window;
obtaining local texture rule similarity corresponding to each contrast B-ultrasonic image video frame at the corresponding position of a texture feature window according to the similar situation of texture distribution in the neighborhood of the texture feature region of each contrast B-ultrasonic image video frame and the texture feature region of the previous B-ultrasonic image video frame;
and constructing a rule change degree calculation model according to the texture structure similarity characteristic value and the local texture rule similarity, and obtaining the rule change degree of each texture characteristic window corresponding to the target B ultrasonic image video frame through the rule change degree calculation model.
3. The medical remote intelligent monitoring method according to claim 2, wherein the method for obtaining the local texture rule similarity comprises the following steps:
for any one contrast B-mode ultrasonic image video frame:
taking a pixel point corresponding to the centroid position of the texture feature area of the previous B-ultrasonic image video frame of the contrast B-ultrasonic image video frame as a corresponding central pixel point; taking each pixel point in the preset neighborhood range of the central pixel point as a neighborhood pixel point;
traversing each neighborhood pixel point through a sliding window with the same shape and size as the texture feature area to obtain a sliding window traversing area corresponding to each neighborhood pixel point;
and taking the maximum value of the structural similarity coefficient between all the sliding window traversing areas and the texture feature areas of the contrast B-ultrasonic image video frames as the local texture rule similarity corresponding to each contrast B-ultrasonic image video frame.
4. The medical remote intelligent monitoring method according to claim 2, wherein the rule change degree calculation model comprises:
$R_j=\frac{1}{N}\sum_{i=1}^{N}S_{j,i}\cdot e^{-T_{j,i}}$; wherein $R_j$ is the regular change degree of the $j$-th texture feature window corresponding to the target B-ultrasonic image video frame, $N$ is the number of contrast B-ultrasonic image video frames corresponding to the target B-ultrasonic image video frame, $S_{j,i}$ is the local texture rule similarity corresponding to the $i$-th contrast B-ultrasonic image video frame under the $j$-th texture feature window, $T_{j,i}$ is the texture structure similarity feature value corresponding to the $i$-th contrast B-ultrasonic image video frame under the $j$-th texture feature window, and $e^{(\cdot)}$ is the exponential function with the natural constant as its base.
5. The method for remotely and intelligently monitoring medical treatment according to claim 1, wherein the method for acquiring the reference importance level comprises the following steps:
calculating the average frequency of each texture feature window in the target B ultrasonic image video frame through Fourier transformation; and taking the average frequency as the reference importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame.
6. The method for remotely and intelligently monitoring medical treatment according to claim 1, wherein the method for acquiring the pixel level importance degree comprises the following steps:
sequentially taking each pixel level as a target pixel level in all pixel levels of the target B ultrasonic image video frame;
obtaining the correction importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame according to the reference importance degree and the regular change degree, wherein the reference importance degree and the regular change degree are positively correlated with the correction importance degree;
acquiring the number proportion of pixel points of a target pixel level in each texture feature window corresponding to a target B ultrasonic image video frame; taking the product of the number duty ratio and the correction importance degree as a reference duty degree of a target pixel level in each texture feature window; and taking the accumulated sum of the reference occupation importance degree of the target pixel level in all texture feature windows corresponding to the target B-ultrasonic image video frame as the pixel level importance degree of the target pixel level in the target B-ultrasonic image video frame.
7. The medical remote intelligent monitoring method according to claim 6, wherein the method for acquiring the correction importance degree comprises the following steps:
taking the normalized value of the product between the reference importance degree and the rule change degree as the correction importance degree of each texture feature window corresponding to the target B-ultrasonic image video frame.
8. The medical remote intelligent monitoring method according to claim 1, wherein the method for acquiring the encrypted B-mode ultrasonic image video comprises the following steps:
weighting the pixel level importance degree of each pixel level in the target B-ultrasonic image video frame with the occurrence frequency of the corresponding pixel points, and then encrypting through a Huffman coding algorithm to obtain an encrypted target B-ultrasonic image video frame; and obtaining the encrypted B-ultrasonic image video from all the encrypted target B-ultrasonic image video frames.
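A minimal sketch of the weighted Huffman step, assuming the code weight of each pixel level is its occurrence frequency multiplied by its pixel level importance degree; it builds a plain Huffman code table rather than a complete encryption scheme, and all names are illustrative.

```python
import heapq
import itertools
from collections import Counter

def huffman_table(weights):
    """Huffman code table (symbol -> bit string) for a symbol-to-weight map."""
    tie = itertools.count()                      # breaks ties between equal weights
    heap = [[w, next(tie), {sym: ""}] for sym, w in weights.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                           # degenerate single-symbol frame
        return {sym: "0" for sym in weights}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, next(tie), merged])
    return heap[0][2]

def encode_frame(pixels, importance):
    """Weight each pixel level's frequency by its importance, then Huffman-encode."""
    freq = Counter(pixels)
    weights = {lvl: freq[lvl] * importance.get(lvl, 1.0) for lvl in freq}
    table = huffman_table(weights)
    return "".join(table[p] for p in pixels), table
```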
9. The medical remote intelligent monitoring method according to claim 1, wherein the method for acquiring the texture feature window comprises the following steps:
performing super-pixel segmentation on the target B-ultrasonic image video frame to obtain at least two super-pixel blocks; and taking the window of the area corresponding to each super-pixel block as a texture feature window.
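A minimal sketch of the texture feature window step, assuming SLIC superpixels (the claim does not name a segmentation algorithm) and the bounding box of each super-pixel block as its window; it assumes scikit-image 0.19 or later for the channel_axis argument, and the names are illustrative.

```python
import numpy as np
from skimage.segmentation import slic

def texture_feature_windows(frame, n_segments=100):
    """Split a grayscale B-ultrasonic frame into superpixel blocks and return
    the rectangular window (bounding-box sub-image) of each block."""
    labels = slic(frame, n_segments=n_segments, channel_axis=None, start_label=0)
    windows = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        windows.append(frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1])
    return windows
```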
10. A medical remote intelligent monitoring system comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 9.
CN202311403577.2A 2023-10-27 2023-10-27 Medical remote intelligent monitoring method and system Active CN117156182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311403577.2A CN117156182B (en) 2023-10-27 2023-10-27 Medical remote intelligent monitoring method and system

Publications (2)

Publication Number Publication Date
CN117156182A true CN117156182A (en) 2023-12-01
CN117156182B CN117156182B (en) 2024-01-23

Family

ID=88908367

Country Status (1)

Country Link
CN (1) CN117156182B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160203612A1 (en) * 2015-01-08 2016-07-14 Thomson Licensing Method and apparatus for generating superpixels for multi-view images
US20180253839A1 (en) * 2015-09-10 2018-09-06 Magentiq Eye Ltd. A system and method for detection of suspicious tissue regions in an endoscopic procedure
KR20210071358A (en) * 2019-12-06 2021-06-16 주식회사 유피웍스 System and method for encrypting super-pixels
CN113852822A (en) * 2021-09-23 2021-12-28 长春理工大学 Encrypted domain image steganography method and system based on Huffman coding
WO2023092342A1 (en) * 2021-11-24 2023-06-01 上海健康医学院 Medical image privacy protection method based on ciphertext image restoration
CN115278369A (en) * 2022-06-27 2022-11-01 河南天一智能信息有限公司 Data transmission method of medical remote monitoring system
CN115225897A (en) * 2022-07-14 2022-10-21 河南职业技术学院 Video multi-level encryption transmission method based on Huffman coding

Also Published As

Publication number Publication date
CN117156182B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN110060313B (en) Image artifact correction method and system
KR100856042B1 (en) Ultrasound imaging system for extracting volume of object from ultrasound image and method for the same
CN108288058B (en) Improved wavelet threshold knee joint swing signal denoising algorithm
US20230042000A1 (en) Apparatus and method for quantification of the mapping of the sensory areas of the brain
CN113034641B (en) Sparse angle CT reconstruction method based on wavelet multi-scale convolution feature coding
CN111640111A (en) Medical image processing method, device and storage medium
CN106780413B (en) Image enhancement method and device
Brindha et al. Region based lossless compression for digital images in telemedicine application
JPWO2019216417A1 (en) Model setting device, blood pressure measuring device, and model setting method
CN116363021A (en) Intelligent collection system for nursing and evaluating wound patients
CN117156182B (en) Medical remote intelligent monitoring method and system
CN109389567B (en) Sparse filtering method for rapid optical imaging data
CN117649357B (en) Ultrasonic image processing method based on image enhancement
CN117201800B (en) Medical examination big data compression storage system based on space redundancy
Lalli et al. A development of knowledge-based inferences system for detection of breast cancer on thermogram images
Hussin et al. A visual enhancement quality of digital medical image based on bat optimization
KR20100115143A (en) Method for ventricle segmentation using radial threshold determination
Cao et al. Medical image fusion based on GPU accelerated nonsubsampled shearlet transform and 2D principal component analysis
KR101581688B1 (en) Apparatus and method of adaptive transmission for medical image
Wei et al. Quantification of motion artifacts in 4DCT using global Fourier analysis
Asokan et al. Medical image fusion using stationary wavelet transform with different wavelet families
US20110075888A1 (en) Computer readable medium, systems and methods for improving medical image quality using motion information
CN110010228A (en) A kind of facial skin rendering algorithm based on image analysis
Yang et al. Magnetic resonance imaging under image enhancement algorithm to analyze the clinical value of placement of drainage tube on incision healing after hepatobiliary surgery
Kaur et al. Hybrid discrete wavelet enhancement model for brain NCCT images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant