CN114863317A - Method for processing endoscope image, image platform, computer device and medium

Method for processing endoscope image, image platform, computer device and medium

Info

Publication number
CN114863317A
CN114863317A
Authority
CN
China
Prior art keywords
frame picture
image
frame
picture
image stabilization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210341026.7A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Weimi Medical Instrument Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Weimi Medical Instrument Co ltd
Priority to CN202210341026.7A
Publication of CN114863317A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Endoscopes (AREA)

Abstract

Embodiments of this specification provide a method for processing endoscope images, an image platform, a computer device, and a medium. The method comprises the following steps: acquiring an image signal captured by an endoscope, the image signal comprising a plurality of frame pictures; performing correlation analysis on the frame pictures to determine a target image signal, the target image signal comprising the frame pictures that need image stabilization; and performing image stabilization on the target image signal to obtain an image stabilization result. The embodiments can solve the problems in the prior art that, in the presence of sudden shaking, a doctor's need for image stabilization while operating the endoscope cannot be effectively matched, and that adaptability is poor.

Description

Method for processing endoscope image, image platform, computer device and medium
Technical Field
The present application relates to the field of endoscopic surgery automation technologies, and in particular, to a method for processing an endoscopic image, an image platform, a computer device, and a medium.
Background
During endoscopic surgery, shaking of the endoscope can disturb the acquired image signal, blurring the image and interfering with the doctor's work.
At present, image stabilization is provided mainly by mechanical means, i.e., from the structural side of the endoscope. Limited by the precision of the mechanical structure, this approach cannot respond effectively to sudden shaking, so it cannot match the doctor's need for image stabilization while operating the endoscope, and its adaptability is poor.
A solution to these technical problems is therefore needed.
Disclosure of Invention
Embodiments of this specification provide a method for processing endoscope images, an image platform, a computer device, and a medium, which can solve the problems in the prior art that, in the presence of sudden shaking, a doctor's need for image stabilization while operating the endoscope cannot be effectively matched, and that adaptability is poor.
A method of processing endoscopic images, comprising: acquiring an image signal captured by an endoscope, the image signal comprising a plurality of frame pictures; performing correlation analysis on the frame pictures to determine a target image signal, the target image signal comprising the frame pictures that need image stabilization; and performing image stabilization on the target image signal to obtain an image stabilization result.
In one embodiment, performing correlation analysis on the frame pictures to determine the target image signal includes: acquiring a first frame picture and a second frame picture from the image signal, the second frame picture being adjacent to and after the first frame picture; calculating the similarity between the first frame picture and the second frame picture; determining the second frame picture as a frame picture that needs image stabilization when the similarity is smaller than a preset threshold; and determining the target image signal based on all frame pictures that need image stabilization.
In one embodiment, calculating the similarity between the first frame picture and the second frame picture includes: extracting feature points from each frame picture with the SIFT algorithm to obtain the feature information corresponding to each frame picture; and calculating the similarity between the first frame picture and the second frame picture based on their corresponding feature information.
In one embodiment, performing correlation analysis on the frame pictures to determine the target image signal includes: acquiring the angular velocity corresponding to a first frame picture in the image signal, the angular velocity being acquired by a gyroscope arranged on the endoscope; determining the first frame picture as a frame picture that needs image stabilization when the angular velocity is less than or equal to a preset threshold; and determining the target image signal based on all frame pictures that need image stabilization.
In one embodiment, performing image stabilization on the target image signal to obtain an image stabilization result includes: dividing the target image signal into different time-series segments; performing image stabilization on the frame pictures in each time-series segment to obtain the image stabilization result for that segment; and splicing the per-segment results to obtain the image stabilization result for the target image signal.
In one embodiment, dividing the target image signal into different time-series segments includes: acquiring the angular velocity corresponding to each frame picture in the target image signal; determining reference frame pictures in the target image signal according to the relation between the angular velocity and a specified threshold; determining the frame pictures from the first reference frame picture up to a first frame picture as the first time-series segment, the first frame picture being adjacent to and immediately before the second reference frame picture, and the second reference frame picture being located after the first reference frame picture; and determining the frame pictures from the second reference frame picture up to a second frame picture as the second time-series segment, the second frame picture being adjacent to and immediately before the third reference frame picture, and the third reference frame picture being located after the second reference frame picture.
In one embodiment, determining reference frame pictures in the target image signal according to the relation between angular velocity and a specified threshold includes: determining the frame picture corresponding to a first angular velocity as the first reference frame picture, the first angular velocity being the first angular velocity in the target image signal that is smaller than the specified threshold; determining the frame picture corresponding to a second angular velocity as the second reference frame picture, where the second angular velocity is smaller than the specified threshold, its frame picture is located after the frame picture of the first angular velocity, at least one first key frame picture lies between those two frame pictures (the angular velocity of a first key frame picture being not less than the specified threshold), and no frame picture with an angular velocity smaller than the specified threshold exists between the second reference frame picture and any first key frame picture; and determining the frame picture corresponding to a third angular velocity as the third reference frame picture, where the third angular velocity is smaller than the specified threshold, its frame picture is located after the frame picture of the second angular velocity, at least one second key frame picture lies between those two frame pictures (the angular velocity of a second key frame picture being not less than the specified threshold), and no frame picture with an angular velocity smaller than the specified threshold exists between the third reference frame picture and any second key frame picture.
In one embodiment, performing image stabilization on the frame pictures in each time-series segment to obtain the image stabilization result for each segment includes: extracting feature points from each frame picture in the target time-series segment to obtain the feature information corresponding to each frame picture; determining the image stabilization result for each frame picture based on its feature information; and splicing the per-frame results to obtain the image stabilization result for the target time-series segment.
In one embodiment, determining the image stabilization result for each frame picture based on its feature information includes: calculating the translation of the k-th frame picture relative to the (k-1)-th frame picture from their corresponding feature information, where k ≥ 1; determining the translation of the k-th frame picture relative to the reference frame picture from all translations obtained before the k-th frame picture; translating the k-th frame picture by its translation relative to the reference frame picture to obtain a preliminary image stabilization result for the k-th frame picture; and processing the preliminary result with a preset transformation matrix to obtain the image stabilization result for the k-th frame picture.
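The cumulative-translation step described above can be sketched as follows (a minimal illustration in Python, not the patent's implementation; the function names and the (dx, dy) tuple representation of a translation are assumptions):

```python
def accumulate_translation(interframe_shifts):
    """Given the (dx, dy) translation of each frame k relative to frame k-1,
    return the translation of each frame relative to the reference frame
    by summing all inter-frame shifts obtained so far."""
    total_dx, total_dy = 0.0, 0.0
    relative = []
    for dx, dy in interframe_shifts:
        total_dx += dx
        total_dy += dy
        relative.append((total_dx, total_dy))
    return relative

def stabilize_shift(shift):
    """Preliminary stabilization: move the frame back by its accumulated
    shift, i.e. apply the opposite translation."""
    dx, dy = shift
    return (-dx, -dy)
```

For example, inter-frame shifts of (1, 0), (2, 1), (-1, -1) accumulate to (1, 0), (3, 1), (2, 0) relative to the reference frame, and the third frame would be translated by (-2, 0) before the transformation-matrix step.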
In one embodiment, after the image stabilization result for the target time-series segment is obtained, the method further includes: acquiring a first frame picture and a second frame picture from the image stabilization result, the second frame picture being adjacent to and after the first frame picture; calculating the similarity between the first frame picture and the second frame picture; and replacing the second frame picture with the reference frame picture of the target time-series segment when the similarity is smaller than a preset threshold.
In one embodiment, each frame picture corresponds to a time identifier that marks its position in the image signal. After the image stabilization result is obtained, the method further includes: using the time identifiers, replacing the frame pictures at the corresponding positions in the image signal with the image stabilization result to obtain the stabilized endoscope image; and outputting the stabilized endoscope image.
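The time-identifier replacement above can be sketched as follows (an illustrative assumption: frames are held in dicts keyed by their time identifier; this is not the patent's implementation):

```python
def merge_stabilized(frames, stabilized):
    """Replace frames of the original signal with their stabilized versions,
    matched by time identifier, and return the frames in time order.

    frames     -- dict mapping time identifier -> original frame picture
    stabilized -- dict mapping time identifier -> stabilized frame picture
    """
    merged = dict(frames)
    merged.update(stabilized)  # stabilized frames overwrite originals
    return [merged[t] for t in sorted(merged)]
```

For instance, with originals at identifiers 0, 1, 2 and a stabilized replacement for identifier 1, the output keeps frames 0 and 2 and substitutes the stabilized frame 1.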
An image platform, comprising an endoscope and an image processing apparatus, wherein the image processing apparatus is configured to process the image signal acquired by the endoscope using the steps of any of the method embodiments in this specification.
A computer device, comprising a processor and a memory storing processor-executable instructions that, when executed by the processor, implement the steps of any of the method embodiments in this specification.
A computer-readable storage medium having stored thereon computer instructions that, when executed by a processor, implement the steps of any of the method embodiments in this specification.
By acquiring an image signal comprising a plurality of frame pictures captured by an endoscope and performing correlation analysis on those frame pictures, the image signal can be divided into a video segment that needs image stabilization and a video segment that does not, which optimizes the computer's resource allocation and reduces the resources occupied by image stabilization. Once the signal has been divided in this way, performing image stabilization only on the target image signal solves the problems in the prior art that, in the presence of sudden shaking, a doctor's need for image stabilization while operating the endoscope cannot be effectively matched, and that adaptability is poor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, are incorporated in and constitute a part of this specification, and are not intended to limit the specification. In the drawings:
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a method for processing endoscopic images provided herein;
FIG. 2 is a schematic diagram of an endoscope provided herein for acquiring image signals;
fig. 3 is a schematic flow chart of extracting frame feature points by using a SIFT algorithm according to the present disclosure;
fig. 4 is a schematic diagram illustrating detection of extreme points in the SIFT algorithm provided in the present specification;
FIG. 5 is a schematic diagram of a target image signal divided into time-series segments provided herein;
fig. 6 is a schematic diagram for determining a translation vector between adjacent frame pictures provided in the present specification;
fig. 7 is a diagram illustrating motion compensation for each frame picture in a target time sequence segment according to the present disclosure;
FIG. 8 is a schematic diagram of the reverse compensation of the results after preliminary image stabilization provided herein;
FIG. 9 is a schematic illustration of an output stabilized endoscopic image provided herein;
FIG. 10 is a block diagram of an embodiment of an apparatus for processing endoscopic images provided by the present specification;
FIG. 11 is a schematic diagram of an image platform provided herein;
FIG. 12 is a schematic illustration of a physician's console provided herein;
FIG. 13 is a schematic illustration of a surgical platform provided herein;
fig. 14 is a block diagram showing a hardware configuration of an embodiment of an endoscopic image processing server provided in the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in its embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person skilled in the art from one or more embodiments of this specification without inventive effort shall fall within the scope of protection of these embodiments.
The following describes the embodiments of this specification using a specific application scenario as an example. Fig. 1 is a schematic flowchart of an embodiment of the method for processing endoscope images provided in this specification. Although this specification provides the method steps and apparatus structures shown in the following embodiments and figures, the method or apparatus may include more or fewer steps or modules based on conventional or non-inventive effort.
The embodiments provided in this specification can be applied to an image processing apparatus (for example, a processor that supports image data processing, or an image processing host). The image processing apparatus may be provided in a client, a server, an image platform, and the like. The client may include terminal devices such as smartphones and tablet computers. The server may be a single computer device, a server cluster formed by multiple servers, or a distributed server architecture.
It should be noted that, in the technical solution of the present application, the acquisition, storage, use, and processing of data all comply with the relevant provisions of national laws and regulations. The following description of a specific embodiment does not limit the technical solutions in other scenarios to which this specification can be extended. As shown in fig. 1, an embodiment of the method for processing endoscope images may include the following steps.
S101: acquiring an image signal acquired by an endoscope; the image signal includes a plurality of frame pictures.
During laparoscopic surgery, the endoscope on the one hand provides the doctor with visual information inside the abdominal cavity, and on the other hand is operated by the doctor and may move and stop at any time as the operation proceeds.
In some embodiments, the endoscope may specifically be a rigid endoscope, a flexible endoscope, or the like. The endoscope has an image acquisition component, such as a camera (e.g., a binocular camera) or an image sensor (e.g., a CMOS image sensor), which can be used to acquire image signals in the surgical environment. An image signal can also be understood as a video signal and may comprise a plurality of frame pictures; one frame picture is one frame of image. The frame pictures in the image signal are arranged in acquisition order. Fig. 2 is a schematic diagram of the endoscope acquiring image signals: light emitted by a cold light source first passes through the endoscope body to illuminate the surgical site, and the image signal during the procedure is then captured by a CMOS image sensor mounted on the endoscope tube.
In some embodiments, the image processing apparatus may acquire the image signal captured by the endoscope. For example, the endoscope may be connected to the image processing apparatus, so that after capturing the image signal during the procedure through the CMOS image sensor mounted on its tube, the endoscope can transmit the signal to the image processing apparatus for processing. It should be understood that the above description is only exemplary; the way the image processing apparatus acquires the image signal is not limited to these examples. Other variations within the spirit of the present application are possible, and as long as the functions and effects achieved are the same as or similar to those of the present application, they are intended to fall within its scope of protection.
S103: performing correlation analysis on the frame picture to determine a target image signal; the target image signal comprises a frame picture needing image stabilization.
In a surgical scene using an endoscope, the image signal acquired by the endoscope can be blurred by movement of the endoscope, the patient's breathing, organ motion, and the like. The embodiments of this specification therefore apply image stabilization to the image signal acquired by the endoscope to reduce blurring.
In a surgical scene using an endoscope, the doctor may deliberately move the endoscope rapidly (a fast motion actively triggered by the user), and while inserting or withdrawing the endoscope the doctor has no need for a clear view. Accordingly, in some embodiments of this specification, after the image signal acquired by the endoscope is obtained, correlation analysis can be performed on its frame pictures to determine which frame pictures need image stabilization. This matches the doctor's need for image stabilization while operating the endoscope and optimizes the computer's resource allocation. The correlation analysis determines, for each frame picture, whether the endoscope was in user-triggered fast motion when that frame picture was acquired: if it was, the frame picture does not need image stabilization; if it was not, the frame picture does. A user-triggered fast motion simply means the user actively controlling the endoscope to move quickly.
In the embodiments of this specification, whether the endoscope was in user-triggered fast motion when a frame picture was acquired, i.e., whether that frame picture needs image stabilization, can be determined by calculating the similarity between adjacent frame pictures.
Specifically, in some embodiments, performing correlation analysis on the frame pictures to determine the target image signal may include: acquiring a first frame picture and a second frame picture from the image signal, the second frame picture being adjacent to and after the first frame picture; calculating the similarity between the first frame picture and the second frame picture; determining the second frame picture as a frame picture that needs image stabilization when the similarity is smaller than a preset threshold; and determining the target image signal based on all frame pictures that need image stabilization. The first and second frame pictures are consecutive frames in the image signal. Similarity can also be understood as a degree of association. The preset threshold may be set according to the actual scene; this specification does not limit it. Arranging all frame pictures that need image stabilization in acquisition order yields the target image signal.
In some embodiments, calculating the similarity between the first frame picture and the second frame picture may include: extracting feature points from each frame picture with the SIFT algorithm to obtain the feature information corresponding to each frame picture; and calculating the similarity between the first and second frame pictures based on their corresponding feature information. Each frame picture may contain multiple feature points, and every frame picture has the same number of feature points.
SIFT (Scale-Invariant Feature Transform) is an algorithm in the field of image processing that detects and describes local features in an image; it can also be used to detect key points and serves as a local feature descriptor. As shown in fig. 3, the flow of extracting frame-picture feature points with the SIFT algorithm provided in this specification may include the following steps: (1) image signal input: input a frame picture of the image signal; (2) extreme-point detection: identify potential scale- and rotation-invariant extreme points in the frame picture with a difference-of-Gaussian function; (3) extreme-point localization: remove low-contrast extreme points and unstable edge responses, and determine the feature points; (4) orientation assignment: assign one or more orientations to each feature point based on the local image gradient directions; (5) descriptor generation: divide the pixels around each feature point into blocks, compute a gradient histogram per block, and generate the feature-point description vector; (6) whole-image descriptors: apply the above operations to all feature points in the frame picture to generate the description vectors corresponding to the frame picture. The feature points consist of spatial local extreme points, and the feature information comprises the description vectors corresponding to the frame picture.
It should be noted that, when searching for extreme points, each pixel is compared with all of its neighbours: it is an extreme point (feature point) when it is larger (or smaller) than all of its neighbours in both the image domain and the scale domain. As shown in fig. 4, a schematic diagram of extreme-point detection in the SIFT algorithm, the comparison range for each pixel is a 3 × 3 × 3 cube: the central detection point (the pixel currently examined) is compared with its 8 neighbours at the same scale and the 9 × 2 points at the adjacent upper and lower scales, 26 points in total, which ensures that extreme points are detected in both scale space and the two-dimensional image space.
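The 26-neighbour comparison can be sketched as follows (a simplified illustration, not the patent's implementation, under the assumption that the difference-of-Gaussian pyramid is a plain nested list indexed as `dog[scale][row][col]`):

```python
def is_extremum(dog, s, r, c):
    """Return True if dog[s][r][c] is strictly greater than, or strictly
    smaller than, all 26 neighbours in the 3 x 3 x 3 block spanning the
    same scale and the two adjacent scales."""
    centre = dog[s][r][c]
    neighbours = [
        dog[s + ds][r + dr][c + dc]
        for ds in (-1, 0, 1)
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if not (ds == 0 and dr == 0 and dc == 0)  # skip the centre itself
    ]
    return all(centre > n for n in neighbours) or all(centre < n for n in neighbours)
```

A real implementation would run this test at every interior position of every difference-of-Gaussian octave; here only the 26-point comparison itself is shown.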
For example, in some implementation scenarios, after the image signal is acquired, feature points of each frame picture can be extracted with the SIFT algorithm to obtain the description vector of each frame picture, and the similarity of two consecutive frame pictures can then be calculated from their description vectors. The similarity measure includes, but is not limited to, Euclidean distance and cosine similarity.
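As a minimal illustration of one of these measures (not the patent's code; the function name and the flat-vector representation of a descriptor are assumptions), the cosine similarity between two per-frame description vectors can be computed as:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length description vectors:
    dot(u, v) / (|u| * |v|). Identical directions score 1.0, orthogonal
    directions score 0.0."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

The resulting score would then be compared against the preset threshold to decide whether the later frame picture belongs to the target image signal.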
Further, after the similarity of two consecutive frame pictures is obtained, it can be judged whether that similarity is smaller than a preset threshold. If so, it is judged that the endoscope was in user-triggered fast motion when the later frame picture was acquired, and that frame picture does not need image stabilization; if not, it can be determined that the endoscope was not in user-triggered fast motion when the later frame picture was acquired, and the later frame picture can be determined as a frame picture that needs image stabilization.
In the embodiments of this specification, extracting and matching feature points with the SIFT algorithm is fast and accurate, with low power consumption.
It should be understood that the above description is only exemplary; the way frame-picture feature points are extracted and the way similarity between adjacent frame pictures is calculated are not limited to these examples. Other variations within the spirit of the present application are possible, and as long as the functions and effects achieved are the same as or similar to those of the present application, they are intended to fall within its scope of protection.
In some embodiments, a gyroscope may also be arranged on the endoscope to acquire the angular velocity corresponding to each frame picture. The angular velocity acquired by the gyroscope can then be used to determine whether the endoscope was in user-triggered fast motion when the corresponding frame picture was acquired, i.e., whether that frame picture needs image stabilization. Each frame picture corresponds to one angular velocity.
Specifically, in some embodiments, performing correlation analysis on the frame pictures to determine the target image signal may include: acquiring the angular velocity corresponding to a first frame picture in the image signal, the angular velocity being acquired by a gyroscope arranged on the endoscope; determining the first frame picture as a frame picture that needs image stabilization when the angular velocity is less than or equal to a preset threshold; and determining the target image signal based on all frame pictures that need image stabilization. The preset threshold may be set according to the actual scene; this specification does not limit it. Arranging all frame pictures that need image stabilization in acquisition order yields the target image signal.
For example, in some implementation scenarios, whether the angular velocity of the current frame picture is greater than a preset threshold β can be judged from the gyroscope's output. If it is, it can be judged that the endoscope was in user-triggered fast motion when the current frame picture was acquired, and the current frame picture does not need image stabilization; otherwise, it is judged that the endoscope was not in user-triggered fast motion, and the current frame picture is determined as a frame picture that needs image stabilization.
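This threshold test can be sketched as follows (an illustrative assumption: there is exactly one gyroscope reading per frame picture; the function name is hypothetical):

```python
def select_frames_for_stabilization(angular_velocities, beta):
    """Return the indices of frame pictures whose angular velocity does not
    exceed the preset threshold beta. Frames above beta are treated as
    user-triggered fast motion and are skipped, as described above."""
    return [i for i, omega in enumerate(angular_velocities) if omega <= beta]
```

For example, with readings [0.1, 2.0, 0.3] and beta = 1.0, frames 0 and 2 would be marked for stabilization, while frame 1 would be treated as deliberate fast motion.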
In the embodiment of the present specification, by performing correlation analysis on frame pictures in image signals, the image signals acquired by an endoscope are divided into a video segment (i.e., a target image signal) that needs image stabilization and a video segment that does not need image stabilization, so that resource allocation of a computer can be optimized, and less resources are occupied for image stabilization.
S105: and carrying out image stabilization processing on the target image signal to obtain an image stabilization processing result.
In this embodiment of the present description, after performing correlation analysis on a frame picture and determining a target image signal, image stabilization processing may be performed on the target image signal to obtain an image stabilization processing result.
In some embodiments, performing image stabilization on the target image signal to obtain an image stabilization result may include: dividing the target image signal into different time sequence segments; carrying out image stabilization processing on the frame pictures in each time sequence section to obtain an image stabilization processing result corresponding to each time sequence section; and splicing the image stabilization processing results corresponding to each time sequence section to obtain the image stabilization processing result corresponding to the target image signal.
In some embodiments, dividing the target image signal into different time series segments may include: acquiring angular velocity corresponding to each frame of picture in the target image signal; determining a reference frame picture in the target image signal according to the relation between the angular speed and a specified threshold value; determining a frame picture from a first reference frame picture to a first frame picture as a first time sequence period; the first frame picture is adjacent to the second reference frame picture and is positioned in front of the second reference frame picture; the second reference frame picture is located behind the first reference frame picture; determining frame pictures from the second reference frame picture to the second frame picture as a second time-series segment; the second frame picture is adjacent to the third reference frame picture and is positioned in front of the third reference frame picture; the third reference frame picture is located after the second reference frame picture.
In some embodiments, determining a reference frame picture in the target image signal according to a relationship of angular velocity to a specified threshold may include: determining a frame picture corresponding to the first angular speed as a first reference frame picture; the first angular velocity represents a first angular velocity in the target image signal that is less than a specified threshold; determining a frame picture corresponding to the second angular velocity as a second reference frame picture; the second angular velocity is smaller than a specified threshold, and a frame picture corresponding to the second angular velocity is positioned behind a frame picture corresponding to the first angular velocity; at least one first key frame picture is included between the frame picture corresponding to the second angular velocity and the frame picture corresponding to the first angular velocity, and the angular velocity corresponding to the first key frame picture is not less than a specified threshold value; between the second reference frame picture and any one first key frame picture, no frame picture with angular velocity smaller than a specified threshold exists; determining a frame picture corresponding to the third angular velocity as a third reference frame picture; the third angular velocity is smaller than a specified threshold value, and a frame picture corresponding to the third angular velocity is positioned behind a frame picture corresponding to the second angular velocity; at least one second key frame picture is included between the frame picture corresponding to the third angular velocity and the frame picture corresponding to the second angular velocity, and the angular velocity corresponding to the second key frame picture is not less than a specified threshold value; and between the third reference frame picture and any one second key frame picture, no frame picture with the angular speed less than a specified threshold exists. 
The time-series period may also be referred to as a jitter time period or a jitter time series. The reference frame picture may also be referred to as a reference frame. And the angular speeds corresponding to the first key frame picture and the second key frame picture are not less than a specified threshold value. Each time series segment includes a reference frame picture. The designated threshold may be set according to an actual scene, which is not limited in this specification.
Specifically, for example, in some implementation scenarios, the angular velocity corresponding to each frame picture in the target image signal may be acquired through a gyroscope installed at the handheld end of the endoscope, and the first frame picture whose angular velocity is smaller than a threshold α is taken as the current reference frame. Further, after the current reference frame is determined and one or more key frame pictures with angular velocity greater than or equal to the threshold α have appeared, the first subsequent frame picture whose angular velocity falls below the threshold α is taken as the next reference frame. By analogy, the next reference frame can in turn serve as the current reference frame for determining the following reference frame. The threshold α may be set according to the actual scene, which is not limited in this specification.
After the reference frames in the target image signal are determined, the frame pictures from the current reference frame up to the frame preceding the next reference frame may be determined as one time-series segment. It can be understood that an angular velocity of the current frame picture smaller than the threshold α indicates that the current frame picture is not shaken; if the angular velocity of the next frame picture is also smaller than the threshold α, the two frames are considered equally unshaken and can be classified into the same time-series segment. If one or more frame pictures with angular velocity greater than the threshold α appear in between, the first frame picture whose angular velocity again falls below the threshold α can be regarded as the reference frame of the next time-series segment. FIG. 5 is a schematic diagram of dividing a target image signal into time-series segments provided in this specification, where Seq_n denotes the n-th time-series segment and R_n denotes the reference frame of the n-th time-series segment.
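The segmentation rule described above can be sketched as follows; this is a minimal illustration assuming one angular velocity per frame picture, with each returned segment starting at its reference frame and running up to the frame preceding the next reference frame:

```python
def split_into_segments(angular_velocities, alpha):
    # A new segment opens at each reference frame: the first frame whose
    # angular velocity is below alpha, and thereafter the first frame
    # that drops back below alpha after at least one frame at or above
    # alpha. Key frames (>= alpha) between reference frames stay in the
    # segment of the reference frame that precedes them.
    segments = []
    current = None
    above_seen = True  # lets the very first sub-threshold frame open a segment
    for i, w in enumerate(angular_velocities):
        if w < alpha:
            if above_seen:
                if current is not None:
                    segments.append(current)
                current = [i]          # i is a reference frame
                above_seen = False
            else:
                current.append(i)
        else:
            above_seen = True
            if current is not None:
                current.append(i)
    if current is not None:
        segments.append(current)
    return segments

segs = split_into_segments([0.5, 0.4, 2.0, 2.1, 0.3, 0.2, 3.0, 0.1], alpha=1.0)
```

Here the reference frames are the first element of each segment, matching R_n in the figure description.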
Further, after the target image signal is divided into different time sequence segments, image stabilization processing may be performed on the image in each time sequence segment to obtain an image stabilization processing result corresponding to each time sequence segment.
In some embodiments, performing image stabilization on the frame pictures in each time sequence segment to obtain an image stabilization result corresponding to each time sequence segment may include: extracting the characteristic points of each frame of picture in the target time sequence segment to obtain the characteristic information corresponding to each frame of picture; determining an image stabilization processing result corresponding to each frame of picture based on the characteristic information corresponding to each frame of picture; and splicing the image stabilization processing results corresponding to the frames to obtain the image stabilization processing results corresponding to the target time sequence section. The target time-series segment may be any time-series segment. The feature information includes feature point description vectors corresponding to the frame pictures.
In some embodiments, determining the image stabilization processing result corresponding to each frame picture based on the feature information corresponding to each frame picture may include: calculating translation information of the k-th frame picture relative to the (k-1)-th frame picture according to the feature information corresponding to the k-th frame picture and the feature information corresponding to the (k-1)-th frame picture, where k ≥ 1; determining translation information of the k-th frame picture relative to the reference frame picture based on all translation information obtained up to the k-th frame picture; translating the k-th frame picture based on its translation information relative to the reference frame picture to obtain a preliminary image stabilization result of the k-th frame picture; and processing the preliminary image stabilization result of the k-th frame picture with a preset transformation matrix to obtain the image stabilization processing result corresponding to the k-th frame picture. When k = 1, the corresponding frame picture is the reference frame picture. The translation information may also be referred to as a translation vector, which may include a translation magnitude and a translation direction.
For example, in some embodiments, the SIFT algorithm may be used to extract the feature points of each frame picture in the target time sequence segment, and obtain the feature point description vector corresponding to each frame picture. Then, a translation vector of the next frame picture relative to the previous frame picture, that is, a translation vector between the adjacent frame pictures, may be calculated according to the feature point description vectors corresponding to the adjacent frame pictures. Further, the translation vector of the current frame picture relative to the reference frame picture can be obtained based on the translation vectors between all adjacent frame pictures in front of the current frame picture, and then the current frame picture can be translated according to the translation vector of the current frame picture relative to the reference frame picture, so that a preliminary image stabilization result corresponding to the current frame picture is obtained.
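The accumulation of inter-frame translation vectors into a translation relative to the reference frame, and the subsequent shift of the current frame picture, can be sketched as follows. This is a simplified illustration: real feature matching (e.g., SIFT) is replaced by given per-frame shifts, and integer-pixel `np.roll` stands in for the interpolation a real pipeline would use:

```python
import numpy as np

def cumulative_translations(per_frame_shifts):
    # Entry k is the translation of frame k relative to its predecessor
    # (the reference frame's entry is (0, 0)); the running sum gives each
    # frame's translation relative to the reference frame.
    return np.cumsum(np.asarray(per_frame_shifts, dtype=float), axis=0)

def compensate(frame, shift):
    # Shift the frame by the negative of its accumulated translation
    # (integer-pixel motion compensation; a simplification).
    dy, dx = int(round(shift[0])), int(round(shift[1]))
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

shifts = [(0, 0), (1, 2), (3, -1)]
cum = cumulative_translations(shifts)
stabilized_last = compensate(np.arange(9).reshape(3, 3), (1, 0))
```

The output of `compensate` corresponds to the preliminary image stabilization result of a single frame picture.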
FIG. 6 is a schematic diagram of determining the translation vector between adjacent frame pictures provided in this specification. As shown in fig. 6, (X_{k-1}, Y_{k-1}) represents the feature point description vector corresponding to the (k-1)-th frame picture in the target time-series segment, recorded as the (k-1)-th frame, and (X_k, Y_k) represents the feature point description vector corresponding to the k-th frame picture, recorded as the k-th frame. The translation information of the k-th frame relative to the (k-1)-th frame within the target time-series segment can be written as ΔT_k = (X_k − X_{k-1}, Y_k − Y_{k-1}). Thus, the translation information of the k-th frame relative to the reference frame picture can be expressed as T_k = Σ_{i=2}^{k} ΔT_i, where ΔT_i represents the translation information of the i-th frame relative to the (i-1)-th frame within the target time-series segment, k represents the k-th frame, and i represents the i-th frame. Further, the k-th frame picture may be translated based on the translation information T_k of the k-th frame relative to the reference frame picture, that is, shifted so as to cancel the accumulated motion, thereby obtaining the preliminary image stabilization result of the k-th frame picture in the target time-series segment. The obtained preliminary image stabilization result of the k-th frame picture can be understood as the result of performing motion compensation on the k-th frame picture.
In some implementation scenarios, after the preliminary image stabilization result of the k-th frame picture in the target time-series segment is obtained, the preliminary image stabilization result may be reversely compensated with a preset transformation matrix to obtain the image stabilization processing result corresponding to the k-th frame picture. Here, reverse compensation can be understood as applying a matrix transformation frame by frame. The preset transformation matrices corresponding to the frames may be the same or different, which is not limited in this specification.
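A minimal sketch of the matrix transformation underlying the reverse compensation step is given below; the 3×3 homogeneous matrix form and its application to point coordinates are assumptions, since the specification does not fix the form of the preset transformation matrix:

```python
import numpy as np

def reverse_compensate(points, T):
    # Apply a preset 3x3 transformation matrix T to 2D point coordinates
    # in homogeneous form - a stand-in for the per-frame "reverse
    # compensation" transform on the preliminary image stabilization result.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    out = pts_h @ T.T
    return out[:, :2] / out[:, 2:3]

# A pure-translation matrix as an example preset transformation matrix.
T = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
moved = reverse_compensate(np.array([[0.0, 0.0], [1.0, 1.0]]), T)
```

As the specification notes, T1, T2, T3, … may differ per frame; the same helper applies in each case.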
Similarly, based on the same manner as above, the result after preliminary image stabilization of each frame picture in the target time sequence segment and the corresponding image stabilization processing result can be obtained.
Specifically, fig. 7 is a schematic diagram of performing motion compensation on each frame picture in a target time-series segment provided in this specification. The abscissa represents the sequence number (frame number) of the frame picture and the ordinate represents the relative displacement; the broken line is the curve corresponding to the target time-series segment, and the smooth curve is the result after motion compensation is performed on each frame picture, that is, the preliminary image stabilization result of each frame picture in the target time-series segment.
As shown in fig. 8, a schematic diagram for performing reverse compensation on a result after preliminary image stabilization provided in this specification is provided, where an original sequence represents a result after preliminary image stabilization of each frame picture in a target time sequence segment, and transformation matrices T1, T2, and T3 may be the same or different, and an output sequence represents a result after matrix transformation is performed on a result after preliminary image stabilization of each frame picture in the target time sequence segment, that is, an image stabilization processing result corresponding to each frame picture in the target time sequence segment.
In some implementation scenarios, after obtaining the image stabilization processing result corresponding to each frame picture in the target time sequence segment, the image stabilization processing results corresponding to each frame picture may be spliced to obtain the image stabilization processing result corresponding to the target time sequence segment.
Similarly, based on the same manner as described above, the image stabilization processing result corresponding to each time series segment can be obtained. Further, the image stabilization processing results corresponding to each time sequence section are spliced, so that the image stabilization processing result corresponding to the target image signal can be obtained.
In some embodiments, after obtaining the image stabilization processing result corresponding to the target time sequence segment, the method may further include: acquiring a first frame picture and a second frame picture from the image stabilization processing result; the second frame picture is adjacent to the first frame picture and is positioned behind the first frame picture; calculating the similarity between the first frame picture and the second frame picture; and replacing the second frame picture with the reference frame picture in the target time sequence segment under the condition that the similarity is smaller than a preset threshold value. The preset threshold may be set according to an actual scene, which is not limited in this specification. Specifically, for example, in some implementation scenarios, after performing reverse compensation and obtaining an image stabilization processing result corresponding to a target time sequence segment, some discontinuous frame pictures may appear in the image stabilization processing result, and at this time, the discontinuous frame pictures may be replaced by reference frame pictures in the target time sequence segment, so that the finally obtained image stabilization processing result is more accurate.
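The similarity check and replacement described above can be sketched as follows; the correlation-coefficient similarity metric is an assumption (the specification does not fix the metric), and each frame is compared against its predecessor in the input order:

```python
import numpy as np

def patch_discontinuities(frames, reference, threshold):
    # Replace any frame whose similarity to the preceding frame falls
    # below the preset threshold with the segment's reference frame
    # picture, smoothing discontinuities left by reverse compensation.
    def similarity(a, b):
        a = a.ravel().astype(float)
        b = b.ravel().astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 1.0

    out = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        out.append(reference if similarity(prev, cur) < threshold else cur)
    return out

f0 = np.arange(16, dtype=float).reshape(4, 4)
f1 = f0 + 1.0               # strongly correlated with f0 -> kept
f2 = f0[::-1, ::-1].copy()  # anti-correlated with f1 -> replaced
ref = np.zeros((4, 4))
patched = patch_discontinuities([f0, f1, f2], ref, threshold=0.5)
```

The replacement makes the finally obtained image stabilization processing result more continuous, as described above.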
In some embodiments, each frame picture may correspond to a time stamp, which may be used to identify the position of the frame picture in the image signal. Thus, after obtaining the image stabilization processing result, the method may further include: replacing the frame picture of the corresponding position in the image signal by using the image stabilization processing result based on the time identification to obtain an endoscope image after image stabilization processing; and outputting the endoscope image after image stabilization processing. The time identifier may be a corresponding time when each frame picture is acquired. For example, the endoscope may add a time stamp to each frame picture when acquiring the image signal.
For example, in some implementation scenarios, because image stabilization introduces a delay between the video segment requiring image stabilization and the video segment not requiring it, the image processing apparatus may first output the acquired image signal to the image buffering apparatus. After the image stabilization processing result corresponding to the video segment needing image stabilization is obtained, the corresponding frame pictures in the image buffering apparatus can be replaced based on the order of the time identifiers, and the endoscope image after image stabilization is output. FIG. 9 is a schematic diagram of an endoscope image after image stabilization provided in this specification, where (P_{k+n}, …, P_{k+2}, P_{k+1}, P_k, P_{k-1}, P_{k-2}, P_{k-3}, …, P_{k-n}) is the image signal acquired by the endoscope, (P_{k+1}, P_k, P_{k-1}) are the frame pictures needing image stabilization (i.e., the target image signal), (P′_{k+1}, P′_k, P′_{k-1}) is the image stabilization result of the target image signal, and (P_{k+n}, …, P_{k+2}, P′_{k+1}, P′_k, P′_{k-1}, P_{k-2}, P_{k-3}, …, P_{k-n}) is the endoscope image after image stabilization. Specifically, the image processing apparatus may acquire (P_{k+n}, …, P_{k-n}) and first output it to the image buffering apparatus. Then, the image processing apparatus performs image stabilization on the frame pictures (P_{k+1}, P_k, P_{k-1}) that need it, obtaining (P′_{k+1}, P′_k, P′_{k-1}). Further, (P′_{k+1}, P′_k, P′_{k-1}) are inserted in chronological order to replace the frame pictures at the corresponding positions in the image buffering apparatus, and (P_{k+n}, …, P_{k+2}, P′_{k+1}, P′_k, P′_{k-1}, P_{k-2}, P_{k-3}, …, P_{k-n}) is output.
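The timestamp-based insert-and-replace step can be sketched as follows; representing frames as (time identifier, payload) pairs is a hypothetical simplification of the buffering scheme described above:

```python
def splice_by_timestamp(buffered, stabilized):
    # Replace each frame in the buffered image signal whose time
    # identifier matches a stabilized frame; all other frames pass
    # through unchanged, preserving acquisition order.
    replacements = dict(stabilized)
    return [(t, replacements.get(t, p)) for t, p in buffered]

buffered = [(1, "P1"), (2, "P2"), (3, "P3")]
stabilized = [(2, "P2'")]
spliced = splice_by_timestamp(buffered, stabilized)
```

The output corresponds to the buffered sequence with the stabilized frames inserted at their original positions.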
The buffering time T may be dynamically adjusted according to the image stabilization processing time, which is not limited in this specification.
In some embodiments, after obtaining the image stabilization processed endoscope image, the image processing device may output the image stabilization processed endoscope image to the image display device, so that the medical staff performs the surgical operation based on the display result, thereby improving the surgical accuracy.
The embodiment of the specification can effectively meet the requirement of a doctor on image stabilization in the process of operating the endoscope, and the problem of unclear images such as blurring is reduced, so that the precision of the doctor in performing operation is improved.
From the above description, it can be seen that in the embodiments of the present specification, by acquiring an image signal including a plurality of frame pictures collected by an endoscope and performing correlation analysis on the frame pictures, the image signal can be divided into a video segment requiring image stabilization and a video segment not requiring it, so that the resource allocation of the computer can be optimized and the resources occupied by image stabilization reduced. After this division, performing image stabilization on the target image signal addresses the shortcoming of the prior art that, in the face of sudden jitter, the doctor's need for image stabilization while operating the endoscope cannot be effectively met and adaptability is poor.
It is to be understood that the foregoing is only exemplary, and the embodiments of the present disclosure are not limited to the above examples; other modifications may be made by those skilled in the art within the spirit of the present disclosure, and are intended to be covered by the scope of the claims as long as the functions and effects they achieve are the same as or similar to those of the present disclosure. It should be noted that references to "first", "second", etc. are used for descriptive purposes to distinguish similar objects; they imply no ordering between the objects and no indication or suggestion of relative importance.
In the present specification, each embodiment of the method is described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. Reference is made to the description of the method embodiments.
Based on the method for processing the endoscope image, one or more embodiments of the present specification further provide an apparatus for processing the endoscope image. The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of the present specification in conjunction with any necessary apparatus to implement the hardware. Based on the same innovative conception, embodiments of the present specification provide an apparatus as described in the following embodiments. Because the implementation scheme of the apparatus for solving the problem is similar to that of the method, the specific apparatus implementation in the embodiment of the present description may refer to the implementation of the foregoing method, and repeated details are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Specifically, fig. 10 is a schematic block diagram of an embodiment of an endoscopic image processing apparatus provided in this specification, and as shown in fig. 10, the endoscopic image processing apparatus provided in this specification may include: an acquisition module 1010, an analysis module 1012, and a processing module 1014.
An acquisition module 1010, which can be used for acquiring image signals collected by an endoscope; the image signal comprises a plurality of frame pictures;
an analysis module 1012, configured to perform correlation analysis on the frame picture to determine a target image signal; the target image signal comprises a frame picture needing image stabilization;
the processing module 1014 may be configured to perform image stabilization processing on the target image signal to obtain an image stabilization processing result.
It should be noted that the above-mentioned description of the apparatus according to the method embodiment may also include other embodiments, and specific implementation manners may refer to the description of the related method embodiment, which is not described herein again.
The present specification also provides an image platform. Fig. 11 is a schematic diagram of an image platform provided in this specification. The image platform includes an endoscope 1110 and an image processing device 1112; the endoscope 1110 may be used to collect image signals, and the image processing device 1112 may process the image signals collected by the endoscope using any one of the above-mentioned endoscope image processing methods. The image platform faces the side of the medical staff (or user) performing the operation.
In particular, the endoscope 1110 may be inserted into a particular surgical environment and capture image signals during a surgical procedure based on a CMOS image sensor mounted to the endoscope barrel. Further, the image processing device 1112 may process the image signal acquired by the endoscope by using the processing method of the endoscope image provided in the embodiment of the present specification. For example, image signals acquired by an endoscope may be acquired; the image signal comprises a plurality of frame pictures; performing correlation analysis on the frame picture to determine a target image signal; the target image signal comprises a frame picture needing image stabilization; and carrying out image stabilization processing on the target image signal to obtain an image stabilization processing result.
In some embodiments, an image display device 1114 (e.g., a display screen, etc.) may also be included in the image platform. In this way, after obtaining the endoscope image after the image stabilization processing, the image processing apparatus may output the endoscope image after the image stabilization processing to the image display apparatus 1114, so that the medical staff performs the surgical operation based on the display result, thereby improving the surgical accuracy. The image display device 1114 may specifically include a two-dimensional display screen, a three-dimensional display screen, or the like.
In some embodiments, the image platform may be connected to a doctor console (or a doctor's trolley), as shown in fig. 12, which is a schematic view of a doctor's console provided herein. The doctor console may include at least a monitor (e.g., a stereo monitor). Accordingly, the image platform can transmit the endoscope image after image stabilization processing to the doctor console. Further, the doctor console may present the endoscope image after image stabilization to the medical staff through a monitor. Of course, the doctor console may be further configured with VR devices (such as VR glasses, etc.) for displaying the endoscope images after image stabilization processing, so that medical staff can better observe organs, surgical instruments, etc. during the operation.
In some embodiments, the physician's console can be connected to a surgical platform, as shown in fig. 13, which is a schematic illustration of a surgical platform provided herein. The operation platform at least comprises a mechanical arm, a mechanical arm control device and other components. Specifically, medical personnel can accurately send an operation instruction to the operation platform through the doctor console according to an endoscope image which is displayed by a monitor in the doctor console and subjected to image stabilization processing. Further, the mechanical arm control device of the surgical platform can respond to the received operation instruction and control the mechanical arm to execute corresponding actions so as to complete specific surgical operations.
It is to be understood that the foregoing is only exemplary, and the embodiments of the present disclosure are not limited to the above examples, and other modifications may be made by those skilled in the art within the spirit of the present disclosure, and the scope of the present disclosure is intended to be covered by the claims as long as the functions and effects achieved by the embodiments are the same as or similar to the present disclosure.
Embodiments of the present specification further provide a computer device, including a processor and a memory for storing processor-executable instructions, where the processor, when implemented, may perform the following steps according to the instructions: acquiring an image signal acquired by an endoscope; the image signal comprises a plurality of frame pictures; performing correlation analysis on the frame picture to determine a target image signal; the target image signal comprises a frame picture needing image stabilization; and carrying out image stabilization processing on the target image signal to obtain an image stabilization processing result.
It should be noted that the description of the image platform and the computer device according to the method embodiment may also include other embodiments. The specific implementation manner may refer to the description of the related method embodiment, and is not described in detail herein.
The method embodiments provided in the present specification may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking running on a server as an example, fig. 14 is a block diagram of the hardware structure of an embodiment of a processing server for endoscope images provided in this specification; the server may be the processing device for endoscope images in the above embodiments. As shown in fig. 14, the server 10 may include one or more (only one shown) processors 120 (the processors 120 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 140 for storing data, and a transmission module 300 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 14 is only an illustration and is not intended to limit the structure of the electronic device. For example, the server 10 may also include more or fewer components than shown in fig. 14, may also include other processing hardware such as a database, a multi-level cache, or a GPU, or may have a different configuration from that shown in fig. 14.
The memory 140 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the processing method of the endoscopic image in the embodiment of the present specification, and the processor 120 executes various functional applications and data processing by executing the software programs and modules stored in the memory 140. Memory 140 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 140 may further include memory located remotely from processor 120, which may be connected to a computer terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 300 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission module 300 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission module 300 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The method or apparatus provided by this specification and described in the foregoing embodiments may implement service logic through a computer program recorded on a storage medium, where the storage medium may be read and executed by a computer so as to achieve the effects of the solutions described in the embodiments of this specification. The storage medium may include a physical device for storing information; typically, the information is digitized and then stored using electrical, magnetic, or optical media. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, and USB disks; and devices that store information optically, such as CDs or DVDs. Of course, there are other forms of readable storage media, such as quantum memory, graphene memory, and so forth.
The embodiments of the method or apparatus for processing endoscopic images provided in this specification may be implemented by a processor executing the corresponding program instructions, for example on a PC in C++ under a Windows operating system, on a Linux system, on an intelligent terminal using, for example, Android or iOS programming languages, or in processing logic based on a quantum computer, and the like.
It should be noted that descriptions of the apparatus, the computer device, and the image platform described above according to the related method embodiments may also include other embodiments, and specific implementations may refer to descriptions of corresponding method embodiments, which are not described in detail herein.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, when implementing one or more of the present description, the functions of some modules may be implemented in one or more software and/or hardware, or the modules implementing the same functions may be implemented by a plurality of sub-modules or sub-units, etc.
The present invention has been described with reference to flowcharts and/or block diagrams of the method, apparatus, computer device, and image platform according to embodiments of the invention. It will be understood that each flow and/or block can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed via the processor of the computer or other programmable data processing apparatus create means for implementing the specified functions. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
The above description is merely exemplary of one or more embodiments of the present disclosure and is not intended to limit the scope of one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims.

Claims (14)

1. A method of processing an endoscopic image, the method comprising:
acquiring an image signal acquired by an endoscope; the image signal comprises a plurality of frame pictures;
performing correlation analysis on the frame picture to determine a target image signal; the target image signal comprises a frame picture needing image stabilization;
and carrying out image stabilization processing on the target image signal to obtain an image stabilization processing result.
2. The method for processing an endoscopic image according to claim 1, wherein the determining a target image signal by performing correlation analysis on the frame picture comprises:
acquiring a first frame picture and a second frame picture from the image signal; the second frame picture is adjacent to the first frame picture and is positioned behind the first frame picture;
calculating the similarity between the first frame picture and the second frame picture;
determining the second frame picture as a frame picture needing image stabilization under the condition that the similarity is smaller than a preset threshold value;
and determining a target image signal based on all frame pictures needing image stabilization.
3. The method for processing an endoscopic image according to claim 2, wherein calculating the similarity between the first frame picture and the second frame picture comprises:
extracting feature points of each frame picture by using a SIFT algorithm to obtain feature information corresponding to each frame picture;
and calculating the similarity between the first frame picture and the second frame picture based on the characteristic information corresponding to the first frame picture and the characteristic information corresponding to the second frame picture.
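Claim 3's similarity check can be illustrated with a small sketch. The SIFT extraction step itself is assumed to have been done already (for example with OpenCV's `cv2.SIFT_create().detectAndCompute`, not shown here); each frame is represented by its descriptor array, and similarity is taken as the fraction of mutual nearest-neighbour matches. The function name, the matching rule, and the descriptor layout are illustrative assumptions, not the patent's prescribed formula.

```python
import numpy as np

def mutual_match_similarity(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Fraction of descriptors in desc_a whose nearest neighbour in desc_b
    points back at them (mutual nearest-neighbour matching).

    desc_a, desc_b: (N, D) arrays of feature descriptors for two frames,
    assumed to come from a feature extractor such as SIFT (D = 128 there,
    but any dimensionality works for this sketch)."""
    if len(desc_a) == 0 or len(desc_b) == 0:
        return 0.0
    # Pairwise Euclidean distances, shape (len(desc_a), len(desc_b)).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best match in b for each descriptor of a
    b_to_a = d.argmin(axis=0)   # best match in a for each descriptor of b
    mutual = b_to_a[a_to_b] == np.arange(len(desc_a))
    return float(mutual.mean())
```

The second frame would then be flagged as needing image stabilization whenever this score falls below the preset threshold, as in claim 2.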
4. The method for processing an endoscopic image according to claim 1, wherein the determining a target image signal by performing correlation analysis on the frame picture comprises:
acquiring an angular velocity corresponding to a first frame of picture in the image signal; wherein the angular velocity is acquired by a gyroscope disposed on the endoscope;
determining the first frame picture as a frame picture needing image stabilization under the condition that the angular velocity is less than or equal to a preset threshold value;
and determining a target image signal based on all frame pictures needing image stabilization.
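The gating in claim 4 reduces to a simple threshold test: frames captured while the endoscope is nearly still are the ones worth stabilizing, whereas a high gyroscope reading indicates a deliberate camera move that should not be "corrected". A minimal sketch, with illustrative names and threshold (readings are assumed to be non-negative magnitudes):

```python
def frames_needing_stabilization(angular_velocities, threshold=0.5):
    """Return indices of frames whose gyroscope angular velocity is at or
    below the threshold; together these form the target image signal."""
    return [i for i, w in enumerate(angular_velocities) if w <= threshold]
```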
5. The method for processing an endoscopic image according to claim 1, wherein the image stabilization processing of the target image signal to obtain an image stabilization processing result includes:
dividing the target image signal into different time sequence segments;
carrying out image stabilization processing on the frame pictures in each time sequence section to obtain an image stabilization processing result corresponding to each time sequence section;
and splicing the image stabilization processing results corresponding to each time sequence section to obtain the image stabilization processing result corresponding to the target image signal.
6. The method for processing an endoscopic image according to claim 5, wherein dividing said target image signal into different time-series segments comprises:
acquiring angular velocity corresponding to each frame of picture in the target image signal;
determining a reference frame picture in the target image signal according to the relation between the angular velocity and a specified threshold value;
determining a frame picture from a first reference frame picture to a first frame picture as a first time sequence period; the first frame picture is adjacent to the second reference frame picture and is positioned in front of the second reference frame picture; the second reference frame picture is located behind the first reference frame picture;
determining frame pictures from the second reference frame picture to the second frame picture as a second time-series segment; the second frame picture is adjacent to the third reference frame picture and is positioned in front of the third reference frame picture; the third reference frame picture is located after the second reference frame picture.
7. The method for processing an endoscopic image according to claim 6, wherein determining a reference frame picture in said target image signal based on a relationship between an angular velocity and a specified threshold value comprises:
determining a frame picture corresponding to the first angular speed as a first reference frame picture; the first angular velocity represents a first angular velocity in the target image signal that is less than a specified threshold;
determining a frame picture corresponding to the second angular velocity as a second reference frame picture; the second angular velocity is smaller than a specified threshold, and a frame picture corresponding to the second angular velocity is positioned behind a frame picture corresponding to the first angular velocity; at least one first key frame picture is included between the frame picture corresponding to the second angular velocity and the frame picture corresponding to the first angular velocity, and the angular velocity corresponding to the first key frame picture is not less than a specified threshold value; between the second reference frame picture and any one first key frame picture, no frame picture with angular velocity smaller than a specified threshold exists;
determining a frame picture corresponding to the third angular velocity as a third reference frame picture; the third angular velocity is smaller than a specified threshold value, and a frame picture corresponding to the third angular velocity is positioned behind a frame picture corresponding to the second angular velocity; at least one second key frame picture is included between the frame picture corresponding to the third angular velocity and the frame picture corresponding to the second angular velocity, and the angular velocity corresponding to the second key frame picture is not less than a specified threshold value; and between the third reference frame picture and any one second key frame picture, no frame picture with an angular velocity less than a specified threshold exists.
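Claims 6 and 7 describe splitting the target signal at "reference frames": each reference frame is the first frame whose angular velocity drops below the threshold after a run of at-or-above-threshold key frames, and each time sequence segment runs from one reference frame up to the frame just before the next. A sketch of that bookkeeping (function names and data layout are assumptions):

```python
def find_reference_frames(angular_velocities, threshold):
    """Indices of reference frames: the very first sub-threshold frame,
    plus the first sub-threshold frame after each run of at-or-above-
    threshold key frames."""
    refs = []
    prev_below = False
    for i, w in enumerate(angular_velocities):
        below = w < threshold
        if below and not prev_below:
            refs.append(i)
        prev_below = below
    return refs

def split_into_segments(angular_velocities, threshold):
    """Each segment runs from one reference frame up to (but excluding)
    the next reference frame; the tail after the last reference frame
    forms the final segment."""
    refs = find_reference_frames(angular_velocities, threshold)
    n = len(angular_velocities)
    return [list(range(r, refs[k + 1] if k + 1 < len(refs) else n))
            for k, r in enumerate(refs)]
```

Note that the key frames themselves fall inside the segment that starts at the preceding reference frame, matching the "first reference frame picture to first frame picture" wording of claim 6.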
8. The method for processing an endoscopic image according to claim 5, wherein said performing image stabilization processing on the frame picture in each time-series segment to obtain an image stabilization processing result corresponding to each time-series segment includes:
extracting the characteristic points of each frame of picture in the target time sequence segment to obtain the characteristic information corresponding to each frame of picture;
determining an image stabilization processing result corresponding to each frame of picture based on the characteristic information corresponding to each frame of picture;
and splicing the image stabilization processing results corresponding to the frames to obtain the image stabilization processing results corresponding to the target time sequence section.
9. The method for processing an endoscopic image according to claim 8, wherein determining the image stabilization processing result corresponding to each frame picture based on the characteristic information corresponding to each frame picture comprises:
calculating translation information of the kth frame picture relative to the (k-1) th frame picture according to the characteristic information corresponding to the kth frame picture and the characteristic information corresponding to the (k-1) th frame picture; k is more than or equal to 1;
determining translation information of the kth frame picture relative to a reference frame picture based on all translation information obtained before the kth frame picture;
translating the kth frame picture based on translation information of the kth frame picture relative to a reference frame picture to obtain a preliminary image stabilization result of the kth frame picture;
and processing the preliminary image stabilization result of the kth frame picture by using a preset transformation matrix to obtain an image stabilization processing result corresponding to the kth frame picture.
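Claim 9's accumulation step can be sketched as follows: the inter-frame translations estimated from matched feature points are summed to obtain each frame's displacement relative to the segment's reference frame, and the frame is shifted back by that amount. The final correction by a preset transformation matrix is omitted, and `np.roll` stands in for a proper border-aware warp; both are simplifications of this sketch, not the patent's method.

```python
import numpy as np

def stabilize_segment(frames, pairwise_shifts):
    """frames: list of 2-D arrays; pairwise_shifts[k] is the (dy, dx)
    translation of frame k+1 relative to frame k, as would be estimated
    from matched feature points. Each frame is shifted back by its
    cumulative translation relative to frame 0 (the reference frame)."""
    out = [frames[0].copy()]
    cum = np.zeros(2)
    for k in range(1, len(frames)):
        cum += pairwise_shifts[k - 1]          # translation vs. reference
        dy, dx = int(round(cum[0])), int(round(cum[1]))
        # np.roll wraps at the borders; a real implementation would pad
        # or crop instead of wrapping.
        out.append(np.roll(frames[k], shift=(-dy, -dx), axis=(0, 1)))
    return out
```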
10. The method for processing an endoscopic image according to claim 8, after obtaining an image stabilization processing result corresponding to the target time-series segment, further comprising:
acquiring a first frame picture and a second frame picture from the image stabilization processing result; the second frame picture is adjacent to the first frame picture and is positioned behind the first frame picture;
calculating the similarity between the first frame picture and the second frame picture;
and replacing the second frame picture with the reference frame picture in the target time sequence segment under the condition that the similarity is smaller than a preset threshold value.
11. The method for processing endoscopic images according to claim 1, wherein each frame picture corresponds to a time stamp for marking a position of the frame picture in the image signal;
after the image stabilization processing result is obtained, the method further comprises the following steps:
replacing the frame picture of the corresponding position in the image signal by using the image stabilization processing result based on the time identification to obtain an endoscope image after image stabilization processing;
and outputting the endoscope image after image stabilization processing.
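Claim 11's write-back step amounts to indexing frames by their time identifier and substituting the stabilized versions at the matching positions of the original signal. A minimal sketch with an assumed dict-based layout:

```python
def merge_stabilized(original, stabilized):
    """original: dict mapping time identifier -> frame picture;
    stabilized: dict mapping time identifier -> stabilized frame picture.
    Returns the output signal in time order, using the stabilized version
    wherever one exists."""
    return [stabilized.get(t, frame) for t, frame in sorted(original.items())]
```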
12. An image platform comprising an endoscope and an image processing device, wherein the image processing device is used for processing an image signal acquired by the endoscope by using the processing method of the endoscope image according to any one of claims 1 to 11.
13. A computer device comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the method for processing an endoscopic image according to any one of claims 1 to 11.
14. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method for processing an endoscopic image according to any one of claims 1 to 11.
Application CN202210341026.7A, filed 2022-04-02 (priority 2022-04-02): Method for processing endoscope image, image platform, computer device and medium. Published as CN114863317A (en); status: Pending.

Priority Applications (1)

Application Number: CN202210341026.7A (published as CN114863317A); Priority Date: 2022-04-02; Filing Date: 2022-04-02; Title: Method for processing endoscope image, image platform, computer device and medium


Publications (1)

Publication Number: CN114863317A; Publication Date: 2022-08-05

Family

ID=82629494

Family Applications (1)

Application Number: CN202210341026.7A; Publication: CN114863317A (en); Status: Pending

Country Status (1)

Country Link
CN (1) CN114863317A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230609

Address after: Room 101, block B, building 1, No. 1601, Zhangdong Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Applicant after: Shanghai minimally invasive medical robot (Group) Co.,Ltd.

Address before: 201203 room 207, floor 2, building 1, No. 1601, Zhangdong Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai (actual floor 3)

Applicant before: Shanghai Weimi Medical Instrument Co.,Ltd.