CN116385497A - Custom target tracking method and system for body cavity - Google Patents
- Publication number
- CN116385497A CN116385497A CN202310616715.9A CN202310616715A CN116385497A CN 116385497 A CN116385497 A CN 116385497A CN 202310616715 A CN202310616715 A CN 202310616715A CN 116385497 A CN116385497 A CN 116385497A
- Authority
- CN
- China
- Prior art keywords
- tracking
- frame
- similarity
- area image
- calculating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses a custom target tracking method and system for use in a body cavity, relating to the technical field of computers. The method comprises: S1, acquiring an initial picture; S2, selecting a target object; S3, acquiring a tracking area image in the body cavity in real time; S4, analyzing the first similarity and the position overlap degree; S5, judging whether the first similarity and the position overlap degree are within the corresponding preset thresholds; if so, returning to S3, otherwise suspending tracking, starting to time the current suspension duration, and entering S6; S6, acquiring a tracking area image, calculating a second similarity, and judging whether it exceeds a preset threshold; if so, resuming tracking and returning to S3, otherwise entering S7; S7, judging whether the current suspension duration is still within a preset threshold; if so, returning to S6, otherwise ending tracking. The system combines computer vision with cloud synchronization, largely avoids the "template drift" problem in tracking, and can recover the tracking state of a target after prolonged, complete occlusion.
Description
Technical Field
The present invention relates to the field of computer technology, and in particular to a method and system for tracking a custom target in a body cavity.
Background
Today, the application and research of artificial intelligence in minimally invasive surgery focuses mainly on conventional, general objects such as human organs and surgical instruments. However, specific structures, such as a lesion or a plane of adhesion at certain stages of an operation, play a vital role in the success or failure of the procedure. The chief surgeon needs to monitor the positions, states, and changes of these key structures at all times to ensure the operation proceeds with high quality and efficiency. Research on this particular scenario is currently lacking, and given that the task requires online learning and adaptation to changes in the object, target tracking is a reasonable route.
Currently, mainstream target tracking relies on deep learning. Deep learning methods are mostly pretrained on specific targets in natural scenes; they work well on conventional targets but perform unsatisfactorily on the unseen, specialized scenes inside the body cavity.
Disclosure of Invention
The invention aims to solve the above problems by providing a custom target tracking method and system for use in a body cavity.
The invention realizes the above purpose through the following technical scheme:
A custom target tracking system for use in a body cavity, comprising:
an acquisition module; the acquisition module is used for acquiring surgical video;
a touch display screen; the touch display screen is used for selecting the target object in the initial picture and displaying the tracking process in real time;
a central processing unit; the central processing unit comprises a data analysis module and a judgment module; the data analysis module is used for calculating and analyzing the first similarity between the tracking area image of the current frame and the initial frame, the position overlap degree between the tracking area images of the current frame and the previous frame, and the second similarity between the tracking area image of the current frame and the previous frame image; the judgment module is used for judging whether the first similarity, the second similarity, and the position overlap degree are within the corresponding preset thresholds; the data signal output of the acquisition module is connected to the data signal input of the central processing unit, and the signal terminal of the touch display screen is connected to the signal terminal of the central processing unit;
a database; the database comprises a tracking queue, and the signal terminal of the database is connected to the signal terminal of the central processing unit.
The method for tracking a custom target in a body cavity is applied to the above custom target tracking system for a body cavity and comprises the following steps:
s1, acquiring an initial picture containing a target object in a body cavity;
s2, displaying an initial picture, selecting a target object in the initial picture, and taking the initial picture as a tracked initial frame;
s3, acquiring a tracking area image in the body cavity in real time;
s4, analyzing the first similarity between the tracking area image of the current frame and the initial frame, and the position overlap degree between the tracking area image of the current frame and the previous frame image;
s5, judging whether the first similarity and the position overlap degree are within the corresponding preset thresholds; if so, returning to S3 to continue tracking and maintaining the tracking queue; otherwise suspending tracking, starting to time the current suspension duration, and entering S6;
s6, acquiring the tracking area image of the next frame, computing a second similarity between its features and the features in the tracking queue, and judging whether the second similarity exceeds a preset threshold; if so, resuming the tracking state and returning to S3 to continue tracking; otherwise entering S7;
and S7, judging whether the current suspension duration is still within a preset threshold; if so, advancing to the next frame and returning to S6; otherwise, ending tracking.
The invention has the following beneficial effects: the system combines computer vision with cloud synchronization and, by improving and optimizing a traditional tracking algorithm for the realities of minimally invasive surgery, largely avoids the "template drift" problem in tracking and can recover the tracking state of a target after prolonged, complete occlusion. The system is also useful in scenarios such as intraoperative key-point prompting and remote surgical guidance; through visual assistance it improves the accuracy of the surgeon's operation and reduces surgical injury.
Drawings
FIG. 1 is a flow chart of the method for custom target tracking in a body cavity according to the present invention;
FIG. 2 is a flow chart of a tracking suspension determination for a custom target tracking method in a body cavity according to the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below completely with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, embodiments of the invention. The components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments, as presented in the figures, is not intended to limit the claimed scope of the invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be understood that directional or positional terms such as "upper", "lower", "inner", "outer", "left", and "right" are based on the orientations shown in the drawings, the orientations in which the inventive product is conventionally used, or the orientations conventionally understood by those skilled in the art. They are used only for convenience and simplicity of description and do not indicate or imply that the referenced apparatus or elements must have a specific orientation or be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, terms such as "disposed" and "connected" are to be construed broadly: "connected" may be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal communication between two elements. The specific meanings of these terms can be understood by those of ordinary skill in the art according to the specific circumstances.
The following describes specific embodiments of the present invention in detail with reference to the drawings.
A custom target tracking system for use in a body cavity, comprising:
an acquisition module; the acquisition module is used for acquiring surgical video;
a touch display screen; the touch display screen is used for selecting the target object in the initial picture and displaying the tracking process in real time;
a central processing unit; the central processing unit comprises a data analysis module and a judgment module; the data analysis module is used for calculating and analyzing the first similarity between the tracking area image of the current frame and the initial frame, the position overlap degree between the tracking area images of the current frame and the previous frame, and the second similarity between the tracking area image of the current frame and the previous frame image; the judgment module is used for judging whether the first similarity, the second similarity, and the position overlap degree are within the corresponding preset thresholds; the data signal output of the acquisition module is connected to the data signal input of the central processing unit, and the signal terminal of the touch display screen is connected to the signal terminal of the central processing unit;
a database; the database comprises a tracking queue, and the signal terminal of the database is connected to the signal terminal of the central processing unit.
The central processing unit further comprises a feature analysis module, which is used for analyzing the template features of the initial frame and the features in the tracking queue.
As shown in FIG. 1 and FIG. 2, the method for tracking a custom target in a body cavity is applied to the above custom target tracking system for a body cavity and includes:
s1, acquiring an initial picture containing a target object in a body cavity.
S2, displaying the initial picture, selecting the target object in the initial picture, taking the initial picture as the tracked initial frame, and calculating the template features of the initial frame; the template features of the initial frame are calculated specifically as follows:
1) Given the channel-wise feature set $f_d$ and template feature set $h_d$, the response of the corresponding correlation map $g(h)$ is expressed as
$$g(h)=\sum_{d=1}^{N_C} w_d\,(f_d \star h_d)$$
where $N_C$ is the number of feature channels, $w_d$ is the weight of channel $d$, $\star$ denotes the circular correlation operation, and $\lVert\cdot\rVert$ below denotes the $\ell_2$ norm of a vector;
2) Computing each channel separately, the objective function that the optimal correlation filter needs to minimize is expressed as
$$E(h)=\Big\lVert \sum_{d=1}^{N_C} w_d\,(f_d \star h_d)-g \Big\rVert^2+\lambda\sum_{d=1}^{N_C}\lVert h_d\rVert^2$$
where $\lambda$ is a regularization term that reduces the effect of the weight coefficients and $g$ is the desired response of the correlation map;
3) Fourier transforming the objective function gives
$$\hat{E}(\hat{h})=\Big\lVert \sum_{d=1}^{N_C} w_d\,\mathrm{diag}(\hat{f}_d^{\,*})\,\hat{h}_d-\hat{g} \Big\rVert^2+\lambda\sum_{d=1}^{N_C}\lVert \hat{h}_d\rVert^2$$
where $\hat{\cdot}$ denotes the discrete Fourier transform, $\mathrm{diag}$ forms a diagonal matrix, and ${}^{*}$ is the complex conjugate operator;
4) The solution for the template feature $h$ is expressed per channel as
$$\hat{h}_d=\big(\hat{g}^{\,*}\odot \hat{f}_d\big)\oslash\big(\hat{f}_d^{\,*}\odot \hat{f}_d+\lambda\big)$$
where $\odot$ denotes element-wise multiplication, $\oslash$ denotes element-wise division, $\hat{g}^{\,*}$ is the complex conjugate matrix of the correlation-map response $g$ in the Fourier domain, and $\hat{f}_d^{\,*}$ is the complex conjugate matrix of $\hat{f}_d$.
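The per-channel closed-form solution in step 4) can be sketched with NumPy's FFT. This is a minimal sketch under stated assumptions: the function names, the impulse target response, and the regularization value `lam=1e-3` are illustrative, not the patent's exact implementation.

```python
import numpy as np

def train_channel_filter(feat: np.ndarray, target_resp: np.ndarray,
                         lam: float = 1e-3) -> np.ndarray:
    """Closed-form correlation-filter template for one feature channel.

    feat        : 2-D feature map f_d for this channel
    target_resp : desired correlation-map response g (e.g. a peak at the target)
    lam         : regularization term lambda
    Returns the spatial-domain template h_d.
    """
    F = np.fft.fft2(feat)
    G = np.fft.fft2(target_resp)
    # h_hat = (g_hat* ⊙ f_hat) ⊘ (f_hat* ⊙ f_hat + λ), element-wise
    H = (np.conj(G) * F) / (np.conj(F) * F + lam)
    return np.real(np.fft.ifft2(H))

def correlate(feat: np.ndarray, templ: np.ndarray) -> np.ndarray:
    """Circular correlation of a feature map with a template via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(feat) * np.conj(np.fft.fft2(templ))))
```

Applying `correlate` with a filter trained this way reproduces (up to the small λ attenuation) the desired response peak at the target location, which is how the tracker localizes the target in the next frame.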
S3, acquiring a tracking area image in the body cavity in real time.
S4, analyzing the first similarity between the tracking area image of the current frame i and the initial frame, and the position overlap degree between the tracking area images of the current frame i and the previous frame i-1;
the calculating of the first similarity specifically includes:
(1) calculating the dominant color histogram of the tracking area image of the current frame i and normalizing it;
(2) calculating the cosine distance between the normalized dominant color histogram of the current frame i and the dominant color histogram of the initial frame as the first similarity;
the calculating of the position overlap degree specifically includes:
(1) acquiring the tracking-frame position information of the target object in the tracking area images of the current frame i and the previous frame i-1, where the upper-left and lower-right corner coordinates of the current frame i's tracking frame are $(x_{1i}, y_{1i})$ and $(x_{2i}, y_{2i})$, and those of the previous frame i-1 are $(x_{1(i-1)}, y_{1(i-1)})$ and $(x_{2(i-1)}, y_{2(i-1)})$;
(2) calculating the intersection-over-union (IoU) of the two tracking frames from this position information as the position overlap degree.
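The two S4 quantities can be sketched as follows. This assumes a per-channel RGB histogram as the "dominant color histogram"; the bin count, function names, and the use of cosine similarity as the comparison score are illustrative assumptions, not the patent's exact implementation.

```python
import numpy as np

def dominant_color_histogram(img: np.ndarray, bins: int = 16) -> np.ndarray:
    """Per-channel color histogram of an H x W x 3 image, L2-normalized."""
    hist = np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two histograms (1.0 = identical direction)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def iou(box_a, box_b) -> float:
    """Position overlap: intersection-over-union of (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

IoU ranges from 0 (no overlap between the adjacent frames' tracking frames) to 1 (identical boxes), which is why a low value signals an implausibly large per-frame displacement.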
S5, judging whether the first similarity and the position overlap degree are within the corresponding preset thresholds; if so, returning to S3 to continue tracking and maintaining the tracking queue; otherwise suspending tracking, starting to time the current suspension duration, and entering S6;
maintaining the tracking queue specifically includes: the features of the previous n frames are kept in the tracking queue; on each successful track, the queue is updated, the features of the current frame i are added, and the features of the oldest frame in the queue are discarded; when tracking is suspended, the tracking queue state is preserved.
S6, acquiring the tracking area image of the next frame i+1, computing a second similarity between its features and the features in the tracking queue, and judging whether the second similarity exceeds a preset threshold; if so, resuming the tracking state and returning to S3 to continue tracking; otherwise entering S7. This specifically includes:
s61, calculating a dominant color histogram of the tracking area image of the next frame i+1, and normalizing;
s62, calculating the cosine distance between the dominant color histogram of the normalized tracking area image of the next frame i+1 and the dominant color histogram of each frame in the tracking queue one by one;
s63, selecting the maximum cosine distance as a second similarity;
s64, judging whether the second similarity exceeds a preset threshold, if so, recovering the tracking state, returning to S3 to continue tracking, and otherwise, entering S7.
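Steps S61 to S64 can be sketched as a single re-acquisition check. Cosine similarity is used here as the comparison score (reading the text's "maximum cosine distance" as the best match in the queue), and the 0.85 threshold and function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def try_reacquire(candidate_hist: np.ndarray, queue_hists,
                  threshold: float = 0.85) -> bool:
    """S6: compare the candidate region's histogram against every frame kept in
    the tracking queue; resume tracking only if the best score (the second
    similarity) clears the threshold."""
    if not queue_hists:
        return False
    second_similarity = max(cosine_similarity(candidate_hist, h)
                            for h in queue_hists)
    return second_similarity > threshold
```

Comparing against every queued frame, rather than only the last one before suspension, is what lets the tracker recover even when the target reappears with a slightly different appearance.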
And S7, judging whether the current suspension duration is still within the preset threshold; if so, advancing one frame, i.e. taking frame i+2 as the new next frame i+1 (letting i+1 = (i+1)+1), and returning to S6; otherwise, ending tracking.
On top of the regional feature comparison, IoU filtering of the tracking-frame positions in adjacent frames is added. For the template feature comparison, the dominant color histogram of the tracking area serves as the feature: each time a frame is tracked, the histogram of the area is computed, normalized, and compared with that of the initial frame by cosine distance. When the distance exceeds a threshold, the area images are considered too different and the tracking state is suspended, preventing template error accumulation from causing mistracking. Meanwhile, the position of each frame's tracking frame, represented by the xy coordinates of its upper-left and lower-right corners, is stored, and the IoU (intersection-over-union) of the current and previous frames' tracking frames is computed; this value reflects how far the tracking frame has moved between adjacent frames. If the IoU falls below its threshold, the target has shifted too much within one frame, likely because accumulated template error has transferred the features to another object, and the tracking state must also be suspended. The two strategies are evaluated simultaneously in a logical OR relationship: as soon as either condition fails, the tracking state is disconnected, strictly suppressing the template drift problem.
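The logical-OR suspension gate can be written as a simple predicate. The direction of each comparison follows the description above (cosine distance too high, or IoU too low); the function and parameter names, and any specific threshold values, are illustrative assumptions.

```python
def should_suspend(template_distance: float, iou_value: float,
                   dist_threshold: float, iou_threshold: float) -> bool:
    """S5 gate: the two checks form a logical OR; tracking is suspended as soon
    as either one fails."""
    drifted = template_distance > dist_threshold  # region differs too much from the initial template
    jumped = iou_value < iou_threshold            # box displaced too far between adjacent frames
    return drifted or jumped
```

Because either condition alone is sufficient to suspend, the gate is conservative by design: it would rather pause tracking than let a drifting template keep updating.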
Under this method, lens movement, occlusion by organs and instruments, and similar events can make the target invisible in the field of view; the tracking state is then disconnected in time and mistracking is avoided. In most cases, after a period of time, as the organ returns or the instrument is removed, the previously tracked target reappears in the field of view, but a disconnected tracker cannot continue tracking it. In laparoscopic surgery, both the lens and the objects move, so such alternating occlusion scenes are quite common; if a target cannot be retrieved after being completely lost, the technique quickly hits a bottleneck and its practicality drops greatly. We solve this problem by maintaining a dynamic feature queue.
A dynamic feature queue is maintained: in the successful tracking state, the target features of the previous n frames are kept in the queue; on each successful track the queue is updated, the current frame's features are added, and the oldest frame's features are discarded. When tracking is suspended, the queue state is preserved, and the target found in each subsequent frame is compared one by one against the queue; if the maximum second similarity is greater than the set threshold, the current target is considered to be the previously tracked target and the tracking state is restored. If the target is not reacquired within a given time, the tracking state changes from suspended to disconnected and tracking of the target ends.
The technical scheme of the invention is not limited to the specific embodiments described above; all technical modifications made according to the technical scheme of the invention fall within the protection scope of the invention.
Claims (7)
1. A custom target tracking system for use in a body cavity, comprising:
an acquisition module; the acquisition module is used for acquiring surgical video;
a touch display screen; the touch display screen is used for selecting the target object in the initial picture and displaying the tracking process in real time;
a central processing unit; the central processing unit comprises a data analysis module and a judgment module; the data analysis module is used for calculating and analyzing the first similarity between the tracking area image of the current frame and the initial frame, the position overlap degree between the tracking area images of the current frame and the previous frame, and the second similarity between the tracking area image of the current frame and the previous frame image; the judgment module is used for judging whether the first similarity, the second similarity, and the position overlap degree are within the corresponding preset thresholds; the data signal output of the acquisition module is connected to the data signal input of the central processing unit, and the signal terminal of the touch display screen is connected to the signal terminal of the central processing unit; and
a database; the database comprises a tracking queue, and the signal terminal of the database is connected to the signal terminal of the central processing unit.
2. The custom target tracking system for use in a body cavity according to claim 1, wherein the central processing unit further comprises a feature analysis module for analyzing the template features of the initial frame and the features in the tracking queue.
3. A method for tracking a customized target in a body cavity, applied to the customized target tracking system for a body cavity according to claim 1 or 2, comprising:
s1, acquiring an initial picture containing a target object in a body cavity;
s2, displaying an initial picture, selecting a target object in the initial picture, and taking the initial picture as a tracked initial frame;
s3, acquiring a tracking area image in the body cavity in real time;
s4, analyzing the first similarity between the tracking area image of the current frame and the initial frame, and the position overlap degree between the tracking area image of the current frame and the previous frame image;
s5, judging whether the first similarity and the position overlap degree are within the corresponding preset thresholds; if so, returning to S3 to continue tracking and maintaining the tracking queue; otherwise suspending tracking, starting to time the current suspension duration, and entering S6;
s6, acquiring the tracking area image of the next frame, computing a second similarity between its features and the features in the tracking queue, and judging whether the second similarity exceeds a preset threshold; if so, resuming the tracking state and returning to S3 to continue tracking; otherwise entering S7;
and S7, judging whether the current suspension duration is still within a preset threshold; if so, advancing to the next frame and returning to S6; otherwise, ending tracking.
4. The method for intra-body-cavity custom target tracking according to claim 3, further comprising calculating the template features of the initial frame in S2, specifically:
1) given the channel-wise feature set $f_d$ and template feature set $h_d$, the response of the corresponding correlation map $g(h)$ is expressed as $g(h)=\sum_{d=1}^{N_C} w_d\,(f_d \star h_d)$, where $N_C$ is the number of feature channels, $w_d$ is the weight of channel $d$, $\star$ denotes the circular correlation operation, and $\lVert\cdot\rVert$ denotes the $\ell_2$ norm of a vector;
2) computing each channel separately, the objective function that the optimal correlation filter needs to minimize is expressed as $E(h)=\big\lVert \sum_{d=1}^{N_C} w_d\,(f_d \star h_d)-g \big\rVert^2+\lambda\sum_{d=1}^{N_C}\lVert h_d\rVert^2$, where $\lambda$ is a regularization term that reduces the effect of the weight coefficients and $g$ is the response of the correlation map;
3) Fourier transforming the objective function gives $\hat{E}(\hat{h})=\big\lVert \sum_{d=1}^{N_C} w_d\,\mathrm{diag}(\hat{f}_d^{\,*})\,\hat{h}_d-\hat{g} \big\rVert^2+\lambda\sum_{d=1}^{N_C}\lVert \hat{h}_d\rVert^2$, where $\mathrm{diag}$ forms a diagonal matrix and ${}^{*}$ is the complex conjugate operator;
5. The method for custom target tracking in a body lumen according to claim 3 or 4, wherein in S4:
the calculating of the first similarity specifically includes:
(1) calculating a dominant color histogram of the tracking area image of the current frame, and normalizing;
(2) calculating the cosine distance between the dominant color histogram of the normalized tracking area image of the current frame and the dominant color histogram of the initial frame as a first similarity;
the calculating of the position overlapping degree specifically comprises:
(1) acquiring the tracking-frame position information of the target object in the tracking area images of the current frame and the previous frame, where the upper-left and lower-right corner coordinates of the current frame's tracking frame are $(x_{1i}, y_{1i})$ and $(x_{2i}, y_{2i})$, and those of the previous frame are $(x_{1(i-1)}, y_{1(i-1)})$ and $(x_{2(i-1)}, y_{2(i-1)})$, where i denotes the current frame;
(2) calculating the intersection-over-union (IoU) of the two tracking frames from this position information as the position overlap degree.
6. The method for intra-body-cavity custom target tracking according to claim 3 or 4, wherein maintaining the tracking queue in S5 specifically comprises: the features of the previous n frames are kept in the tracking queue; on each successful track, the queue is updated, the features of the current frame are added, and the features of the oldest frame in the queue are discarded; when tracking is suspended, the tracking queue state is preserved.
7. The method for intra-body cavity custom object tracking according to claim 6, comprising in S6:
S61, calculating a dominant color histogram of the tracking area image of the next frame, and normalizing;
S62, calculating, one by one, the cosine distance between the normalized dominant color histogram of the tracking area image of the next frame and the dominant color histogram of each frame in the tracking queue;
S63, taking the maximum of these cosine distances as the second similarity;
S64, judging whether the second similarity exceeds a preset threshold; if so, restoring the tracking state and returning to S3 to continue tracking, and otherwise, proceeding to S7.
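Steps S61 to S64 amount to a re-identification check against every queued histogram, resuming only when the best match clears a threshold. A minimal sketch; the function name and the 0.8 threshold are illustrative assumptions:

```python
import numpy as np

def cosine_similarity(h1, h2):
    # Cosine measure between two normalized histograms (larger = more similar)
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(np.dot(h1, h2) / denom) if denom > 0 else 0.0

def try_resume(next_hist, queue_hists, threshold=0.8):
    """S62-S64: compare the next frame's histogram with every histogram
    in the tracking queue, take the best match as the second similarity,
    and resume tracking only if it exceeds the threshold."""
    sims = [cosine_similarity(next_hist, np.asarray(h)) for h in queue_hists]
    second_similarity = max(sims, default=0.0)
    return second_similarity >= threshold, second_similarity
```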
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310616715.9A CN116385497B (en) | 2023-05-29 | 2023-05-29 | Custom target tracking method and system for body cavity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116385497A true CN116385497A (en) | 2023-07-04 |
CN116385497B CN116385497B (en) | 2023-08-22 |
Family
ID=86963694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310616715.9A Active CN116385497B (en) | 2023-05-29 | 2023-05-29 | Custom target tracking method and system for body cavity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116385497B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530894A (en) * | 2013-10-25 | 2014-01-22 | 合肥工业大学 | Video target tracking method based on multi-scale block sparse representation and system thereof |
CN104537692A (en) * | 2014-12-30 | 2015-04-22 | 中国人民解放军国防科学技术大学 | Key point stabilization tracking method based on time-space contextual information assisting |
CN108446585A (en) * | 2018-01-31 | 2018-08-24 | 深圳市阿西莫夫科技有限公司 | Method for tracking target, device, computer equipment and storage medium |
US20190370980A1 (en) * | 2018-05-30 | 2019-12-05 | Chiral Software, Inc. | System and method for real-time detection of objects in motion |
CN111815766A (en) * | 2020-07-28 | 2020-10-23 | 复旦大学附属华山医院 | Processing method and system for reconstructing blood vessel three-dimensional model based on 2D-DSA image |
CN112348851A (en) * | 2020-11-04 | 2021-02-09 | 无锡蓝软智能医疗科技有限公司 | Moving target tracking system and mixed reality operation auxiliary system |
US20210065381A1 (en) * | 2019-08-29 | 2021-03-04 | Boe Technology Group Co., Ltd. | Target tracking method, device, system and non-transitory computer readable medium |
CN112932663A (en) * | 2021-03-02 | 2021-06-11 | 成都与睿创新科技有限公司 | Intelligent auxiliary method and system for improving safety of laparoscopic cholecystectomy |
WO2022171036A1 (en) * | 2021-02-09 | 2022-08-18 | 北京有竹居网络技术有限公司 | Video target tracking method, video target tracking apparatus, storage medium, and electronic device |
CN115953437A (en) * | 2023-02-16 | 2023-04-11 | 湖南大学 | Multi-target real-time tracking method integrating visual light stream feature point tracking and motion trend estimation |
Non-Patent Citations (2)
Title |
---|
SHABANINIA E, ET AL: "Codebook appearance representation for vehicle handover across disjoint-view multicameras", 《SCIENTIA IRANICA》, vol. 18, no. 6, pages 1450 - 1459 *
XU XIN: "Research on target tracking methods for lesion areas in contrast-enhanced ultrasound", 《China Master's Theses Full-text Database, Information Science and Technology》, pages 138 - 819 *
Also Published As
Publication number | Publication date |
---|---|
CN116385497B (en) | 2023-08-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||