CN113345046B - Movement track recording method, device, medium and computing equipment of operating equipment


Info

Publication number
CN113345046B
CN113345046B (application number CN202110658829.0A)
Authority
CN
China
Prior art keywords: data, target, central point, boundary, dictionary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110658829.0A
Other languages
Chinese (zh)
Other versions
CN113345046A (en)
Inventor
张雪培
郭念湘
李子尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuanwei Beijing Biotechnology Co ltd
First Affiliated Hospital of Zhengzhou University
Original Assignee
Xuanwei Beijing Biotechnology Co ltd
First Affiliated Hospital of Zhengzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuanwei Beijing Biotechnology Co ltd and First Affiliated Hospital of Zhengzhou University
Priority to CN202110658829.0A
Publication of CN113345046A
Application granted
Publication of CN113345046B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/203 Drawing of straight lines or curves
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023 Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10068 Endoscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention provide a movement track recording method, apparatus, medium, and computing device for an operating device. The method comprises the following steps: acquiring bounding box data marked at a target position in target image data; calculating first central point data at the target position based on the bounding box data; performing data fitting and interpolation operations on the first central point data to obtain second central point data corresponding to the first central point data at the target position; selecting a reference object from the targets identified in the target image data; and updating the second central point data corresponding to the target of the operating device identified in the target image data based on the second central point data corresponding to the reference object, and calculating the movement track of the operating device based on the updated second central point data corresponding to the target of the operating device. The method and apparatus can thus calculate a more accurate movement track of the operating device based on the updated central point data of the target corresponding to the operating device.

Description

Movement track recording method, device, medium and computing equipment of operating equipment
Technical Field
The embodiment of the invention relates to the field of artificial intelligence, in particular to a method, a device, a medium and a computing device for recording a movement track of an operating device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
In order to observe and evaluate a doctor's operation during surgery or examination, the movement track of the operating device used by the doctor is usually recorded. In the prior art, a large amount of auxiliary observation hardware (such as an ultrasonic probe, an induction coil, an ultrasonic receiving device, a magnetic field generating device, a magnetic field sensing device, and the like) is used to record the movement track of the operating device. In practice, however, such auxiliary observation hardware often does not conform to the specifications required in the medical environment and is complex to operate, so the acquired positioning data is not accurate enough, and the movement track calculated from that positioning data is in turn not accurate enough.
Disclosure of Invention
In this context, embodiments of the present invention are intended to provide a movement track recording method, apparatus, medium, and computing device for an operating device.
In a first aspect of embodiments of the present invention, there is provided a movement trajectory recording method of an operating device, including: acquiring bounding box data marked at the target position in target image data; calculating to obtain first central point data at the target position based on the bounding box data; performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position; selecting a reference object from the target identified by the target image data; and updating second central point data corresponding to the target of the operating equipment identified by the target image data based on the second central point data corresponding to the reference object, and calculating the movement track of the operating equipment based on the updated second central point data corresponding to the target of the operating equipment.
In an embodiment of the present invention, performing data fitting and interpolation on the first central point data to obtain second central point data corresponding to the first central point data at the target position includes: performing data fitting on the first central point data to obtain first fitting central point data corresponding to the first central point data; and performing interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position.
In an embodiment of the present invention, performing data fitting on the first center point data to obtain first fitted center point data corresponding to the first center point data includes: denoising the first central point data based on a denoising algorithm to obtain denoised first central point data; and performing data fitting on the denoised first central point data to obtain first fitting central point data.
In an embodiment of the present invention, the target image data includes a current time corresponding to the first central point data, and the first central point data includes central point transverse data and central point longitudinal data; performing data fitting on the denoised first central point data to obtain the first fitting central point data includes: performing data fitting on the central point transverse data in the denoised first central point data based on the current time to obtain a transverse fitting function corresponding to the central point transverse data; calculating transverse fitting data based on the current time and the transverse fitting function; performing data fitting on the central point longitudinal data in the denoised first central point data based on the current time to obtain a longitudinal fitting function corresponding to the central point longitudinal data; calculating longitudinal fitting data based on the current time and the longitudinal fitting function; and determining the transverse fitting data and the longitudinal fitting data as the first fitting central point data corresponding to the first central point data.
In an embodiment of the present invention, interpolating the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position includes: calculating to obtain a transverse smooth spline curve coefficient based on the current moment, the central point transverse data in the first fitting central point data and a preset parameter; calculating to obtain a longitudinal smooth spline curve coefficient based on the current moment, the central point longitudinal data in the first fitting central point data and the preset parameter; selecting a maximum time and a minimum time from current times contained in the target image data; constructing time data based on the maximum time and the minimum time; performing interpolation operation on the transverse data of the central point based on the time data and the transverse smooth spline curve coefficient to obtain transverse data of an interpolated central point; performing interpolation operation on the longitudinal data of the central point based on the time data and the longitudinal smooth spline curve coefficient to obtain the longitudinal data of the interpolated central point; and obtaining second central point data corresponding to the first central point data at the target position based on the transverse data of the interpolation central point and the longitudinal data of the interpolation central point.
In an embodiment of the present invention, updating second center point data corresponding to a target of an operating device identified by the target image data based on the second center point data corresponding to the reference object, and calculating a movement trajectory of the operating device based on the updated second center point data corresponding to the target of the operating device includes: updating second central point data corresponding to targets except the reference object in the target image data based on the second central point data corresponding to the reference object to obtain updated moving central point data corresponding to the targets; and calculating to obtain the movement track of the operating equipment based on the updated movement center point data corresponding to the target of the operating equipment in the target image data.
In an embodiment of the present invention, the target other than the reference object is at least one of the operation device, the acquisition device for acquiring the original image data, and an operation target corresponding to a target position in the target image data.
In an embodiment of the present invention, when the target is an operation device, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated moving center point data corresponding to the target includes: acquiring second central point data corresponding to the reference object and second central point data corresponding to the operating equipment from the target image data; and updating second center point data corresponding to the operating equipment based on the second center point data corresponding to the reference object and the second center point data corresponding to the operating equipment to obtain updated moving center point data corresponding to the operating equipment.
In an embodiment of the present invention, when the target is an acquisition device, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated moving center point data corresponding to the target, includes: acquiring second central point data corresponding to the reference object and second central point data corresponding to the acquisition equipment from the target image data; acquiring the width and the height of the target image data; and updating the second central point data corresponding to the acquisition equipment based on the second central point data corresponding to the reference object, the second central point data corresponding to the acquisition equipment and the width and height of the target image data to obtain updated moving central point data corresponding to the acquisition equipment.
In an embodiment of the present invention, when the target is an operation target, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated moving center point data corresponding to the target includes: acquiring second central point data corresponding to the reference object and second central point data corresponding to the operation target from the target image data; and updating second center point data corresponding to the operation target based on the second center point data corresponding to the reference object and the second center point data corresponding to the operation target to obtain updated moving center point data corresponding to the operation target.
In one embodiment of the present invention, acquiring bounding box data for identifying at a target position in target image data includes: and acquiring bounding box data which is identified at the target position in the target image data based on the type mapping dictionary.
In an embodiment of the present invention, before obtaining bounding box data identified at the target position in target image data based on a type mapping dictionary, the method further includes: performing target detection on original image data of the operating equipment through an example segmentation model to obtain target image data of a rectangular surrounding frame displayed at the target position and/or a polygonal surrounding frame displayed at the target position in the original image data; constructing a first type mapping sub-dictionary and/or a second type mapping sub-dictionary based on traversing a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the target image data in a time sequence; generating a type mapping dictionary based on the first type mapping sub-dictionary and/or the second type mapping sub-dictionary.
In an embodiment of the present invention, constructing a first type mapping sub-dictionary and/or a second type mapping sub-dictionary based on traversing a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the target image data in a time-series manner includes: traversing a rectangular surrounding frame displayed at the target position in the target image data based on time sequence, and constructing a first type mapping sub-dictionary containing surrounding frame data of the rectangular surrounding frame; and/or traversing a polygon bounding box displayed at the target position in the target image data based on time sequence, and constructing a second type mapping sub-dictionary comprising bounding box data of the polygon bounding box.
In an embodiment of the present invention, constructing a first type mapping sub-dictionary of bounding box data including a rectangular bounding box based on a time-series traversal of the rectangular bounding box displayed at the target position in the target image data includes: acquiring first detection content data corresponding to a rectangular surrounding frame displayed at the target position based on time sequence from the target image data; the first detection content data at least comprises a first target type of a target identified by the rectangular bounding box, a first current time and first boundary data; the first boundary data is bounding box data of the rectangular bounding box; traversing the first detection content data based on time sequence to construct a first type mapping sub-dictionary; the first type mapping sub-dictionary at least comprises a first time mapping dictionary, the first time mapping dictionary is in one-to-one correspondence with the first target type, and the first target types corresponding to any two first time mapping dictionaries are different.
In an embodiment of the present invention, constructing the first type mapping sub-dictionary based on traversing the first detection content data in a time series includes: traversing the first detection content data based on time sequence to construct a first time mapping dictionary, wherein the first time mapping dictionary comprises at least one time key value pair of first current time and first boundary data, and the first current time and the first boundary data in the time key value pair are obtained from the same first detection content data; and constructing a first type mapping sub-dictionary based on the first detection content data and the first time mapping dictionary, wherein the first type mapping sub-dictionary comprises at least one first target type and a type key value pair of the first time mapping dictionary, and the first target type and a first current time and first boundary data in the first time mapping dictionary belong to the same first detection content data.
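As an illustration of the nested structure described above, the following Python sketch builds a first type mapping sub-dictionary from a small list of detection records; the field names target_type, t and bbox are assumptions made for the example rather than identifiers used in this application.

# Minimal sketch of the first type mapping sub-dictionary (field names are assumptions).
detections = [
    {"target_type": "electric_knife", "t": 3, "bbox": (120, 80, 40, 30)},
    {"target_type": "electric_knife", "t": 4, "bbox": (124, 82, 40, 30)},
    {"target_type": "lesion", "t": 3, "bbox": (200, 150, 60, 60)},
]

first_type_map = {}   # {first target type: {first current time: first boundary data}}
for det in detections:                                   # traverse first detection content data in time order
    time_map = first_type_map.setdefault(det["target_type"], {})
    time_map[det["t"]] = det["bbox"]                     # time key-value pair: current time -> boundary data
print(first_type_map["electric_knife"][4])               # (124, 82, 40, 30)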
In an embodiment of the present invention, constructing a second type mapping sub-dictionary including bounding box data of a polygon bounding box displayed at the target position in the target image data based on a time-series traversal of the polygon bounding box includes: acquiring second detection content data corresponding to the polygon bounding box displayed at the target position based on time sequence from the target image data; the second detection content data at least comprises a second target type of the target identified by the polygon bounding box, a second current time and second boundary data; the second boundary data is bounding box data of the polygonal bounding box; compressing the second boundary data in the second detection content data to obtain compressed second detection content data; calculating to obtain approximate boundary data corresponding to the second boundary data based on the second boundary data in the compressed second detection content data; constructing a second type mapping sub-dictionary based on time-series traversal of the second detection content data and the approximate boundary data; the second type mapping sub-dictionary at least comprises a second time mapping dictionary, the second time mapping dictionary is in one-to-one correspondence with the second target type, and the second target types corresponding to any two second time mapping dictionaries are different.
In an embodiment of the present invention, constructing the second type mapping sub-dictionary based on traversing the second detection content data and the approximate boundary data in a time series includes: traversing the second detection content data and the approximate boundary data based on time sequence to construct a second time mapping dictionary, wherein the second time mapping dictionary comprises at least one time key value pair of a second current time and the approximate boundary data, and the second current time and the approximate boundary data in the time key value pair are obtained from the same second detection content data; and constructing a second type mapping sub-dictionary based on the second detection content data and the second time mapping dictionary, wherein the second type mapping sub-dictionary comprises at least one second target type and a type key value pair of the second time mapping dictionary, and the second target type and second current time and approximate boundary data in the second time mapping dictionary belong to the same second detection content data.
In an embodiment of the present invention, compressing the second boundary data in the second detection content data to obtain compressed second detection content data includes: compressing the second boundary data of the second detection content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data; reducing the one-dimensional compressed boundary data to obtain one-dimensional reduced boundary data corresponding to the one-dimensional compressed boundary data; calculating the one-dimensional reduction boundary data to obtain compressed second boundary data; and updating the second detection content through the compressed second boundary data to obtain compressed second detection content data.
In an embodiment of the present invention, compressing the second boundary data of the second detection content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data includes: acquiring the width and the height of the target image data, and determining a compression step length; calculating to obtain an initialization length based on the width and the height of the target image data and the compression step length; creating one-dimensional compression boundary data corresponding to the second boundary data, wherein the length of the one-dimensional compression boundary data is the initialization length; performing compression calculation on the second boundary data of the second detection content data based on the compression step and the width of the original image data to obtain a first index value corresponding to the second boundary data; and calculating to obtain the compression boundary data corresponding to the first index value in the one-dimensional compression boundary data based on the second boundary data and the first index value.
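By way of illustration, the following Python sketch shows one plausible reading of this compression step, in which each boundary point is mapped to a cell of a down-sampled one-dimensional array; the index formula and the concrete values of the width, height and compression step are assumptions made for the example, since the text does not fix them.

# Assumed sketch: compress 2D polygon boundary points into a one-dimensional array.
w, h = 1920, 1080                                  # width and height of the target image data
step = 4                                           # compression step length
init_len = (w // step) * (h // step)               # initialization length from width, height and step
one_dim = [0] * init_len                           # one-dimensional compressed boundary data

second_boundary = [(100, 200), (101, 200), (102, 201)]      # polygon boundary points (second boundary data)
for x, y in second_boundary:
    first_index = (y // step) * (w // step) + (x // step)   # first index value from the step and the width
    one_dim[first_index] = 1                                 # record that a boundary point falls in this cell
print(sum(one_dim))                                # number of occupied cells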
In an embodiment of the present invention, the performing a reduction process on the one-dimensional compressed boundary data to obtain one-dimensional reduced boundary data corresponding to the one-dimensional compressed boundary data includes: creating one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data, wherein the length of the one-dimensional reduction boundary data is the product of the width and the height of the original image data; reducing and calculating the one-dimensional compression boundary data based on the compression step length and the width of the target image data to obtain a second index value corresponding to the one-dimensional compression boundary data; and calculating to obtain reduction boundary data corresponding to the second index value in the one-dimensional reduction boundary data based on the one-dimensional compression boundary data and the second index value.
In an embodiment of the present invention, the calculating the one-dimensional reduction boundary data to obtain compressed second boundary data includes: traversing the one-dimensional reduction boundary data based on time sequence to obtain index data, wherein the one-dimensional reduction boundary data corresponding to the index data is larger than a preset value; calculating to obtain compressed transverse moving data and longitudinal moving data based on the index data and the one-dimensional reduction boundary data; and generating compressed second boundary data through the transverse moving data and the longitudinal moving data.
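Continuing the sketch above under the same assumptions, the restoration and extraction steps of the two preceding embodiments might look as follows; the second index formula is likewise an assumption rather than a formula given in the text.

# Continuation of the compression sketch; the second index formula is an assumption.
w, h, step = 1920, 1080, 4
cells_per_row = w // step
one_dim = [0] * (cells_per_row * (h // step))                # one-dimensional compressed boundary data
one_dim[(200 // step) * cells_per_row + (100 // step)] = 1   # a single compressed boundary cell, as above

restored = [0] * (w * h)                                     # one-dimensional reduction boundary data (length w*h)
for i, v in enumerate(one_dim):
    if v:
        cy, cx = divmod(i, cells_per_row)                    # cell row/column in the down-sampled grid
        second_index = (cy * step) * w + (cx * step)         # second index value in the full-resolution array
        restored[second_index] = v

boundary_points = []                                         # compressed second boundary data as (x, y) pairs
for idx, v in enumerate(restored):
    if v > 0:                                                # preset value taken as 0 here
        y, x = divmod(idx, w)                                # longitudinal and transverse moving data from the index
        boundary_points.append((x, y))
print(boundary_points)                                       # [(100, 200)]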
In an embodiment of the present invention, the calculating, based on second boundary data in the compressed second detection content data, approximate boundary data corresponding to the second boundary data includes: acquiring vertex movement data of a target corresponding to each second current moment from second boundary data of the compressed second detection content data; calculating a distance of each of the second boundary data from the vertex movement data; and calculating to obtain approximate boundary data corresponding to the second boundary data based on the second boundary data with the minimum distance.
In an embodiment of the present invention, calculating, based on second boundary data having a minimum distance and corresponding to four vertex movement data corresponding to one object, approximate boundary data corresponding to the second boundary data includes: based on the second boundary data with the minimum distance, four boundary data corresponding to the target in the second boundary data are obtained through calculation; and when the four pieces of boundary data are all different data or three pieces of boundary data are different in the four pieces of boundary data, determining the four pieces of boundary data as approximate boundary data.
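The following sketch illustrates one reading of this approximation step, in which the four vertices are assumed to be the corners of the polygon's bounding rectangle and the boundary point nearest to each vertex is kept; the four points are accepted as approximate boundary data when at least three of them are distinct. The choice of vertices is an assumption made for illustration.

import math

def approximate_boundary(points):
    # points: compressed second boundary data as (x, y) pairs
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # assumed vertex movement data: the four corners of the polygon's bounding rectangle
    vertices = [(min(xs), min(ys)), (max(xs), min(ys)), (max(xs), max(ys)), (min(xs), max(ys))]
    nearest = []
    for vx, vy in vertices:
        # second boundary data with the minimum distance to this vertex
        nearest.append(min(points, key=lambda p: math.hypot(p[0] - vx, p[1] - vy)))
    # accept the four points as approximate boundary data when at least three are distinct
    return nearest if len(set(nearest)) >= 3 else None

print(approximate_boundary([(0, 0), (10, 0), (10, 8), (0, 8), (5, 9)]))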
In a second aspect of the embodiments of the present invention, there is provided a movement trace recording apparatus of an operation device, including: the acquisition unit is used for acquiring bounding box data which is marked at the target position in the target image data; the calculation unit is used for calculating to obtain first central point data at the target position based on the bounding box data; the operation unit is used for performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position; the selecting unit is used for selecting a reference object from the target identified by the target image data; and the updating unit is used for updating second central point data corresponding to the target of the operating equipment identified by the target image data based on the second central point data corresponding to the reference object, and calculating the moving track of the operating equipment based on the updated second central point data corresponding to the target of the operating equipment.
In a third aspect of embodiments of the present invention, there is provided a clinical artificial intelligence assistance system for performing the method of any one of the first aspect.
In a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the first aspect.
In a fifth aspect of embodiments of the present invention, there is provided a computing device comprising the storage medium of the fourth aspect.
According to the movement track recording method, the movement track recording device, the movement track recording medium and the computing equipment of the operating equipment, the bounding box data marked at the target position in the target image data can be acquired; calculating to obtain first central point data at the target position based on the bounding box data; performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position; selecting a reference object from the target identified by the target image data; and updating second center point data corresponding to the target of the operating equipment identified by the target image data based on the second center point data corresponding to the reference object, and calculating to obtain the movement track of the operating equipment based on the updated second center point data corresponding to the target of the operating equipment.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 is a schematic flow chart of a method for recording a movement track of an operating device according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for recording a movement track of an operating device according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of second center point data of a plurality of targets in target image data generated according to an embodiment of the invention;
FIG. 4 is a graph of the movement trajectories of a plurality of targets in target image data based on the movement trajectory of a reference object generated according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a method for recording a movement track of an operating device according to another embodiment of the present invention;
FIG. 6 is a diagram illustrating first boundary data of rectangular bounding boxes corresponding to a plurality of objects in original image data output according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating bounding box data of rectangular bounding boxes corresponding to a plurality of objects in raw image data according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating second boundary data of a polygon bounding box corresponding to a plurality of objects in raw image data according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating bounding box data of polygon bounding boxes corresponding to a plurality of objects in raw image data according to an embodiment of the present invention;
FIG. 10 is a schematic interface diagram of a clinical artificial intelligence assistance system according to an embodiment of the present invention;
FIG. 11 is a schematic view of an operator inspection interface output by a clinical artificial intelligence assistance system in accordance with an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a movement track recording apparatus of an operating device according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present invention;
fig. 14 schematically shows a structural diagram of a computing device according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely to enable those skilled in the art to better understand and to practice the present invention, and are not intended to limit the scope of the present invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiment of the invention, a movement track recording method, a movement track recording device, a movement track recording medium and a computing device of an operating device are provided.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Exemplary method
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for recording a movement track of an operating device according to an embodiment of the present invention. It should be noted that the embodiments of the present invention can be applied to any applicable scenarios.
Fig. 1 shows a flow of a method for recording a movement track of an operating device according to an embodiment of the present invention, where the method includes:
step S101, acquiring bounding box data marked at the target position in target image data;
step S102, calculating to obtain first central point data at the target position based on the bounding box data;
step S103, performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position;
step S104, selecting a reference object from the targets identified by the target image data;
and step S105, updating second central point data corresponding to the target of the operating equipment identified by the target image data based on the second central point data corresponding to the reference object, and calculating to obtain the moving track of the operating equipment based on the updated second central point data corresponding to the target of the operating equipment.
The movement track recording method of the operating device provided by this application is applied in scenarios with limited visibility or a complex operating environment: target detection is performed on the image data collected in the scenario, and target boundaries are recorded for the detected targets. Such scenarios include, but are not limited to, operating rooms, examination rooms, building cavities, and mechanical inspection scenes.
According to the method and the device, calculation such as data fitting and interpolation operation can be performed on the bounding box data in the target image data to obtain the central point data corresponding to a plurality of different targets in the target image data, then the reference object is selected from the plurality of targets, the central point data of the target corresponding to the operating device is updated based on the central point data of the reference object, and therefore the more accurate moving track of the operating device is calculated and obtained based on the updated central point data of the target corresponding to the operating device.
How to accurately calculate the movement track of the operating device is described below with reference to the accompanying drawings:
in the embodiment of the present invention, the target image data may be a picture or video data collected by an image collecting device (such as a video camera, an endoscope, etc.). The target image data usually includes one or more detected targets, and when a plurality of targets are detected, the target types corresponding to the plurality of targets may be the same target type or may be a plurality of different target types. For example, when the target image data is an intra-cavity image of a patient acquired by an endoscope, it is generally necessary to detect a target such as a lesion region from the intra-cavity image, and one or more targets may be detected from the same target image data.
In the embodiment of the present invention, the Bounding Box (BBox) used for identification at the target position may be a rectangular bounding box or a polygonal bounding box. Based on the target image data, the coordinate data of the rectangular or polygonal bounding box on the target image data can be determined, and this coordinate data can be taken as the bounding box data identified at the target position. The first central point data may be the coordinates of the central point of the rectangular or polygonal bounding box; since the size and shape of a bounding box have little or no influence on the study of its motion, the central point of a bounding box can stand in for the whole box, which simplifies the calculation of the bounding box's motion.
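As a simple illustration of replacing a bounding box by its central point, the following Python snippet computes the first central point data for a rectangular box and a polygonal box; the (x, y, w, h) and vertex-list conventions are assumptions made for the example.

# Assumed conventions: a rectangular box is (x, y, w, h); a polygonal box is a list of (x, y) vertices.
def rect_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def polygon_center(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

print(rect_center((120, 80, 40, 30)))                       # first central point data of a rectangular box
print(polygon_center([(0, 0), (10, 0), (10, 8), (0, 8)]))   # first central point data of a polygonal box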
In the embodiment of the present invention, a target corresponding to one target type may be selected from the plurality of target types contained in the target image data as the reference object. For example, when the target image data is an intra-cavity image of a patient acquired by an endoscope, a classification that is fixed in the background may be used as the reference object (for example, a background capillary vessel in a cavity or on a tissue, or electrocauterized tissue after cutting, etc.), and the second central point data corresponding to the operating device (for example, a surgical instrument) or to the lens position is converted based on the second central point data of the reference object, so as to obtain the movement tracks, relative to the reference object, of the targets corresponding to the plurality of target types.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for recording a movement track of an operating device according to another embodiment of the present invention, and the flow chart of the method for recording a movement track of an operating device according to another embodiment of the present invention shown in fig. 2 includes:
step S201, acquiring bounding box data marked at the target position in target image data;
step S202, calculating to obtain first central point data at the target position based on the bounding box data;
step S203, performing data fitting on the first central point data to obtain first fitting central point data corresponding to the first central point data;
step S204, performing interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position.
By implementing the above steps S203 to S204, fitting and interpolation operations may be performed on the first central point data, so that the obtained second central point data is more accurate and coherent.
In the embodiment of the present invention, an N-degree polynomial fit may be performed on the first central point data to obtain the fitted first fitting central point data, and an interpolation operation may then be performed on the first fitting central point data to obtain the second central point data corresponding to the first central point data. The interpolation operation may be N-degree B-spline interpolation; alternatively, the interpolation of the first fitting central point data may be implemented with Lagrange interpolation, a Kalman filter, or the like.
As an optional implementation manner, the manner of performing data fitting on the first central point data to obtain first fitting central point data corresponding to the first central point data may include the following steps: denoising the first central point data based on a denoising algorithm to obtain denoised first central point data; and performing data fitting on the denoised first central point data to obtain first fitting central point data. By implementing this embodiment, a denoising operation can be performed on the first central point data first, and data fitting can then be performed on the denoised first central point data, so that the error of the obtained first fitting central point data is small.
In the embodiment of the invention, because the position of a target in the target image data changes as the target moves, the bounding boxes identifying the target are generated discretely with a certain probability. Expressed in the space-time coordinate system corresponding to the target image data, the bounding box output is therefore usually discontinuous, so a dedicated denoising algorithm is used to filter the noise from the first central point data of the bounding boxes generated over the time period corresponding to the target image data, finally yielding continuous and smooth first central point data of the bounding boxes.
The specific denoising method may be: traverse the current time array t_arr in the target image data with traversal index i, and generate two arrays t_plus_arr and t_min_arr:
t_plus_arr=[t_arr[i]+1,t_arr[i]+2,...t_arr[i]+n]
t_min_arr=[t_arr[i]-1,t_arr[i]-2,...t_arr[i]-n]
If the bounding box data corresponding to the moments in the arrays t_plus_arr and t_min_arr does not exceed a set threshold, the current moment can be judged to be noise; that is, if t_arr[i] corresponding to the current traversal index i is noise, t_arr[i] and the first central point data corresponding to t_arr[i] are deleted; otherwise the first central point data corresponding to the traversal index i is left unchanged. When the traversal of the arrays t_plus_arr and t_min_arr is completed in this way, the current time array t_arr after traversal is obtained, and the first central point data corresponding to this current time array t_arr is the denoised first central point data.
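A minimal Python sketch of this denoising rule is given below; it assumes that a moment is kept only when enough of the neighbouring moments t±1 ... t±n also carry bounding box data, and the threshold and neighbourhood size are illustrative values.

# Assumed sketch of the denoising rule described above.
def denoise(t_arr, centers, n=3, threshold=2):
    # t_arr: current time array; centers: {current time: first central point data}
    times = set(t_arr)
    kept_t, kept_centers = [], {}
    for t in t_arr:
        t_plus_arr = [t + k for k in range(1, n + 1)]
        t_min_arr = [t - k for k in range(1, n + 1)]
        # count neighbouring moments that also carry bounding box data
        support = sum(1 for tt in t_plus_arr + t_min_arr if tt in times)
        if support > threshold:                  # enough support: this moment is not noise
            kept_t.append(t)
            kept_centers[t] = centers[t]
        # otherwise the moment is judged to be noise and is dropped together with its centre point
    return kept_t, kept_centers

t_arr = [1, 2, 3, 4, 5, 40]
centers = {t: (2.0 * t, 3.0 * t) for t in t_arr}
print(denoise(t_arr, centers)[0])                # the isolated moment 40 is removed: [1, 2, 3, 4, 5]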
Further, the target image data includes a current time corresponding to the first central point data, the first central point data includes central point horizontal data and central point longitudinal data, and the method of performing data fitting on the denoised first central point data to obtain first fitting central point data may include the following steps: performing data fitting on the central point transverse data in the denoised first central point data based on the current moment to obtain a transverse fitting function corresponding to the central point transverse data; calculating to obtain transverse fitting data based on the current time and the transverse fitting function; performing data fitting on the central point longitudinal data in the denoised first central point data based on the current moment to obtain a longitudinal fitting function corresponding to the central point longitudinal data; calculating to obtain longitudinal fitting data based on the current time and the longitudinal fitting function; and determining the transverse fitting data and the longitudinal fitting data as first fitting center point data corresponding to the first center point data. By implementing the implementation mode, data fitting operation can be respectively carried out on the horizontal data of the central point and the vertical data of the central point contained in the first central point data, so that horizontal fitting data and vertical fitting data are obtained, and the horizontal fitting data and the vertical fitting data can be comprehensively determined as the first fitting central point data, so that the first fitting central point data is more accurate.
In the embodiment of the invention, the first central point data can comprise central point transverse data and central point longitudinal data, so the transverse data and the longitudinal data can be fitted separately, making the obtained first fitting central point data more accurate. A polynomial fit of degree N can be performed on the central point transverse data on the basis of the current time to obtain a transverse fitting function corresponding to the transverse data; the current time is then input into the transverse fitting function and evaluated to obtain the transverse fitting data corresponding to the current time. Likewise, a polynomial fit of degree N can be performed on the central point longitudinal data on the basis of the current time to obtain a longitudinal fitting function corresponding to the longitudinal data; the current time is input into the longitudinal fitting function and evaluated to obtain the longitudinal fitting data corresponding to the current time. The transverse fitting data and the longitudinal fitting data corresponding to the same current time can then be determined as the fitted first fitting central point data corresponding to that time.
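For illustration, the per-axis fitting can be sketched with numpy as follows; the degree N=3 and the sample coordinates are arbitrary choices made for the example.

import numpy as np

t = np.array([0, 1, 2, 3, 4, 5], dtype=float)          # current times after denoising
cx = np.array([10, 12, 15, 19, 24, 30], dtype=float)   # central point transverse data
cy = np.array([5, 6, 8, 11, 15, 20], dtype=float)      # central point longitudinal data

N = 3                                                  # assumed polynomial degree
fx = np.poly1d(np.polyfit(t, cx, N))                   # transverse fitting function
fy = np.poly1d(np.polyfit(t, cy, N))                   # longitudinal fitting function

fit_x = fx(t)                                          # transverse fitting data at the current times
fit_y = fy(t)                                          # longitudinal fitting data at the current times
first_fitting_centers = np.stack([fit_x, fit_y], axis=1)   # first fitting central point data
print(first_fitting_centers[:2])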
As an optional implementation manner, a manner of performing an interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position may include the following steps: calculating to obtain a transverse smooth spline curve coefficient based on the current moment, the central point transverse data in the first fitting central point data and a preset parameter; calculating to obtain a longitudinal smooth spline curve coefficient based on the current moment, the central point longitudinal data in the first fitting central point data and the preset parameter; selecting a maximum time and a minimum time from current times contained in the target image data; constructing time data based on the maximum time and the minimum time; performing interpolation operation on the transverse data of the central point based on the time data and the transverse smooth spline curve coefficient to obtain transverse data of the interpolated central point; performing interpolation operation on the longitudinal data of the central point based on the time data and the longitudinal smooth spline curve coefficient to obtain the longitudinal data of the interpolated central point; and obtaining second central point data corresponding to the first central point data at the target position based on the transverse data of the interpolation central point and the longitudinal data of the interpolation central point. By implementing the implementation mode, the transverse smooth spline curve coefficient and the longitudinal smooth spline curve coefficient of the central point transverse data and the central point longitudinal data contained in the first central point data can be respectively calculated, and then interpolation operation can be respectively carried out based on the transverse smooth spline curve coefficient and the longitudinal smooth spline curve coefficient to obtain the interpolation central point transverse data and the interpolation central point longitudinal data, so that the second central point data generated according to the interpolation central point transverse data and the interpolation central point longitudinal data is more consistent.
In the embodiment of the present invention, interpolation operations may be performed on the central point transverse data and the central point longitudinal data contained in the first central point data. A preset parameter may first be determined; it may be a user-defined parameter specifying the degree of the basis function of the smooth spline curve. The basis function of the smooth spline curve may be a B-spline basis function (B-spline Basis Functions), and all the B-spline basis functions over the interval corresponding to the current times in the target image data form a linear space.
In addition, since one or more different types of targets can be identified in the target image data, and the movement track identified by one type of target should be a coherent curve while the curves identified by different types of targets usually do not coincide, the first fitting central point data must be distinguished by target type when performing the interpolation operation. That is, for any type of target, the transverse smooth spline curve coefficient corresponding to that type can be calculated based on the central point transverse data in the first fitting central point data corresponding to that type in the target image data, the current times corresponding to that transverse data, and the preset parameter; and the longitudinal smooth spline curve coefficient corresponding to that type can likewise be calculated based on the central point longitudinal data in the corresponding first fitting central point data, the current times corresponding to that longitudinal data, and the preset parameter.
In the embodiment of the present invention, the transverse smooth spline curve coefficient and the longitudinal smooth spline curve coefficient obtained by the above calculation are respectively corresponding to different target types, so that different targets may have different appearance times in the target image data, and if it is desired to integrate multiple curves corresponding to different target types into the same coordinate system, it is necessary to obtain appearance times and disappearance times of all types of targets, select the shortest time from multiple appearance times and the longest time from multiple disappearance times, and construct time data applicable to curves corresponding to all target types based on the longest time and the shortest time.
Furthermore, a transverse smooth spline curve can be calculated from the constructed time data and the transverse smooth spline curve coefficients; the values derived from this curve then replace the central point transverse data, which implements the interpolation of the central point transverse data and yields the interpolated central point transverse data corresponding to it. Likewise, a longitudinal smooth spline curve can be calculated from the constructed time data and the longitudinal smooth spline curve coefficients; the values derived from this curve replace the central point longitudinal data, which implements the interpolation of the central point longitudinal data and yields the interpolated central point longitudinal data corresponding to it.
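By way of illustration, the sketch below uses SciPy's smoothing-spline routines to stand in for the transverse and longitudinal smooth spline curve coefficients and for the interpolation over the constructed time data; the spline degree k and smoothing factor s play the role of the preset parameter and, like the sample coordinates, are assumptions made for the example.

import numpy as np
from scipy.interpolate import splrep, splev

t = np.array([0, 1, 2, 4, 5, 7, 8], dtype=float)           # current times for one target type
cx = np.array([10, 12, 15, 22, 26, 35, 40], dtype=float)   # central point transverse data (fitted)
cy = np.array([5, 6, 8, 13, 16, 24, 28], dtype=float)      # central point longitudinal data (fitted)

k, s = 3, 0.5                                  # assumed stand-ins for the preset parameter
tck_x = splrep(t, cx, k=k, s=s)                # transverse smooth spline curve coefficients
tck_y = splrep(t, cy, k=k, s=s)                # longitudinal smooth spline curve coefficients

time_data = np.arange(t.min(), t.max() + 1)    # time data built from the minimum and maximum times
interp_x = splev(time_data, tck_x)             # interpolated central point transverse data
interp_y = splev(time_data, tck_y)             # interpolated central point longitudinal data
second_centers = np.stack([interp_x, interp_y], axis=1)    # second central point data
print(second_centers.shape)                    # one central point per constructed moment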
In addition, any time in the time data can be taken. Since the target image data may contain one or more types of targets, that time corresponds, for each type of target, to interpolated central point transverse data and interpolated central point longitudinal data, and these interpolated data may differ between target types. The interpolated central point transverse data and interpolated central point longitudinal data belonging to the same target type can therefore be combined to obtain the second central point data corresponding to the first central point data of that target type. Since, at any moment, one type of target in the target image data corresponds to only one target position, the obtained second central point data likewise corresponds to the first central point data at the target position.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating second center point data of a plurality of targets in the target image data generated according to the embodiment of the invention; the t axis represents time data, and the x axis and the y axis represent movement tracks corresponding to second center point data of different targets in the target image data, the curve a may represent a movement track corresponding to the second center point data of a target, i.e., a yellow cauliflower tumor identified in the target image data, the curve b may represent a movement track corresponding to the second center point data of a target, i.e., an electrotome (surgical instrument) identified in the target image data, and the curve c may represent a movement track corresponding to the second center point data of a target, i.e., a cut tissue identified in the target image data.
Step S205, selecting a reference object from the targets identified by the target image data;
step S206, updating second center point data corresponding to targets except the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated moving center point data corresponding to the targets;
in an embodiment of the present invention, the target other than the reference object is at least one of the operation device, an acquisition device for acquiring the original image data, and an operation target corresponding to a target position in the target image data. Therefore, the target image data can contain various targets, and then the movement tracks of various targets can be obtained by simultaneous calculation from the target image data, so that the diversity of the movement tracks obtained by calculation is increased.
As can be seen, when the target is an operation device, (1) the method for updating the second center point data corresponding to the target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain the updated moving center point data corresponding to the target may include the following steps: acquiring second central point data corresponding to the reference object and second central point data corresponding to the operating equipment from the target image data; and updating second center point data corresponding to the operating equipment based on the second center point data corresponding to the reference object and the second center point data corresponding to the operating equipment to obtain updated moving center point data corresponding to the operating equipment. When the target is the operation device, the second center point data of the operation device can be updated again based on the updated second center point data of the reference object and the second center point data of the operation device, and the movement track of the operation device can be calculated and obtained based on the updated second center point data, so that the accuracy of the movement track of the operation device is improved.
In an embodiment of the present invention, the operating device may be a surgical instrument such as an electric knife. In practice, a stationary background can be used as the reference object during the operation; that is, the reference object is stationary and only the operating device moves, so the movement relationship between the operating device and the reference object can be determined from the reference object. The second central point data of the reference object and the time array t_arr (which contains the time data t) can be obtained from the target image data; the second central point data of the reference object may include second central point horizontal axis data bg_x and second central point vertical axis data bg_y, from which the second central point horizontal axis array bg_x_arr and the second central point vertical axis array bg_y_arr are obtained. The second central point data of the operating device may likewise be obtained from the target image data; it may include second central point horizontal axis data elec_x and second central point vertical axis data elec_y, from which the second central point horizontal axis array elec_x_arr and the second central point vertical axis array elec_y_arr of the operating device are obtained. The time array t_arr can then be traversed based on the second central point data of the operating device to obtain the index of the time array t_arr, and the moving central point horizontal axis array new_elec_x_arr (containing the moving central point horizontal axis data new_elec_x) and the moving central point vertical axis array new_elec_y_arr (containing the moving central point vertical axis data new_elec_y) corresponding to the operating device are calculated based on the second central point horizontal axis array bg_x_arr and the second central point vertical axis array bg_y_arr of the reference object.
specifically, the calculation method of the moving center point horizontal axis data new_elec_x corresponding to the operating device may be:

new_elec_x=elec_x_arr[index]-bg_x_arr[index]

the calculation method of the moving center point vertical axis data new_elec_y corresponding to the operating device may be:

new_elec_y=elec_y_arr[index]-bg_y_arr[index]
and the moving center point horizontal axis array new_elec_x_arr corresponding to the operating device is obtained based on the obtained moving center point horizontal axis data new_elec_x corresponding to the operating device, the moving center point vertical axis array new_elec_y_arr corresponding to the operating device is obtained based on the obtained moving center point vertical axis data new_elec_y corresponding to the operating device, and the moving center point horizontal axis data in the moving center point horizontal axis array new_elec_x_arr corresponding to the operating device is combined with the moving center point vertical axis data in the moving center point vertical axis array new_elec_y_arr, so as to obtain the moving center point data corresponding to the operating device.
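For illustration only, the relative update described above may be sketched as follows; the array names follow the notation of this embodiment, while the helper function itself is a hypothetical aid rather than part of the claimed method:

```python
# A minimal sketch, assuming the per-frame center point arrays have already been
# extracted from the target image data and share the indexing of the time array t_arr.
from typing import List, Tuple

def relative_track(t_arr: List[float],
                   bg_x_arr: List[float], bg_y_arr: List[float],
                   elec_x_arr: List[float], elec_y_arr: List[float]
                   ) -> Tuple[List[float], List[float]]:
    """Update the operating device center points relative to the reference object."""
    new_elec_x_arr: List[float] = []
    new_elec_y_arr: List[float] = []
    for index in range(len(t_arr)):
        new_elec_x_arr.append(elec_x_arr[index] - bg_x_arr[index])  # new_elec_x
        new_elec_y_arr.append(elec_y_arr[index] - bg_y_arr[index])  # new_elec_y
    return new_elec_x_arr, new_elec_y_arr
```

The operation target case in (3) below may be handled with the same subtraction, with polyp_x_arr and polyp_y_arr in place of the electric knife arrays.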
(2) When the target is the acquisition device, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object, and obtaining updated moving center point data corresponding to the target may include: acquiring second central point data corresponding to the reference object and second central point data corresponding to the acquisition equipment from the target image data; acquiring the width and the height of the target image data; and updating the second central point data corresponding to the acquisition equipment based on the second central point data corresponding to the reference object, the second central point data corresponding to the acquisition equipment and the width and height of the target image data to obtain updated moving central point data corresponding to the acquisition equipment. When the target is the acquisition equipment, the second central point data of the acquisition equipment can be updated again based on the updated second central point data of the reference object, the second central point data of the acquisition equipment and the width and height of the target image data, and the movement track of the acquisition equipment is calculated and obtained based on the second central point data updated again, so that the accuracy of the movement track of the acquisition equipment is improved.
In the embodiment of the invention, in practice, the reference object shot by the acquisition equipment is stationary while the acquisition equipment moves, and the moving direction of the acquisition equipment and the moving direction of the reference object are opposite, that is, when the acquisition equipment moves upwards the reference object moves downwards, when the acquisition equipment moves leftwards the reference object moves rightwards, and so on, so that the movement relationship between the acquisition equipment and the reference object can be determined. The width w and the height h of the target image data may be acquired, the second center point data of the reference object and a time array t_arr (the time array t_arr contains time data t) may be acquired from the target image data, the second center point data of the reference object may contain second center point horizontal axis data bg_x and second center point vertical axis data bg_y, the second center point horizontal axis array bg_x_arr may be obtained based on the second center point horizontal axis data bg_x, and the second center point vertical axis array bg_y_arr may be obtained based on the second center point vertical axis data bg_y. The time array t_arr may be traversed based on the second center point data of the acquisition equipment to obtain the index of the time array t_arr, and the moving center point horizontal axis array cam_x_arr and the moving center point vertical axis array cam_y_arr corresponding to the acquisition equipment may be calculated based on the second center point horizontal axis array bg_x_arr and the second center point vertical axis array bg_y_arr, where the moving center point horizontal axis array cam_x_arr contains moving center point horizontal axis data cam_x and the moving center point vertical axis array cam_y_arr contains moving center point vertical axis data cam_y;
specifically, the calculation method of the moving center point horizontal axis data cam_x corresponding to the acquisition equipment may be:

cam_x=w-bg_x_arr[index]

the calculation method of the moving center point vertical axis data cam_y corresponding to the acquisition equipment may be:

cam_y=h-bg_y_arr[index]
and the moving center point horizontal axis array cam_x_arr is obtained based on the obtained moving center point horizontal axis data cam_x corresponding to the acquisition equipment, the moving center point vertical axis array cam_y_arr is obtained based on the obtained moving center point vertical axis data cam_y corresponding to the acquisition equipment, and the moving center point horizontal axis data in the moving center point horizontal axis array cam_x_arr corresponding to the acquisition equipment is combined with the moving center point vertical axis data in the moving center point vertical axis array cam_y_arr, so as to obtain the moving center point data corresponding to the acquisition equipment.
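As a hedged illustration of the opposite-direction relationship just described, the following sketch assumes w and h are the width and height of the target image data and that bg_x_arr / bg_y_arr are the reference object center point arrays; the function name is a hypothetical aid:

```python
# A minimal sketch: the acquisition equipment moves opposite to the stationary
# reference object, so its center points mirror the reference within the frame.
from typing import List, Tuple

def acquisition_track(w: int, h: int,
                      bg_x_arr: List[float],
                      bg_y_arr: List[float]) -> Tuple[List[float], List[float]]:
    cam_x_arr = [w - bg_x for bg_x in bg_x_arr]  # cam_x = w - bg_x_arr[index]
    cam_y_arr = [h - bg_y for bg_y in bg_y_arr]  # cam_y = h - bg_y_arr[index]
    return cam_x_arr, cam_y_arr
```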
(3) When the target is an operation target, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object, and obtaining updated moving center point data corresponding to the target may include: acquiring second central point data corresponding to the reference object and second central point data corresponding to the operation target from the target image data; and updating second center point data corresponding to the operation target based on the second center point data corresponding to the reference object and the second center point data corresponding to the operation target to obtain updated moving center point data corresponding to the operation target. When the target is the operation target, the second central point data of the operation target can be updated again based on the updated second central point data of the reference object and the second central point data of the operation target, and the movement track of the operation target can be calculated and obtained based on the second central point data which is updated again, so that the accuracy of the movement track of the operation target is improved.
In the embodiment of the present invention, the operation target may be a lesion such as a polyp. In practice, a stationary background may be used as the reference object during the operation process, that is, when the reference object is known, the moving center point data of the operation target may be calculated so as to obtain the moving center point data of the real operation target. The second center point data of the reference object and a time array t_arr (the time array t_arr contains time data t) may be obtained from the target image data; the second center point data of the reference object may contain second center point horizontal axis data bg_x and second center point vertical axis data bg_y, the second center point horizontal axis array bg_x_arr may be obtained based on the second center point horizontal axis data bg_x, and the second center point vertical axis array bg_y_arr may be obtained based on the second center point vertical axis data bg_y. The second center point data of the operation target may also be obtained from the target image data, where the second center point data of the operation target may contain second center point horizontal axis data polyp_x and second center point vertical axis data polyp_y; the second center point horizontal axis array polyp_x_arr of the operation target may be obtained based on the second center point horizontal axis data polyp_x of the operation target, and the second center point vertical axis array polyp_y_arr of the operation target may be obtained based on the second center point vertical axis data polyp_y of the operation target. The time array t_arr may be traversed based on the second center point data of the operation target to obtain the index of the time array t_arr, and the moving center point horizontal axis array new_polyp_x_arr and the moving center point vertical axis array new_polyp_y_arr corresponding to the operation target may be calculated based on the second center point horizontal axis array bg_x_arr and the second center point vertical axis array bg_y_arr of the reference object, where the moving center point horizontal axis array new_polyp_x_arr contains moving center point horizontal axis data new_polyp_x and the moving center point vertical axis array new_polyp_y_arr contains moving center point vertical axis data new_polyp_y;
specifically, the calculation method of the moving center point horizontal axis data new_polyp_x corresponding to the operation target may be:

new_polyp_x=polyp_x_arr[index]-bg_x_arr[index]

the calculation method of the moving center point vertical axis data new_polyp_y corresponding to the operation target may be:

new_polyp_y=polyp_y_arr[index]-bg_y_arr[index]
and the moving center point horizontal axis array new_polyp_x_arr corresponding to the operation target is obtained based on the obtained moving center point horizontal axis data new_polyp_x corresponding to the operation target, the moving center point vertical axis array new_polyp_y_arr corresponding to the operation target is obtained based on the obtained moving center point vertical axis data new_polyp_y corresponding to the operation target, and the moving center point horizontal axis data in the moving center point horizontal axis array new_polyp_x_arr corresponding to the operation target is combined with the moving center point vertical axis data in the moving center point vertical axis array new_polyp_y_arr, so as to obtain the moving center point data corresponding to the operation target.
Step S207, calculating a movement trajectory of the operation device based on the movement center point data corresponding to the target of the operation device in the updated target image data.
By performing the above-described steps S206 to S207, the reference object may be selected from the plurality of targets in the target video data, the second center point data of the other targets may be updated based on the second center point data of the reference object, and the movement trajectory of the operating device may be calculated from the updated second center point data, so that the obtained movement trajectory of the operating device is clearer and easier to understand.
Referring to fig. 4, fig. 4 is a diagram illustrating the movement trajectories of a plurality of targets in the target image data based on the movement trajectory of the reference object according to an embodiment of the invention; the t-axis represents time data, and the x-axis and the y-axis represent the movement trajectories of a plurality of different targets in the target image data based on the movement trajectory of the reference object; the curve d may represent the movement trajectory of the operation target, i.e. the cut tissue, calculated based on the movement trajectory of the reference object, the curve e may represent the movement trajectory of the operating device calculated based on the movement trajectory of the reference object, and the curve f may represent the movement trajectory of the operating device, on which the interpolation operation has been performed, calculated based on the movement trajectory of the reference object.
Referring to fig. 5, fig. 5 is a schematic flow chart of a method for recording a movement trace of an operating device according to another embodiment of the present invention, where the flow chart of the method for recording a movement trace of an operating device according to another embodiment of the present invention shown in fig. 5 includes:
step S501, performing target detection on original image data of the operating equipment through an example segmentation model to obtain target image data of a rectangular surrounding frame displayed at the target position and/or a polygonal surrounding frame displayed at the target position in the original image data;
step S502, constructing a first type mapping sub-dictionary and/or a second type mapping sub-dictionary based on a rectangular surrounding frame displayed at the target position and/or a polygonal surrounding frame displayed at the target position in the target image data traversed in a time sequence;
step S503, generating a type mapping dictionary based on the first type mapping sub-dictionary and/or the second type mapping sub-dictionary.
By implementing the above steps S501 to S503, the target detection may be performed on the original image data through the example segmentation model, the rectangular bounding box and/or the polygonal bounding box may be displayed at the detected target position, and the type mapping dictionary may be constructed based on the rectangular bounding box and/or the polygonal bounding box displayed at the target position, so that the obtained information of the type mapping dictionary is more comprehensive.
In an embodiment of the present invention, an instance segmentation model may be constructed through a neural network, and the raw image data may be a picture or video data acquired by an image acquisition device (e.g., a camera, an endoscope, etc.). The original image data usually includes targets to be detected; for example, when the original image data is an intra-cavity image of a patient acquired by an endoscope, targets such as a lesion region are usually detected from the intra-cavity image, and one or more targets can be detected from the same original image data.
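For illustration only, the per-frame output of such an instance segmentation model might be organized as records similar to the following sketch before the type mapping dictionaries are built; all field names here are assumptions and not part of the claimed method:

```python
# Illustrative only: one possible per-frame record for a detection produced by the
# instance segmentation model; none of these field names are mandated by the method.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    label: str                                   # target type, e.g. "electric knife" or "polyp"
    t: float                                     # current time of the frame
    rect: Optional[Tuple[int, int, int, int]]    # rectangular bounding box as (x, y, w, h)
    polygon: Optional[List[Tuple[int, int]]]     # polygon bounding box as contour points
```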
As an optional implementation manner, the constructing the first-type mapping sub-dictionary and/or the second-type mapping sub-dictionary based on time-sequentially traversing the rectangular bounding box displayed at the target position and/or the polygonal bounding box displayed at the target position in the target image data may include the following steps: traversing a rectangular bounding box displayed at the target position in the target image data based on time sequence, and constructing a first type mapping sub-dictionary containing bounding box data of the rectangular bounding box; and/or traversing a polygon bounding box displayed at the target position in the target image data based on time sequence, and constructing a second type mapping sub-dictionary containing bounding box data of the polygon bounding box. By implementing the implementation mode, the first type mapping dictionary and the second type mapping sub-dictionary of the rectangular bounding box and the polygonal bounding box can be respectively constructed, so that the type mapping dictionaries corresponding to the bounding boxes with different shapes can be distinguished, and the diversity of the type mapping dictionaries is improved.
In the embodiment of the present invention, the first type mapping dictionary may represent a mapping relationship between a target type of a target identified by a rectangular bounding box, a current time, and bounding box data of the rectangular bounding box, and further may perform associated storage on the bounding box data of the rectangular bounding box and the target type of the target identified by the rectangular bounding box based on the first type mapping dictionary; the second type mapping dictionary may represent a mapping relationship between a target type of a target identified by the polygon bounding box, a current time, and bounding box data of the polygon bounding box, and may further perform associated storage on the bounding box data of the polygon bounding box and the target type of the target identified by the polygon bounding box based on the second type mapping dictionary.
In addition, when only a rectangular bounding box exists in the original image data, the type mapping dictionary is the first type mapping dictionary; when only the polygon bounding box exists in the original image data, the type mapping dictionary is a second type mapping dictionary; when the original image data has both a rectangular bounding box and a polygonal bounding box, the type mapping dictionary is the sum of the first type mapping dictionary and the second type mapping dictionary.
Optionally, the step of constructing the first type mapping sub-dictionary including bounding box data of the rectangular bounding box based on traversing the rectangular bounding box displayed at the target position in the target image data in a time sequence may include the following steps: acquiring first detection content data corresponding to a rectangular surrounding frame displayed at the target position based on time sequence from the target image data; the first detection content data at least comprises a first target type of a target identified by the rectangular bounding box, a first current time and first boundary data; the first boundary data is bounding box data of the rectangular bounding box; traversing the first detection content data based on time sequence to construct a first type mapping sub-dictionary; the first type mapping sub-dictionary at least comprises a first time mapping dictionary, the first time mapping dictionary is in one-to-one correspondence with the first target types, and the first target types corresponding to any two first time mapping dictionaries are different. By implementing the implementation mode, the first type mapping dictionary can be constructed based on the first target type of the target identified by the rectangular bounding box, the first current time and the first boundary data of the rectangular bounding box, so that the comprehensiveness of the data contained in the first type mapping dictionary is ensured.
Further, the method for constructing the first type mapping sub-dictionary based on time-series traversal of the first detection content data may include the following steps: traversing the first detection content data based on time sequence to construct a first time mapping dictionary, wherein the first time mapping dictionary comprises at least one time key value pair of first current time and first boundary data, and the first current time and the first boundary data in the time key value pair are obtained from the same first detection content data; and constructing a first type mapping sub-dictionary based on the first detection content data and the first time mapping dictionary, wherein the first type mapping sub-dictionary comprises at least one type key value pair of a first target type and the first time mapping dictionary, and the first target type and a first current time and first boundary data in the first time mapping dictionary belong to the same first detection content data. When the implementation mode is implemented, the first time mapping dictionary can be constructed based on the first current time and the first boundary data, and the first type mapping dictionary can be constructed based on the first time mapping dictionary and the first target type, so that the data structure in the first type mapping dictionary is clearer.
In the embodiment of the present invention, a first type mapping sub-dictionary coord_dict1 may be created, and the first type mapping sub-dictionary coord_dict1 may store a type key value pair of a first target type label1 and a first time mapping dictionary temp_coord1, that is: coord_dict1[label1]=temp_coord1; the first time mapping dictionary temp_coord1 may store time key value pairs of the first current time t and the first boundary data, where the first boundary data may be calculated from the width w and the height h of the rectangular bounding box and the upper left corner horizontal axis data x and vertical axis data y of the rectangular bounding box; the first time mapping dictionary temp_coord1 may specifically be:
temp_coord1["t"]=[t1,t2,...tn]
temp_coord1["left_top"]=[(x1,y1),(x2,y2),...(xn,yn)]
temp_coord1["right_top"]=[(x1+w1,y1),(x2+w2,y2),...(xn+wn,yn)]
temp_coord1["left_bottom"]=[(x1,y1+h1),(x2,y2+h2),...(xn,yn+hn)]
temp_coord1["right_bottom"]=[(x1+w1,y1+h1),(x2+w2,y2+h2),...(xn+w2,yn+hn)]
where n may be the maximum value of the current time in the target image data.
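A minimal sketch of how the first type mapping sub-dictionary coord_dict1 might be populated is given below; it assumes the rectangular-box detections are supplied as time-ordered (label1, t, x, y, w, h) tuples, which is an illustrative assumption rather than a requirement of the method:

```python
# A minimal sketch of building coord_dict1 from time-ordered rectangular-box detections.
from typing import Dict, Iterable, Tuple

def build_coord_dict1(detections: Iterable[Tuple[str, float, int, int, int, int]]) -> Dict:
    coord_dict1: Dict[str, Dict[str, list]] = {}
    for label1, t, x, y, w, h in detections:
        temp_coord1 = coord_dict1.setdefault(label1, {
            "t": [], "left_top": [], "right_top": [],
            "left_bottom": [], "right_bottom": []})
        temp_coord1["t"].append(t)
        temp_coord1["left_top"].append((x, y))          # upper left vertex
        temp_coord1["right_top"].append((x + w, y))     # upper right vertex
        temp_coord1["left_bottom"].append((x, y + h))   # lower left vertex
        temp_coord1["right_bottom"].append((x + w, y + h))  # lower right vertex
    return coord_dict1
```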
Referring to fig. 6 and fig. 7 together, fig. 6 is a schematic diagram illustrating first boundary data of rectangular bounding boxes corresponding to a plurality of targets in the original image data output according to an embodiment of the present invention; wherein "●" represents a schematic position of the first boundary data of the rectangular bounding box identifying the yellow cauliflower tumor target (i.e. a schematic position of the four vertices of the rectangular bounding box); "▲" represents a schematic position of the first boundary data of the rectangular bounding box identifying the electric knife target (i.e. a schematic position of the four vertices of the rectangular bounding box); and "■" represents a schematic position of the first boundary data of the rectangular bounding box identifying the post-cut tissue target (i.e. a schematic position of the four vertices of the rectangular bounding box).
FIG. 7 is a schematic diagram illustrating bounding box data of the rectangular bounding boxes corresponding to a plurality of targets in the original image data according to an embodiment of the present invention; in FIG. 7, the first boundary data of the rectangular bounding box corresponding to the yellow cauliflower tumor contained in FIG. 6 are connected to form the rectangular bounding box identifying the yellow cauliflower tumor; the first boundary data of the rectangular bounding box corresponding to the electrotome contained in FIG. 6 are also connected to form the rectangular bounding box identifying the electrotome; and the first boundary data of the rectangular bounding box corresponding to the cut tissue contained in FIG. 6 are also connected to form the rectangular bounding box identifying the cut tissue.
Optionally, the method for constructing the second-type mapping sub-dictionary including bounding box data of the polygon bounding box based on traversing the polygon bounding box displayed at the target position in the target image data in a time sequence may include the following steps: acquiring second detection content data corresponding to a polygon bounding box displayed at the target position based on time sequence from the target image data; the second detection content data at least comprises a second target type of the target identified by the polygon bounding box, a second current time and second boundary data; the second boundary data is bounding box data of the polygon bounding box; compressing the second boundary data in the second detection content data to obtain compressed second detection content data; calculating to obtain approximate boundary data corresponding to second boundary data based on the second boundary data in the compressed second detection content data; constructing a second type mapping sub-dictionary based on time-series traversal of the second detection content data and the approximate boundary data; the second type mapping sub-dictionary at least comprises a second time mapping dictionary, the second time mapping dictionary is in one-to-one correspondence with the second target type, and the second target types corresponding to any two second time mapping dictionaries are different. By implementing the implementation mode, the bounding box data of the polygon bounding box can be compressed to obtain second detection content data, the second detection content data is subjected to approximate operation to obtain approximate boundary data, and a second type mapping dictionary is constructed based on the second detection content data and the approximate boundary data, so that the standard property of the data contained in the second type mapping dictionary is ensured.
Further, the method for constructing the second type mapping sub-dictionary based on time-sequentially traversing the second detected content data and the approximate boundary data may comprise the steps of: traversing the second detection content data and the approximate boundary data based on time sequence to construct a second time mapping dictionary, wherein the second time mapping dictionary comprises at least one time key value pair of a second current time and the approximate boundary data, and the second current time and the approximate boundary data in the time key value pair are obtained from the same second detection content data; and constructing a second type mapping sub-dictionary based on the second detection content data and the second time mapping dictionary, wherein the second type mapping sub-dictionary comprises at least one type key value pair of a second target type and the second time mapping dictionary, and the second target type and second current time and approximate boundary data in the second time mapping dictionary belong to the same second detection content data. Wherein, when such an embodiment is implemented, the second time mapping dictionary may be constructed based on the second detected content data and the second approximate boundary data, and the second type mapping dictionary may be constructed based on the second time mapping dictionary and the second target type, so as to make the data structure in the second type mapping dictionary clearer.
In this embodiment of the present invention, a second type mapping sub-dictionary coord_dict2 may be created, and the second type mapping sub-dictionary coord_dict2 may store a type key value pair of a second target type label2 and a second time mapping dictionary temp_coord2, that is: coord_dict2[label2]=temp_coord2; the second time mapping dictionary temp_coord2 may store time key value pairs of the second current time t and the second boundary data, where the second boundary data may be obtained from the calculated upper left corner data left_top, upper right corner data right_top, lower left corner data left_bottom and lower right corner data right_bottom of the polygon bounding box; the second time mapping dictionary temp_coord2 may specifically be:
temp_coord["left_top"]=[left_top_coord1,left_top_coord2,...,left_top_coordn]
temp_coord["right_top"]=[right_top_coord1,right_top_coord2,...,right_top_coordn]
temp_coord[left_bottom]=[left_bottom_coord1,left_bottom_coord2,...,left_bottom_coordn]
temp_coord["right_bottom"]
=[right_bottom_coord1,right_bottom_coord2,...,right_bottom_coordn]
where n may be the maximum value of the current time in the target image data.
In addition, a method for compressing the second boundary data in the second detection content data to obtain compressed second detection content data may include the following steps: compressing the second boundary data of the second detection content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data; reducing the one-dimensional compressed boundary data to obtain one-dimensional reduced boundary data corresponding to the one-dimensional compressed boundary data; calculating the one-dimensional reduction boundary data to obtain compressed second boundary data; and updating the second detection content through the compressed second boundary data to obtain compressed second detection content data. By implementing the implementation manner, the second boundary data in the second detection content data can be compressed to obtain one-dimensional compressed boundary data, the one-dimensional compressed boundary data is reduced to obtain one-dimensional reduced boundary data, and the one-dimensional reduced boundary data can be calculated to obtain the compressed second boundary data, so that the second boundary data in the second detection content data is more accurate.
Further, the method of compressing the second boundary data of the second detected content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data may include the following steps: acquiring the width and the height of the target image data, and determining a compression step length; calculating an initialization length based on the width and the height of the target image data and the compression step length; creating one-dimensional compression boundary data corresponding to the second boundary data, wherein the length of the one-dimensional compression boundary data is the initialization length; performing compression calculation on the second boundary data of the second detection content data based on the compression step and the width of the original image data to obtain a first index value corresponding to the second boundary data; and calculating the compression boundary data corresponding to the first index value in the one-dimensional compression boundary data based on the second boundary data and the first index value. By implementing this embodiment, the second boundary data can be compressed based on the width and height of the target image data and the compression step size, so that the obtained one-dimensional compressed boundary data is more accurate.
In the embodiment of the present invention, the width w and the height h of the target image data are obtained, and a specified compression step m is obtained, where the compression step m may be an interval between adjacent pixels; one-dimensional compression boundary data count_list corresponding to the second boundary data may be created, and the initial length Len1 of the one-dimensional compression boundary data count_list may be calculated based on the width w and the height h of the target image data and the compression step m, for example, as:

Len1=ceil(w/m)*ceil(h/m)

wherein ceil is a rounding-up symbol; the initial value of the one-dimensional compression boundary data count_list is defaulted to 0;
and the second boundary data is traversed to obtain each coordinate point of the second boundary data, where x is the horizontal axis data of a coordinate point of the second boundary data and y is the vertical axis data of the coordinate point of the second boundary data, and a compression operation is then performed on the coordinate point of the second boundary data to obtain the first index value index1 corresponding to the second boundary data, where the first index value index1 may be calculated, for example, as:

index1=floor(y/m)*ceil(w/m)+floor(x/m)

wherein floor is a rounding-down symbol; the number at index1 in the one-dimensional compression boundary data count_list is then increased by 1, that is: count_list[index1]=count_list[index1]+1, and after traversing each coordinate point of the second boundary data, the compressed one-dimensional compression boundary data count_list can be obtained.
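A minimal sketch of this compression step, under the reconstructed index formula given above, might look as follows; the function name and argument layout are illustrative assumptions:

```python
# A minimal sketch of the compression step, assuming the reconstructed formula
# index1 = floor(y/m) * ceil(w/m) + floor(x/m); points is an iterable of (x, y)
# coordinate points of the second boundary data.
import math
from typing import Iterable, List, Tuple

def compress_boundary(points: Iterable[Tuple[int, int]], w: int, h: int, m: int) -> List[int]:
    cols = math.ceil(w / m)
    len1 = cols * math.ceil(h / m)       # initialization length Len1
    count_list = [0] * len1
    for x, y in points:
        index1 = (y // m) * cols + (x // m)
        count_list[index1] += 1          # count_list[index1] = count_list[index1] + 1
    return count_list
```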
Further, the method for performing reduction processing on the one-dimensional compressed boundary data to obtain one-dimensional reduced boundary data corresponding to the one-dimensional compressed boundary data may include the following steps: creating one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data, wherein the length of the one-dimensional reduction boundary data is the product of the width and the height of the original image data; performing reduction calculation on the one-dimensional compression boundary data based on the compression step length and the width of the target image data to obtain a second index value corresponding to the one-dimensional compression boundary data; and calculating to obtain reduction boundary data corresponding to the second index value in the one-dimensional reduction boundary data based on the one-dimensional compression boundary data and the second index value. By implementing the implementation mode, the one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data can be obtained, the one-dimensional compression boundary data can be reduced and calculated based on the compression step length and the width of the target image data to obtain the second index value, the one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data can be determined based on the second index value, and the accuracy of the one-dimensional reduction boundary data is improved.
In the embodiment of the invention, because the one-dimensional compressed boundary data count_list is compressed data, the one-dimensional compressed boundary data count_list is restored to obtain a real-scale moving track; one-dimensional reduction boundary data count_list_raw corresponding to the one-dimensional compressed boundary data count_list may be created, where the initial length Len2 of the one-dimensional reduction boundary data count_list_raw is: Len2=h*w, and the data of the one-dimensional reduction boundary data count_list_raw is defaulted to 0; the one-dimensional compressed boundary data count_list is traversed, and if the data at index i in the one-dimensional compressed boundary data count_list is greater than 0, the second index value index2 corresponding to the one-dimensional compressed boundary data count_list is calculated, for example, as:

index2=floor(i/ceil(w/m))*m*w+(i mod ceil(w/m))*m

wherein floor is a rounding-down symbol, ceil is a rounding-up symbol, i is the index of the current traversal of the array count_list, and mod is a remainder symbol; the element at index2 in the one-dimensional reduction boundary data count_list_raw is assigned the value of the i-th element in the one-dimensional compressed boundary data count_list, that is:

count_list_raw[index2]=count_list[i]

and after traversing the one-dimensional compressed boundary data count_list, the one-dimensional reduction boundary data count_list_raw of the actual target image data size can be obtained.
Further, the method for calculating the one-dimensional reduction boundary data to obtain the compressed second boundary data may include the following steps: traversing the one-dimensional reduction boundary data based on time sequence to obtain index data, wherein the one-dimensional reduction boundary data corresponding to the index data is larger than a preset value; calculating to obtain compressed transverse moving data and longitudinal moving data based on the index data and the one-dimensional reduction boundary data; and generating compressed second boundary data through the transverse moving data and the longitudinal moving data. By implementing the implementation mode, the one-dimensional reduction boundary data can be screened to obtain the index data, and the compressed transverse moving data and longitudinal moving data can be calculated based on the index data and the one-dimensional reduction boundary data, so that the compressed transverse moving data and longitudinal moving data serve as the second boundary data, and the reliability of the second boundary data is ensured.
In the embodiment of the present invention, the one-dimensional reduction boundary data count_list_raw is traversed based on time sequence, the index data index_arr composed of the index values of the data greater than 0 in the one-dimensional reduction boundary data count_list_raw is then traversed to obtain the index value index, and the compressed transverse movement data x_arr_new and the longitudinal movement data (denoted y_arr_new here) are calculated based on the index value index and the one-dimensional reduction boundary data count_list_raw, for example, as x_arr_new=index mod w and y_arr_new=floor(index/w).
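A minimal sketch of the restoration step and of extracting the transverse and longitudinal movement data, under the same reconstructed index assumptions as the compression sketch, might look as follows:

```python
# A minimal sketch of restoring count_list to the real image scale and extracting
# the transverse / longitudinal movement data; names follow the notation above.
import math
from typing import List, Tuple

def restore_boundary(count_list: List[int], w: int, h: int, m: int) -> Tuple[List[int], List[int]]:
    cols = math.ceil(w / m)
    count_list_raw = [0] * (h * w)                       # Len2 = h * w
    for i, value in enumerate(count_list):
        if value > 0:
            index2 = (i // cols) * m * w + (i % cols) * m
            count_list_raw[index2] = value
    index_arr = [index for index, v in enumerate(count_list_raw) if v > 0]
    x_arr_new = [index % w for index in index_arr]       # transverse movement data
    y_arr_new = [index // w for index in index_arr]      # longitudinal movement data
    return x_arr_new, y_arr_new
```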
As an optional implementation manner, the manner of calculating, based on second boundary data in the compressed second detected content data, approximate boundary data corresponding to the second boundary data may include the following steps: acquiring vertex movement data of a target corresponding to each second current moment from second boundary data of the compressed second detection content data; calculating a distance of each of the second boundary data from the vertex movement data; and calculating to obtain approximate boundary data corresponding to the second boundary data based on the second boundary data with the minimum distance. By implementing the implementation method, the vertex movement data of the target corresponding to each second current time can be acquired from the second boundary data, and the approximate boundary data is obtained by calculation based on the second boundary data with the minimum distance from the second boundary data to the vertex movement data, so that the reliability of the approximate boundary data is improved.
In the embodiment of the present invention, traversing second boundary data of second detected content data of the polygon bounding box, and obtaining a horizontal array x _ arr and a vertical array y _ arr in the second boundary data of the polygon bounding box, the contents are as follows:
x_arr=[x1,x2,...,xn]

y_arr=[y1,y2,...,yn]
four vertex coordinates of the target corresponding to each second current time may be acquired from the second boundary data of the compressed second detection content data:
x_left=argmin(x_arr)
x_right=argmax(x_arr)
y_top=argmax(y_arr)
y_bottom=argmin(y_arr)
wherein argmin is a function for obtaining the minimum value of the array, argmax is a function for obtaining the maximum value of the array, and then four vertex movement data can be determined based on four vertex coordinates:
left_top=[x_left,y_top]
left_bottom=[x_left,y_bottom]
right_top=[x_right,y_top]
right_bottom=[x_right,y_bottom]
in addition, the distance between each node in the horizontal array x _ arr and the vertical array y _ arr in the second boundary data of the polygon bounding box to the four vertex movement data can be calculated, and the formula is as follows:
distance(p_i)=sqrt((x_arr[i]-p[0])^2+(y_arr[i]-p[1])^2)

{i:0<i<n},p∈[left_top,right_top,left_bottom,right_bottom]

and the resulting distances distance(p_i) can be stored respectively into: an upper left array left_top_arr, an upper right array right_top_arr, a lower left array left_bottom_arr and a lower right array right_bottom_arr.
Optionally, the method for calculating, based on the second boundary data with the minimum distance, approximate boundary data corresponding to the second boundary data, where one of the objects corresponds to four vertex movement data, may include the following steps: based on the second boundary data with the minimum distance, four boundary data corresponding to the target in the second boundary data are obtained through calculation; and when the four pieces of boundary data are all different data or three pieces of boundary data are different in the four pieces of boundary data, determining the four pieces of boundary data as approximate boundary data. By implementing the implementation mode, the four boundary data corresponding to the target in the second boundary data can be obtained through calculation based on the second boundary data with the minimum distance, and only when the four boundary data meet the preset rule, the four boundary data can be determined as the approximate boundary data, so that the accuracy of the approximate boundary data is ensured.
In the embodiment of the present invention, four boundary data with the minimum distance from the four vertex movement data are sequentially determined from the upper left array left _ top _ arr, the upper right array right _ top _ arr, the lower left array left _ bottom _ arr, and the lower right array right _ bottom _ arr, specifically: obtaining an index left _ top _ index of data with the minimum distance in an upper left array left _ top _ arr, and substituting the index left _ top _ index into x _ arr and y _ arr to obtain a coordinate left _ top _ coord of boundary data closest to the upper left vertex movement data, wherein the relationship is as follows:
left_top_coord=(x_arr[left_top_index],y_arr[left_top_index])
obtaining the index right _ top _ index of the data with the minimum distance in the upper right array right _ top _ arr, and substituting the index right _ top _ index into x _ arr and y _ arr to obtain the coordinate right _ top _ coord of the boundary data closest to the upper right vertex movement data, wherein the relationship is as follows:
right_top_coord=(x_arr[right_top_index],y_arr[right_top_index])
obtaining the index left_bottom_index of the data with the minimum distance in the lower left array left_bottom_arr, and substituting the index left_bottom_index into x_arr and y_arr to obtain the coordinate left_bottom_coord of the boundary data closest to the lower left vertex movement data, wherein the relationship is as follows:
left_bottom_coord=(x_arr[left_bottom_index],y_arr[left_bottom_index])
the index right _ bottom _ index of the data with the smallest distance in the right lower array right _ bottom _ arr is obtained, and the index right _ bottom _ index is substituted into x _ arr and y _ arr to obtain the coordinate right _ bottom _ coord of the boundary data closest to the right lower vertex movement data.
right_bottom_coord=(x_arr[right_bottom_index],y_arr[right_bottom_index])
Wherein, four boundary data closest to the vertex of the polygon bounding box are compared: left _ top _ coord, right _ top _ coord, left _ bottom _ coord and right _ bottom _ coord, if the four boundary data are in different coordinates, the polygonal bounding box is a quadrilateral bounding box, and the four boundary data are determined as approximate boundary data; if two pieces of boundary data of the four pieces of boundary data are in the same coordinate, the polygonal surrounding frame is a triangular surrounding frame, and the three pieces of boundary data with different coordinates are determined as approximate boundary data; if the four pieces of boundary data contain only two different coordinates, the polygon bounding box is not processed.
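A minimal sketch of deriving the approximate boundary data from the compressed contour, covering the distance computation and the nearest-vertex selection described above, might look as follows; the function name is an illustrative assumption:

```python
# A minimal sketch, assuming x_arr / y_arr hold the horizontal and vertical
# coordinates of the compressed polygon contour.
import math
from typing import List, Optional, Tuple

def approximate_boundary(x_arr: List[int], y_arr: List[int]) -> Optional[List[Tuple[int, int]]]:
    x_left, x_right = min(x_arr), max(x_arr)
    y_top, y_bottom = max(y_arr), min(y_arr)
    vertices = [(x_left, y_top), (x_right, y_top),        # left_top, right_top
                (x_left, y_bottom), (x_right, y_bottom)]  # left_bottom, right_bottom
    boundary = []
    for vx, vy in vertices:
        # contour point with the minimum distance to this vertex movement data
        idx = min(range(len(x_arr)),
                  key=lambda i: math.hypot(x_arr[i] - vx, y_arr[i] - vy))
        boundary.append((x_arr[idx], y_arr[idx]))
    distinct: List[Tuple[int, int]] = []
    for p in boundary:
        if p not in distinct:
            distinct.append(p)
    if len(distinct) >= 3:       # 4 distinct points: quadrilateral; 3: triangle
        return distinct
    return None                  # only two distinct coordinates: not processed
```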
Step S504, based on the type mapping dictionary, obtaining the bounding box data marked at the target position in the target image data.
By implementing the step S504, each target identified in the target image data may be identified by the bounding box, and the type mapping dictionary may be constructed according to each target and its bounding box data, so that the bounding box data corresponding to the position of the target to be used may be acquired from the type mapping dictionary, and the accuracy of the acquired bounding box data is ensured.
Step S505, calculating to obtain first central point data at the target position based on the bounding box data;
in the embodiment of the invention, when the bounding box is a triangle, the coordinates corresponding to the first central point data of the triangle bounding box are the coordinates of the intersection point of straight lines generated from each vertex of the triangle to the middle point of the opposite side. When the bounding box is a quadrangle, the midpoints of four sides, namely the left middle point lm of the quadrangle, the right middle point rm of the quadrangle, the upper middle point tm of the quadrangle and the lower middle point bm of the quadrangle, can be calculated first, lm and rm can be connected to obtain a straight line L1, tm and bm can be connected to obtain a straight line L2, and the coordinates of the intersection point of L1 and L2 are determined as the coordinates corresponding to the first central point data of the quadrangle bounding box.
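For the quadrilateral case, a minimal sketch of computing the first central point as the intersection of the two mid-point lines L1 and L2 might look as follows; the function and parameter names are illustrative assumptions:

```python
# A minimal sketch of the quadrilateral case, assuming the four bounding box corners
# are given and the quadrilateral is non-degenerate (the two mid-point lines intersect).
from typing import Tuple

Point = Tuple[float, float]

def quad_center(left_top: Point, right_top: Point,
                right_bottom: Point, left_bottom: Point) -> Point:
    def mid(p: Point, q: Point) -> Point:
        return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

    lm, rm = mid(left_top, left_bottom), mid(right_top, right_bottom)  # left / right middle points
    tm, bm = mid(left_top, right_top), mid(left_bottom, right_bottom)  # upper / lower middle points

    # intersection of L1 (through lm, rm) and L2 (through tm, bm)
    x1, y1, x2, y2 = lm[0], lm[1], rm[0], rm[1]
    x3, y3, x4, y4 = tm[0], tm[1], bm[0], bm[1]
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / den
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / den
    return px, py
```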
Step S506, performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position;
step S507, selecting a reference object from the target identified by the target image data;
step S508, updating the second central point data corresponding to the target of the operating device identified by the target image data based on the second central point data corresponding to the reference object, and calculating the movement trajectory of the operating device based on the updated second central point data corresponding to the target of the operating device.
Referring to fig. 8 and fig. 9 together, fig. 8 is a schematic diagram illustrating second boundary data of polygon bounding boxes corresponding to a plurality of targets in the original image data output according to an embodiment of the present invention; wherein "●" represents a schematic position of the second boundary data of the polygon bounding box identifying the yellow cauliflower tumor target (i.e. a schematic position of the four vertices of the polygon bounding box); "▲" represents a schematic position of the second boundary data of the polygon bounding box identifying the electrotome target (i.e. a schematic position of the four vertices of the polygon bounding box); and "■" represents a schematic position of the second boundary data of the polygon bounding box identifying the post-cut tissue target (i.e. a schematic position of the four vertices of the polygon bounding box).
FIG. 9 is a schematic diagram illustrating bounding box data of the polygon bounding boxes corresponding to a plurality of targets in the original image data according to an embodiment of the present invention; in FIG. 9, the second boundary data of the polygon bounding box corresponding to the yellow cauliflower tumor contained in FIG. 8 are connected to form the polygon bounding box identifying the yellow cauliflower tumor; the second boundary data of the polygon bounding box corresponding to the electrotome contained in FIG. 8 are also connected to form the polygon bounding box identifying the electrotome; and the second boundary data of the polygon bounding box corresponding to the cut tissue contained in FIG. 8 are also connected to form the polygon bounding box identifying the cut tissue.
Referring to fig. 10 and fig. 11 together, fig. 10 is a schematic interface diagram of a clinical artificial intelligence assistance system according to an embodiment of the invention, and fig. 11 is a schematic interface diagram of an operating device inspection output by the clinical artificial intelligence assistance system according to the embodiment of the invention; the clinical artificial intelligence auxiliary system corresponding to fig. 10 may be configured to control the operation device to intelligently identify the intraluminal lesion of the patient, and fig. 11 may be an examination interface when the operation device intelligently identifies the intraluminal lesion of the patient.
Specifically, fig. 10 may include 5 regions, where region ① may be a time selection region, region ② may be a patient search region, region ③ may be a system management region, region ④ may be an abnormal tissue region, and region ⑤ may be a parameter setting region; region ① can select the patient to be viewed according to the input time and display the patient information to a system user (such as a doctor); in region ②, the patient searching sub-region can quickly query the case information of a patient according to the patient name input by the system user, and the patient information output sub-region can output the case information of the patient that is found; region ③ at least comprises a newly-built patient sub-region, a system connection sub-region, a system information sub-region and a system operation sub-region, wherein the newly-built patient sub-region can establish patient case information according to the patient information input by the system user when the case information of the current patient is not found or the system is not connected with a workstation system; the system connection sub-region can connect the current system with the workstation system; the system information sub-region can output information such as the version number, company introduction and copyright statement of the clinical artificial intelligence auxiliary system; the system operation sub-region can respond to instructions input by the system user, so as to minimize, switch or close the system interface; region ④ at least comprises a name sub-region and an abnormal tissue screenshot temporary storage sub-region, wherein the name sub-region can output the name of the abnormal tissue in the abnormal tissue screenshot temporary storage sub-region, and the abnormal tissue screenshot temporary storage sub-region can output screenshots of suspected lesion areas in the body of the patient; region ⑤ at least comprises a transparency sub-region, an automatic creation sub-region, an identification probability sub-region, a screenshot probability sub-region, an automatic screenshot sub-region, a video playing sub-region, a live broadcast sub-region and a storage sub-region, wherein the transparency sub-region can be used for setting the transparency of the mask covering a lesion region in the patient's body; the automatic creation sub-region can be used for automatically creating patient information with an unspecified name when the workstation is not connected or a new patient has not been recorded before a direct examination; the identification probability sub-region can be used for setting the identification probability of the clinical artificial intelligence auxiliary system for the lesion region of the patient; the screenshot probability sub-region can be used for setting a screenshot probability, and when the lesion probability of a lesion region identified by the system reaches the screenshot probability, a screenshot of the lesion region is taken; the automatic screenshot sub-region can be used for setting whether the system automatically takes screenshots of the lesion region; the video playing sub-region can be used for selecting a video of a patient examination process to be reviewed, and outputting and playing the video; the live broadcast sub-region can be used for directly opening a live broadcast page when the system is connected with the workstation and after the patient information has been transmitted; and the storage sub-region can be used for selecting image information to be saved or deleted for saving or deleting.
The clinical artificial intelligence assistance system shown in fig. 10 can control the operating device and can perform any of the steps of the movement track recording method of the operating device in fig. 1 and fig. 2.
In addition, fig. 11 may include 4 regions, where region A may be a result identification region, region B may be an endoscope view region, region C may be a prompt region, and region D may be a picture temporary storage region, where: region A at least comprises a lesion probability sub-region, an electrotome probability sub-region, an electric burn bleeding probability sub-region and a visual field prompting sub-region, wherein the lesion probability sub-region can be used for judging a lesion area according to a preset identification probability and outputting the lesion probability and a conclusion; the electrotome probability sub-region can be used for identifying the surgical instrument and outputting the identification probability and conclusion of the surgical instrument; the electric burn bleeding probability sub-region can be used for judging a bleeding region or a burn region according to a preset identification probability and outputting the bleeding probability/burn probability and a conclusion; the visual field prompting sub-region can be used for prompting the visual field definition of the current endoscope; region B can output an intracavity image of the patient acquired by the endoscope, and when a lesion area, a bleeding area or a burn area is identified, a mask is output on the lesion area, the bleeding area or the burn area according to the preset transparency; region C can detect the intracavity image of the patient collected by the endoscope and output the conclusion obtained by the detection; and region D can store the screenshots of the patient's intracavity image obtained by the system screenshot function and output them in region D.
The method and the device can calculate and obtain more accurate movement track of the operating equipment based on the updated central point data of the target corresponding to the operating equipment. In addition, the method can further enable the obtained second central point data to be more accurate and coherent. In addition, the method can also enable the error of the obtained first fitting center point data to be smaller. In addition, the invention can also make the first fitting center point data more accurate. In addition, the invention can also make the second central point data generated according to the transverse data of the interpolation central point and the longitudinal data of the interpolation central point more coherent. In addition, the invention can also make the obtained moving track of the operating equipment more clear and easier to understand. In addition, the invention can also increase the diversity of the calculated movement tracks. In addition, the invention can also improve the accuracy of the moving track of the operating equipment. In addition, the invention can also improve the accuracy of the movement track of the acquisition equipment. In addition, the invention can also improve the accuracy of the moving track of the operation target. In addition, the method and the device can also ensure the accuracy of the acquired bounding box data. In addition, the invention can make the information of the obtained type mapping dictionary more comprehensive. In addition, the invention can also improve the diversity of the type mapping dictionary. In addition, the invention can also ensure the comprehensiveness of the data contained in the first type mapping dictionary. In addition, the data structure in the first type mapping dictionary can be clearer. In addition, the invention can also ensure the standard of the data contained in the second type mapping dictionary. In addition, the invention can also make the data structure in the second type mapping dictionary clearer. In addition, the present invention can make the second boundary data in the second detected content data more accurate. In addition, the invention can also make the obtained one-dimensional compression boundary data more accurate. In addition, the method can also improve the accuracy of the one-dimensional reduction boundary data. In addition, the invention can also ensure the reliability of the second boundary data. In addition, the invention can also improve the reliability of the approximate boundary data. In addition, the invention can also ensure the accuracy of the approximate boundary data.
Exemplary devices
Having described the method of an exemplary embodiment of the present invention, next, a movement trace recording apparatus of an operation device of an exemplary embodiment of the present invention will be described with reference to fig. 12, the apparatus including:
an obtaining unit 1201, configured to obtain bounding box data identified at the target position in target image data;
a calculating unit 1202, configured to calculate to obtain first center point data at the target position based on the bounding box data obtained by the obtaining unit 1201;
an operation unit 1203, configured to perform data fitting and interpolation operation on the first center point data obtained by the calculation unit 1202 to obtain second center point data corresponding to the first center point data at the target position;
a selecting unit 1204, configured to select a reference object from the target identified by the target image data;
an updating unit 1205, configured to update, based on the second central point data in the operation unit 1203 corresponding to the reference object selected by the selecting unit 1204, the second central point data corresponding to the target of the operation device identified by the target image data, and calculate, based on the updated second central point data corresponding to the target of the operation device, a moving trajectory of the operation device.
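As a rough, non-authoritative illustration of how units 1201 to 1205 described above could cooperate, the following Python sketch wires them into a single pipeline. Every name in it (TrajectoryRecorder, the (x, y, w, h) box layout, the subtraction used as the updating step) is an assumption made for illustration only and is not taken from the embodiment itself; the fitting and interpolation of unit 1203 is kept as a placeholder here and sketched separately further below.

    from typing import Dict, Tuple

    BBox = Tuple[float, float, float, float]   # (x, y, width, height): an assumed layout
    Point = Tuple[float, float]

    class TrajectoryRecorder:
        """Illustrative pipeline mirroring units 1201-1205 of fig. 12 (not the patented code)."""

        def centers_from_bboxes(self, bboxes: Dict[float, BBox]) -> Dict[float, Point]:
            # units 1201/1202: bounding boxes per frame time -> first center point data
            return {t: (x + w / 2.0, y + h / 2.0) for t, (x, y, w, h) in bboxes.items()}

        def fit_and_interpolate(self, centers: Dict[float, Point]) -> Dict[float, Point]:
            # unit 1203: placeholder for data fitting and interpolation
            return dict(centers)

        def trajectory(self, device_boxes: Dict[float, BBox],
                       reference_boxes: Dict[float, BBox]) -> Dict[float, Point]:
            # units 1204/1205: update the device centers against the reference object
            # (subtracting the reference motion is an assumed interpretation)
            dev = self.fit_and_interpolate(self.centers_from_bboxes(device_boxes))
            ref = self.fit_and_interpolate(self.centers_from_bboxes(reference_boxes))
            return {t: (dev[t][0] - ref[t][0], dev[t][1] - ref[t][1])
                    for t in sorted(dev) if t in ref}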
Exemplary Medium
Having described the method and apparatus of the exemplary embodiments of the present invention, a computer-readable storage medium of the exemplary embodiments of the present invention is described next with reference to fig. 13. Fig. 13 illustrates a computer-readable storage medium, shown as an optical disc 130, having a computer program (i.e., a program product) stored thereon. When executed by a processor, the computer program implements the steps described in the above method embodiments, for example: acquiring bounding box data identified at the target position in target image data; calculating first center point data at the target position based on the bounding box data; performing data fitting and interpolation on the first center point data to obtain second center point data corresponding to the first center point data at the target position; selecting a reference object from the targets identified in the target image data; and updating the second center point data corresponding to the target of the operating device identified in the target image data based on the second center point data corresponding to the reference object, and calculating the movement track of the operating device based on the updated second center point data corresponding to the target of the operating device. The specific implementation of each step is not repeated here.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
Exemplary computing device
Having described the method, medium, and apparatus of exemplary embodiments of the present invention, a computing device for movement trace recording of an operating device of exemplary embodiments of the present invention is next described with reference to fig. 14.
FIG. 14 illustrates a block diagram of an exemplary computing device 140, which computing device 140 may be a computer system or server, suitable for use in implementing embodiments of the present invention. The computing device 140 shown in FIG. 14 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 14, components of computing device 140 may include, but are not limited to: one or more processors or processing units 1401, a system memory 1402, and a bus 1403 connecting the various system components (including the system memory 1402 and the processing unit 1401).
Computing device 140 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computing device 140 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 1402 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)14021 and/or cache memory 14022. The computing device 140 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, ROM14023 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 14, and commonly referred to as a "hard drive"). Although not shown in FIG. 14, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from and writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 1403 via one or more data media interfaces. Included in system memory 1402 may be at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 14025 having a set (at least one) of program modules 14024 may be stored, for example, in system memory 1402, and such program modules 14024 include but are not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. Program modules 14024 generally carry out the functions and/or methodologies of embodiments of the present invention as described herein.
Computing device 140 may also communicate with one or more external devices 1404 (e.g., keyboard, pointing device, display, etc.). Such communication may occur via input/output (I/O) interfaces 605. Also, computing device 140 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through network adapter 1406. As shown in FIG. 14, the network adapter 1406 communicates with other modules of the computing device 140, such as the processing unit 1401, over a bus 1403. It should be appreciated that although not shown in FIG. 14, other hardware and/or software modules may be used in conjunction with computing device 140.
The processing unit 1401 executes various functional applications and data processing by running programs stored in the system memory 1402, for example: acquiring bounding box data identified at the target position in target image data; calculating first center point data at the target position based on the bounding box data; performing data fitting and interpolation on the first center point data to obtain second center point data corresponding to the first center point data at the target position; selecting a reference object from the targets identified in the target image data; and updating the second center point data corresponding to the target of the operating device identified in the target image data based on the second center point data corresponding to the reference object, and calculating the movement track of the operating device based on the updated second center point data corresponding to the target of the operating device. The specific implementation of each step is not repeated here.

It should be noted that although the above detailed description mentions several units/modules or sub-units/sub-modules of the movement track recording apparatus of the operating device, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more of the units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
In the description of the present invention, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided by the present invention, it should be understood that the disclosed system, apparatus and method can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention, or the part thereof that essentially contributes to the prior art, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art can still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered by them. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Through the above description, the embodiments of the present invention provide the following technical solutions, but are not limited thereto:
1. a movement track recording method of an operating device comprises the following steps: acquiring bounding box data marked at the target position in target image data; calculating to obtain first central point data at the target position based on the bounding box data; performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position; selecting a reference object from the target identified by the target image data; and updating second central point data corresponding to the target of the operating equipment identified by the target image data based on the second central point data corresponding to the reference object, and calculating to obtain the moving track of the operating equipment based on the updated second central point data corresponding to the target of the operating equipment.
2. The method for recording a movement trajectory of an operating device according to claim 1, which performs data fitting and interpolation on the first center point data to obtain second center point data corresponding to the first center point data at the target position, includes: performing data fitting on the first central point data to obtain first fitting central point data corresponding to the first central point data; and performing interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position.
3. The method for recording a movement trajectory of an operating device according to claim 2, which performs data fitting on the first center point data to obtain first fitting center point data corresponding to the first center point data, includes: denoising the first central point data based on a denoising algorithm to obtain denoised first central point data; and performing data fitting on the denoised first central point data to obtain first fitting central point data.
4. The method for recording a moving trajectory of an operating device according to claim 3, wherein the target image data includes a current time corresponding to the first central point data, the first central point data includes central point horizontal data and central point longitudinal data, and the data fitting is performed on the denoised first central point data to obtain first fitting central point data, including: performing data fitting on the central point transverse data in the denoised first central point data based on the current moment to obtain a transverse fitting function corresponding to the central point transverse data; calculating to obtain transverse fitting data based on the current time and the transverse fitting function; performing data fitting on the central point longitudinal data in the denoised first central point data based on the current moment to obtain a longitudinal fitting function corresponding to the central point longitudinal data; calculating to obtain longitudinal fitting data based on the current time and the longitudinal fitting function; and determining the transverse fitting data and the longitudinal fitting data as first fitting central point data corresponding to the first central point data.
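Solutions 3 and 4 above leave the denoising algorithm and the fitting model open. As one plausible instantiation (not the patented implementation), the sketch below median-filters the center point data and then fits the lateral (x) and longitudinal (y) data against the frame times with a low-order polynomial; the kernel size of 5 and the polynomial degree of 3 are arbitrary choices.

    import numpy as np
    from scipy.signal import medfilt

    def fit_center_points(times, xs, ys, degree=3, kernel=5):
        """Denoise, then fit the lateral (x) and longitudinal (y) center data against time."""
        t = np.asarray(times, dtype=float)          # current times, assumed sorted
        x = medfilt(np.asarray(xs, dtype=float), kernel_size=kernel)   # denoised lateral data
        y = medfilt(np.asarray(ys, dtype=float), kernel_size=kernel)   # denoised longitudinal data

        fx = np.poly1d(np.polyfit(t, x, degree))    # lateral fitting function
        fy = np.poly1d(np.polyfit(t, y, degree))    # longitudinal fitting function

        # evaluating the fitting functions at the current times yields
        # the first fitted center point data
        return fx(t), fy(t)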
5. The movement track recording method of an operating device according to any one of claims 2 to 4, which performs interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position, includes: calculating to obtain a transverse smooth spline curve coefficient based on the current moment, the central point transverse data in the first fitting central point data and preset parameters; calculating to obtain a longitudinal smooth spline curve coefficient based on the current moment, the longitudinal data of the center point in the first fitting center point data and the preset parameter; selecting a maximum time and a minimum time from current times contained in the target image data; constructing time data based on the maximum time and the minimum time; performing interpolation operation on the transverse data of the central point based on the time data and the transverse smooth spline curve coefficient to obtain transverse data of the interpolated central point; performing interpolation operation on the longitudinal data of the central point based on the time data and the longitudinal smooth spline curve coefficient to obtain the longitudinal data of the interpolated central point; and obtaining second central point data corresponding to the first central point data at the target position based on the transverse data of the interpolation central point and the longitudinal data of the interpolation central point.
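For the interpolation of solution 5, SciPy's smoothing splines offer one natural reading: the spline coefficients play the role of the lateral and longitudinal smooth spline curve coefficients, and the smoothing factor plays the role of the preset parameter. The 100-point time grid spanning the minimum and maximum times is an assumption made here for illustration.

    import numpy as np
    from scipy.interpolate import splrep, splev

    def interpolate_centers(times, fit_x, fit_y, smoothing=1.0, num=100):
        """Smoothing-spline interpolation of the fitted center point data."""
        t = np.asarray(times, dtype=float)          # assumed strictly increasing, >= 4 samples

        tck_x = splrep(t, fit_x, s=smoothing)       # lateral smooth spline coefficients
        tck_y = splrep(t, fit_y, s=smoothing)       # longitudinal smooth spline coefficients

        # time data built from the minimum and maximum of the current times
        t_dense = np.linspace(t.min(), t.max(), num)

        interp_x = splev(t_dense, tck_x)            # interpolated lateral center data
        interp_y = splev(t_dense, tck_y)            # interpolated longitudinal center data
        return t_dense, interp_x, interp_y          # together: the second center point data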
6. The method for recording a movement trajectory of an operating device according to claim 1, wherein the second center point data corresponding to the target of the operating device identified by the target image data is updated based on the second center point data corresponding to the reference object, and the movement trajectory of the operating device is calculated based on the updated second center point data corresponding to the target of the operating device, the method comprising: updating second center point data corresponding to targets except the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated moving center point data corresponding to the targets; and calculating to obtain the movement track of the operating equipment based on the movement center point data corresponding to the target of the operating equipment in the updated target image data.
7. The method for recording a moving track of an operating device according to claim 6, wherein the target other than the reference object is at least one of the operating device, an acquiring device for acquiring the original image data, and an operating target corresponding to a target position in the target image data.
8. The method for recording a movement trajectory of an operating device according to claim 7, wherein when the target is an operating device, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated movement center point data corresponding to the target, includes: acquiring second central point data corresponding to the reference object and second central point data corresponding to the operating equipment from the target image data; and updating second center point data corresponding to the operating equipment based on the second center point data corresponding to the reference object and the second center point data corresponding to the operating equipment to obtain updated moving center point data corresponding to the operating equipment.
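Solution 8 does not spell out how the reference object's center points enter the update. A common way to compensate apparent scene motion, used here only as an assumption, is to subtract the reference object's displacement from the operating device's center points:

    import numpy as np

    def update_device_centers(ref_centers, dev_centers):
        """Re-express the operating device's center points relative to the reference object.

        ref_centers, dev_centers: arrays of shape (N, 2) sampled at the same N times.
        """
        ref = np.asarray(ref_centers, dtype=float)
        dev = np.asarray(dev_centers, dtype=float)

        # displacement of the reference object with respect to its first position
        ref_shift = ref - ref[0]

        # subtracting that displacement keeps only the device's own motion
        return dev - ref_shift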
9. The method for recording a movement trajectory of an operating device according to claim 7, wherein when the target is a collection device, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated movement center point data corresponding to the target, includes: acquiring second central point data corresponding to the reference object and second central point data corresponding to the acquisition equipment from the target image data; acquiring the width and the height of the target image data; and updating the second central point data corresponding to the acquisition equipment based on the second central point data corresponding to the reference object, the second central point data corresponding to the acquisition equipment and the width and height of the target image data to obtain updated mobile central point data corresponding to the acquisition equipment.
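How solution 9 uses the image width and height is likewise not specified; the sketch below assumes the same displacement compensation as above, followed by clamping the updated center points to the extent of the target image. This is purely illustrative and not the patented formula.

    import numpy as np

    def update_acquisition_centers(ref_centers, acq_centers, width, height):
        """Hypothetical update of the acquisition device's center points (solution 9)."""
        ref = np.asarray(ref_centers, dtype=float)
        acq = np.asarray(acq_centers, dtype=float)

        ref_shift = ref - ref[0]                    # apparent scene motion
        updated = acq - ref_shift                   # motion-compensated centers

        # keep the updated centers inside the target image extent
        return np.clip(updated, [0.0, 0.0], [float(width), float(height)])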
10. The method for recording a movement trajectory of an operating device according to claim 7, wherein when the target is an operating target, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated movement center point data corresponding to the target, includes: acquiring second central point data corresponding to the reference object and second central point data corresponding to the operation target from the target image data; and updating second center point data corresponding to the operation target based on the second center point data corresponding to the reference object and the second center point data corresponding to the operation target to obtain updated moving center point data corresponding to the operation target.
11. The method for recording a movement trajectory of an operating device according to claim 1, which acquires bounding box data identified at a target position in target image data, includes: and acquiring bounding box data which is identified at the target position in the target image data based on the type mapping dictionary.
12. The movement trace recording method of an operating device according to claim 11, before obtaining bounding box data identified at the target position in target image data based on a type mapping dictionary, the method further comprising: performing target detection on original image data of the operating equipment through an example segmentation model to obtain target image data of a rectangular surrounding frame displayed at the target position and/or a polygonal surrounding frame displayed at the target position in the original image data; traversing a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the target image data based on time sequence to construct a first type mapping sub-dictionary and/or a second type mapping sub-dictionary; generating a type mapping dictionary based on the first type mapping sub-dictionary and/or the second type mapping sub-dictionary.
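Solution 12 leaves the instance segmentation model unspecified. Assuming such a model yields one binary mask per detected target, the sketch below shows how a rectangular bounding box and a coarse polygon outline could be derived from that mask with NumPy; the 10-pixel sub-sampling stride is arbitrary and the polygon points are not ordered along the contour, which a real implementation would have to do.

    import numpy as np

    def mask_to_boxes(mask, stride=10):
        """Derive a rectangular bounding box and a coarse polygon from one instance mask.

        mask: 2-D array of 0/1 (or booleans) for a single detected target.
        """
        mask = np.asarray(mask, dtype=bool)
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None, None

        # rectangular bounding box as (x, y, width, height)
        x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
        rect = (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))

        # coarse polygon: foreground pixels not completely surrounded by foreground,
        # sub-sampled every `stride` pixels (points are not contour-ordered)
        border = mask.copy()
        interior = (mask[:-2, 1:-1] & mask[2:, 1:-1] &
                    mask[1:-1, :-2] & mask[1:-1, 2:])
        border[1:-1, 1:-1] &= ~interior
        by, bx = np.nonzero(border)
        polygon = list(zip(bx[::stride].tolist(), by[::stride].tolist()))
        return rect, polygon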
13. The method for recording a movement trajectory of an operating device according to claim 12, wherein the step of constructing a first-type mapping sub-dictionary and/or a second-type mapping sub-dictionary based on a time-series traversal of a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the target image data includes: traversing a rectangular bounding box displayed at the target position in the target image data based on time sequence, and constructing a first type mapping sub-dictionary containing bounding box data of the rectangular bounding box; and/or traversing a polygon bounding box displayed at the target position in the target image data based on time sequence, and constructing a second type mapping sub-dictionary containing bounding box data of the polygon bounding box.
14. The method for recording a movement trajectory of an operating device according to claim 13, wherein the step of constructing a first type mapping sub-dictionary including bounding box data of a rectangular bounding box based on a time-series traversal of the rectangular bounding box displayed at the target position in the target image data includes: acquiring first detection content data corresponding to a rectangular bounding box displayed at the target position based on time sequence from the target image data; the first detection content data at least comprises a first target type of a target identified by the rectangular bounding box, a first current time and first boundary data; the first boundary data is bounding box data of the rectangular bounding box; traversing the first detection content data based on time sequence to construct a first type mapping sub-dictionary; the first type mapping sub-dictionary at least comprises a first time mapping dictionary, the first time mapping dictionary is in one-to-one correspondence with the first target type, and the first target types corresponding to any two first time mapping dictionaries are different.
15. The method for recording a movement trajectory of an operating device according to claim 14, wherein the step of constructing a first type mapping sub-dictionary based on time-series traversal of the first detection content data includes: traversing the first detection content data based on time sequence to construct a first time mapping dictionary, wherein the first time mapping dictionary comprises at least one time key value pair of first current time and first boundary data, and the first current time and the first boundary data in the time key value pair are obtained from the same first detection content data; and constructing a first type mapping sub-dictionary based on the first detection content data and the first time mapping dictionary, wherein the first type mapping sub-dictionary comprises at least one type key-value pair of a first target type and the first time mapping dictionary, and the first target type and a first current time and first boundary data in the first time mapping dictionary belong to the same first detection content data.
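Solutions 14 and 15 describe a nested mapping from target type to a time mapping dictionary of boundary data. A minimal sketch follows, assuming each first detection content record is a (target_type, current_time, boundary_data) tuple; the field names are illustrative only.

    from collections import defaultdict

    def build_rect_type_mapping(detections):
        """Build the first type mapping sub-dictionary: {target_type: {time: boundary}}.

        detections: iterable of (target_type, current_time, boundary_data) records,
        where boundary_data is a rectangular bounding box such as (x, y, w, h).
        """
        sub_dictionary = defaultdict(dict)
        for target_type, current_time, boundary in sorted(detections, key=lambda d: d[1]):
            # one time mapping dictionary per target type; each entry is a
            # time key-value pair of (current_time, boundary_data)
            sub_dictionary[target_type][current_time] = boundary
        return dict(sub_dictionary)

For example, build_rect_type_mapping([("electrotome", 3, (12, 40, 80, 60))]) returns {"electrotome": {3: (12, 40, 80, 60)}}; the second type mapping sub-dictionary of solutions 16 and 17 has the same shape, with the approximate boundary data of the polygon bounding boxes as values.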
16. The method for recording a movement trajectory of an operating device according to claim 13, wherein the step of constructing a second type mapping sub-dictionary including bounding box data of the polygon bounding box based on a time-series traversal of the polygon bounding box displayed at the target position in the target image data includes: acquiring second detection content data corresponding to a polygon bounding box displayed at the target position based on time sequence from the target image data; the second detection content data at least comprises a second target type of the target identified by the polygon bounding box, a second current time and second boundary data; the second boundary data is bounding box data of the polygon bounding box; compressing the second boundary data in the second detection content data to obtain compressed second detection content data; calculating to obtain approximate boundary data corresponding to second boundary data based on the second boundary data in the compressed second detection content data; constructing a second type mapping sub-dictionary based on a time-series traversal of the second detected content data and the approximate boundary data; the second type mapping sub-dictionary at least comprises a second time mapping dictionary, the second time mapping dictionary is in one-to-one correspondence with the second target type, and the second target types corresponding to any two second time mapping dictionaries are different.
17. The method for recording a movement trace of an operating device according to claim 16, wherein the step of constructing a second type mapping sub-dictionary based on time-series traversal of the second detection content data and the approximate boundary data includes: traversing the second detection content data and the approximate boundary data based on time sequence to construct a second time mapping dictionary, wherein the second time mapping dictionary comprises at least one time key value pair of a second current time and the approximate boundary data, and the second current time and the approximate boundary data in the time key value pair are obtained from the same second detection content data; and constructing a second type mapping sub-dictionary based on the second detection content data and the second time mapping dictionary, wherein the second type mapping sub-dictionary comprises at least one type key value pair of a second target type and the second time mapping dictionary, and the second target type and second current time and approximate boundary data in the second time mapping dictionary belong to the same second detection content data.
18. The moving track recording method of the operating device according to claim 16 or 17, which compresses the second boundary data in the second detected content data to obtain compressed second detected content data, includes: compressing the second boundary data of the second detection content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data; reducing the one-dimensional compressed boundary data to obtain one-dimensional reduced boundary data corresponding to the one-dimensional compressed boundary data; calculating the one-dimensional reduction boundary data to obtain compressed second boundary data; and updating the second detection content through the compressed second boundary data to obtain compressed second detection content data.
19. The method for recording a movement trajectory of an operating device according to claim 18, which performs compression processing on the second boundary data of the second detected content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data, includes: acquiring the width and the height of the target image data, and determining a compression step length; calculating to obtain an initialization length based on the width and the height of the target image data and the compression step length; creating one-dimensional compression boundary data corresponding to the second boundary data, wherein the length of the one-dimensional compression boundary data is the initialization length; performing compression calculation on the second boundary data of the second detection content data based on the compression step and the width of the original image data to obtain a first index value corresponding to the second boundary data; and calculating to obtain the compression boundary data corresponding to the first index value in the one-dimensional compression boundary data based on the second boundary data and the first index value.
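The exact index arithmetic of solution 19 is not given; one plausible reading maps each boundary point to a cell of a step-sized grid and marks that cell in a one-dimensional array whose length is the initialization length. The row-major layout and the default step of 4 in the sketch below are assumptions.

    import numpy as np

    def compress_boundary(points, width, height, step=4):
        """Compress 2-D polygon boundary points into a one-dimensional array.

        points: iterable of (x, y) boundary coordinates inside the image.
        """
        cols = -(-width // step)                    # ceil(width / step)
        rows = -(-height // step)                   # ceil(height / step)
        compressed = np.zeros(cols * rows, dtype=np.uint8)   # the initialization length

        for x, y in points:
            index = (int(y) // step) * cols + (int(x) // step)   # first index value
            compressed[index] = 1                   # mark the grid cell the boundary crosses
        return compressed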
20. The method for recording a movement trajectory of an operating device according to claim 19, which performs reduction processing on the one-dimensional compressed boundary data to obtain one-dimensional reduced boundary data corresponding to the one-dimensional compressed boundary data, includes: creating one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data, wherein the length of the one-dimensional reduction boundary data is the product of the width and the height of the original image data; reducing and calculating the one-dimensional compression boundary data based on the compression step length and the width of the target image data to obtain a second index value corresponding to the one-dimensional compression boundary data; and calculating to obtain reduction boundary data corresponding to the second index value in the one-dimensional reduction boundary data based on the one-dimensional compression boundary data and the second index value.
21. The method for recording a movement trajectory of an operating device according to claim 20, wherein the calculating the one-dimensional restored boundary data to obtain compressed second boundary data includes: traversing the one-dimensional reduction boundary data based on time sequence to obtain index data, wherein the one-dimensional reduction boundary data corresponding to the index data is larger than a preset value; calculating to obtain compressed transverse moving data and longitudinal moving data based on the index data and the one-dimensional reduction boundary data; and generating compressed second boundary data through the transverse movement data and the longitudinal movement data.
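Continuing the grid layout assumed in the previous sketch, solutions 20 and 21 can be read as expanding the one-dimensional compressed data back to a width-by-height array and then recovering lateral and longitudinal movement data from the indices whose value exceeds a preset threshold:

    import numpy as np

    def restore_and_extract(compressed, width, height, step=4, threshold=0):
        """Expand the 1-D compressed boundary data and recover (x, y) movement data."""
        cols = -(-width // step)
        restored = np.zeros(width * height, dtype=np.uint8)    # length = width * height

        for index in np.nonzero(compressed)[0]:
            gx, gy = index % cols, index // cols               # grid-cell coordinates
            x, y = gx * step, gy * step                        # top-left pixel of that cell
            restored[y * width + x] = compressed[index]        # second index value

        # indices whose restored value exceeds the preset threshold
        idx = np.nonzero(restored > threshold)[0]
        xs, ys = idx % width, idx // width                     # lateral / longitudinal data
        return restored, list(zip(xs.tolist(), ys.tolist()))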
22. The method for recording a movement trace of an operating device according to claim 16 or 17, wherein the step of calculating approximate boundary data corresponding to second boundary data based on the second boundary data in the compressed second detected content data includes: acquiring vertex movement data of a target corresponding to each second current moment from second boundary data of the compressed second detection content data; calculating a distance of each of the second boundary data from the vertex movement data; and calculating to obtain approximate boundary data corresponding to the second boundary data based on the second boundary data with the minimum distance.
23. The method for recording a movement track of an operating device according to claim 22, wherein one target corresponds to four pieces of the vertex movement data, and the approximate boundary data corresponding to the second boundary data is obtained through calculation based on the second boundary data with the minimum distance, the method including: calculating, based on the second boundary data with the minimum distance, four pieces of boundary data corresponding to the target in the second boundary data; and determining the four pieces of boundary data as the approximate boundary data when all four pieces of boundary data are different from one another, or when three of the four pieces of boundary data are different.
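Solutions 22 and 23 can be read as picking, for each of the four vertex movement data of a target, the closest point among the compressed second boundary data, and accepting the four picks as approximate boundary data when at least three of them differ. The sketch below follows that reading; the Euclidean distance metric is an assumption.

    import numpy as np

    def approximate_boundary(boundary_points, vertices):
        """Pick, for each of the four vertices, the closest compressed boundary point.

        boundary_points: array-like of shape (N, 2); vertices: array-like of shape (4, 2).
        Returns the four picks when at least three of them are distinct, else None.
        """
        pts = np.asarray(boundary_points, dtype=float)
        verts = np.asarray(vertices, dtype=float)

        # distance from every boundary point to every vertex, shape (4, N)
        dists = np.linalg.norm(verts[:, None, :] - pts[None, :, :], axis=2)
        nearest = pts[np.argmin(dists, axis=1)]     # one boundary point per vertex

        distinct = len({tuple(p) for p in nearest.tolist()})
        return nearest if distinct >= 3 else None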
24. A movement trace recording apparatus that operates a device, comprising: the acquisition unit is used for acquiring bounding box data which is marked at the target position in the target image data; a calculation unit, configured to calculate, based on the bounding box data, first center point data at the target location; the operation unit is used for performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position; the selecting unit is used for selecting a reference object from the target identified by the target image data; and the updating unit is used for updating second center point data corresponding to the target of the operating equipment identified by the target image data based on the second center point data corresponding to the reference object, and calculating the moving track of the operating equipment based on the updated second center point data corresponding to the target of the operating equipment.
25. The movement trace recording device of the operation device according to claim 24, the operation unit comprising: the fitting sub-unit is used for performing data fitting on the first central point data to obtain first fitting central point data corresponding to the first central point data; and the interpolation subunit is used for carrying out interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position.
26. The movement trace recording apparatus of an operating device according to claim 25, wherein the fitting subunit includes: the denoising module is used for carrying out denoising operation on the first central point data based on a denoising algorithm to obtain denoised first central point data; and the fitting module is used for performing data fitting on the denoised first central point data to obtain first fitting central point data.
27. The movement track recording device of the operating device according to claim 26, wherein the target image data includes a current time corresponding to the first central point data, and the first central point data includes a central point horizontal data and a central point vertical data, and the fitting module includes: the fitting submodule is used for performing data fitting on the central point transverse data in the denoised first central point data based on the current moment to obtain a transverse fitting function corresponding to the central point transverse data; the first calculation submodule is used for calculating to obtain transverse fitting data based on the current time and the transverse fitting function; the fitting submodule is further configured to perform data fitting on the central point longitudinal data in the first denoised central point data based on the current time to obtain a longitudinal fitting function corresponding to the central point longitudinal data; the first calculation submodule is further used for calculating to obtain longitudinal fitting data based on the current time and the longitudinal fitting function; and the determining submodule is used for determining the transverse fitting data and the longitudinal fitting data as first fitting center point data corresponding to the first center point data.
28. The movement trace recording apparatus of an operating device according to any one of claims 25 to 27, wherein the interpolation subunit includes: the first calculation module is used for calculating to obtain a transverse smooth spline curve coefficient based on the current moment, the central point transverse data in the first fitting central point data and a preset parameter; the first calculation module is further configured to calculate a longitudinal smooth spline curve coefficient based on the current time, the central point longitudinal data in the first fitting central point data, and the preset parameter; the selecting module is used for selecting the maximum time and the minimum time from the current time contained in the target image data; a first construction module for constructing time data based on the maximum time and the minimum time; the interpolation module is used for carrying out interpolation operation on the transverse data of the central point based on the time data and the transverse smooth spline curve coefficient to obtain transverse data of an interpolated central point; the interpolation module is further used for carrying out interpolation operation on the longitudinal data of the central point based on the time data and the longitudinal smooth spline curve coefficient to obtain the longitudinal data of the interpolated central point; and the first updating module is used for obtaining second central point data corresponding to the first central point data at the target position based on the transverse data of the interpolation central point and the longitudinal data of the interpolation central point.
29. The movement trace recording apparatus of an operating device according to claim 24, wherein the updating unit includes: the updating subunit is configured to update, based on second center point data corresponding to the reference object, second center point data corresponding to a target other than the reference object in the target image data, so as to obtain updated moving center point data corresponding to the target; and the calculating subunit is used for calculating the movement track of the operating equipment based on the updated movement center point data corresponding to the target type of the operating equipment in the target image data.
30. The movement track recording device of the operating device according to claim 29, wherein the target other than the reference object is at least one of the operating device, a collecting device for collecting the original image data, and an operating target corresponding to a target position in the target image data.
31. The movement trace recording apparatus of an operating device according to claim 30, wherein when the target type is an operating device, the updating subunit includes: the first acquisition module is used for acquiring second central point data corresponding to the reference object and second central point data corresponding to the operating equipment from the target image data; and the second updating module is used for updating second center point data corresponding to the operating equipment based on the second center point data corresponding to the reference object and the second center point data corresponding to the operating equipment to obtain updated mobile center point data corresponding to the operating equipment.
32. The movement track recording apparatus of operating device according to scheme 30, wherein when the target type is the acquisition device, the updating subunit further includes: the second acquisition module is used for acquiring second central point data corresponding to the reference object and second central point data corresponding to the acquisition equipment from the target image data; the second obtaining module is further configured to obtain the width and height of the target image data; and the third updating module is used for updating the second central point data corresponding to the acquisition equipment based on the second central point data corresponding to the reference object, the second central point data corresponding to the acquisition equipment and the width and height of the target image data to obtain updated mobile central point data corresponding to the acquisition equipment.
33. The movement trace recording apparatus of an operating device according to claim 30, wherein when the object type is an operation object, the updating subunit further includes: a third obtaining module, configured to obtain, from the target image data, second center point data corresponding to the reference object and second center point data corresponding to the operation target; and the fourth updating module is used for updating the second central point data corresponding to the operation target based on the second central point data corresponding to the reference object and the second central point data corresponding to the operation target to obtain the updated moving central point data corresponding to the operation target.
34. In the movement track recording apparatus of the operating device according to claim 24, the manner of acquiring the bounding box data identified at the target position in the target image data by the acquiring unit is specifically as follows: and acquiring bounding box data marked at the target position in the target image data based on the type mapping dictionary.
35. The movement trace recording apparatus of an operating device according to claim 34, further comprising: the detection unit is used for carrying out target detection on original image data of the operating equipment through an example segmentation model before the acquisition unit acquires bounding box data which is identified at the target position in the target image data based on a type mapping dictionary, so as to obtain target image data of a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the original image data; the construction unit is used for traversing a rectangular surrounding frame displayed at the target position and/or a polygonal surrounding frame displayed at the target position in the target image data based on time sequence to construct a first type mapping sub-dictionary and/or a second type mapping sub-dictionary; a generating unit, configured to generate a type mapping dictionary based on the first type mapping sub-dictionary and/or the second type mapping sub-dictionary.
36. The movement trace recording apparatus of an operating device according to claim 35, wherein the constructing unit includes: the first construction subunit is configured to traverse a rectangular bounding box displayed at the target position in the target image data based on a time sequence, and construct a first type mapping sub-dictionary containing bounding box data of the rectangular bounding box; and/or a second construction subunit, configured to traverse a polygon bounding box displayed at the target position in the target image data based on a time sequence, and construct a second type mapping sub-dictionary including bounding box data of the polygon bounding box.
37. The movement track recording apparatus of the operating device according to claim 36, wherein the first constructing subunit includes: a fourth obtaining module, configured to obtain, from the target image data, time-series-based first detection content data corresponding to a rectangular bounding box displayed at the target position; the first detection content data at least comprises a first target type of a target identified by the rectangular bounding box, a first current time and first boundary data; the first boundary data is bounding box data of the rectangular bounding box; the second construction module is used for traversing the first detection content data based on time sequence and constructing a first type mapping sub-dictionary; the first type mapping sub-dictionary at least comprises a first time mapping dictionary, the first time mapping dictionary is in one-to-one correspondence with the first target type, and the first target types corresponding to any two first time mapping dictionaries are different.
38. The movement trace recording apparatus of an operating device according to claim 37, wherein the second building block includes: the first construction submodule is used for traversing the first detection content data based on time sequence and constructing a first time mapping dictionary, wherein the first time mapping dictionary comprises at least one time key value pair of first current time and first boundary data, and the first current time and the first boundary data in the time key value pair are obtained from the same first detection content data; the first constructing sub-module is further configured to construct a first type mapping sub-dictionary based on the first detection content data and the first time mapping dictionary, where the first type mapping sub-dictionary includes at least one type key-value pair of a first target type and a first time mapping dictionary, and the first target type and a first current time and first boundary data in the first time mapping dictionary belong to the same first detection content data.
39. The movement track recording device of the operating apparatus according to claim 36, wherein the second constructing subunit includes: a fifth acquiring module, configured to acquire, from the target image data, second detection content data corresponding to a polygonal bounding box displayed at the target position based on a time sequence; the second detection content data at least comprises a second target type of a target identified by the polygon bounding box, a second current time and second boundary data; the second boundary data is bounding box data of the polygon bounding box; the compression module is used for compressing the second boundary data in the second detection content data to obtain compressed second detection content data; the second calculation module is used for calculating to obtain approximate boundary data corresponding to second boundary data based on the second boundary data in the compressed second detection content data; a third construction module, configured to traverse the second detection content data and the approximate boundary data based on a time sequence, and construct a second type mapping sub-dictionary; the second type mapping sub-dictionary at least comprises a second time mapping dictionary, the second time mapping dictionary is in one-to-one correspondence with the second target type, and the second target types corresponding to any two second time mapping dictionaries are different.
40. The movement track recording device of the operating apparatus according to claim 39, wherein the third building block comprises: a second constructing sub-module, configured to traverse the second detection content data and the approximate boundary data based on a time sequence, and construct a second time mapping dictionary, where the second time mapping dictionary includes at least one time key value pair of a second current time and the approximate boundary data, and the second current time and the approximate boundary data in the time key value pair are obtained from the same second detection content data; the second constructing sub-module is further configured to construct a second type mapping sub-dictionary based on the second detection content data and the second time mapping dictionary, where the second type mapping sub-dictionary includes at least one type key-value pair of a second target type and the second time mapping dictionary, and the second target type and a second current time and approximate boundary data in the second time mapping dictionary belong to the same second detection content data.
41. The movement trace recording apparatus of an operating device according to claim 39 or 40, wherein the compression module includes: the compression submodule is used for compressing the second boundary data of the second detection content data to obtain one-dimensional compression boundary data corresponding to the second boundary data; the restoring submodule is used for restoring the one-dimensional compressed boundary data to obtain one-dimensional restored boundary data corresponding to the one-dimensional compressed boundary data; the second calculation submodule is used for calculating the one-dimensional reduction boundary data to obtain compressed second boundary data; and the updating submodule is used for updating the second detection content through the compressed second boundary data to obtain compressed second detection content data.
42. The movement trace recording apparatus of an operating device according to claim 41, wherein the compression submodule includes: the acquisition structure is used for acquiring the width and the height of the target image data and determining the compression step length; the first calculation structure is used for calculating to obtain an initialization length based on the width and the height of the target image data and the compression step length; a first creating structure, configured to create one-dimensional compression boundary data corresponding to the second boundary data, where a length of the one-dimensional compression boundary data is the initialization length; the compression structure is used for carrying out compression calculation on the second boundary data of the second detection content data based on the compression step and the width of the original image data to obtain a first index value corresponding to the second boundary data; the first calculation structure is further configured to calculate, based on the second boundary data and the first index value, to obtain compressed boundary data corresponding to the first index value in the one-dimensional compressed boundary data.
43. The movement track recording device of the operating device according to claim 42, wherein the restoring sub-module includes: the second creation structure is used for creating one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data, and the length of the one-dimensional reduction boundary data is the product of the width and the height of the original image data; the reduction structure is used for carrying out reduction calculation on the one-dimensional compression boundary data based on the compression step length and the width of the target image data to obtain a second index value corresponding to the one-dimensional compression boundary data; and the second calculation structure is used for calculating to obtain the restoration boundary data corresponding to the second index value in the one-dimensional restoration boundary data based on the one-dimensional compression boundary data and the second index value.
44. The movement trace recording apparatus of an operating device according to claim 43, wherein the second calculation sub-module includes: the traversal structure is used for traversing the one-dimensional reduction boundary data based on time sequence to obtain index data, and the one-dimensional reduction boundary data corresponding to the index data is larger than a preset value; the third calculation structure is used for calculating and obtaining compressed transverse movement data and longitudinal movement data based on the index data and the one-dimensional reduction boundary data; a generating structure for generating compressed second boundary data from the lateral movement data and the longitudinal movement data.
45. The movement trace recording apparatus of an operating device according to claim 39 or 40, wherein the second calculation module includes: the acquisition submodule is used for acquiring vertex movement data of a target corresponding to each second current moment from second boundary data of the compressed second detection content data; a third calculation submodule for calculating a distance of each of the second boundary data from the vertex movement data; and the fourth calculation submodule is used for calculating to obtain approximate boundary data corresponding to the second boundary data based on the second boundary data with the minimum distance.
46. In the movement trace recording apparatus of operating device according to claim 45, one target corresponds to four vertex movement data, and the fourth calculating sub-module includes: a fourth calculation structure, configured to calculate, based on the second boundary data with the minimum distance, four boundary data corresponding to the target in the second boundary data; a determination structure configured to determine the four pieces of boundary data as approximate boundary data when the four pieces of boundary data are all different data or three pieces of boundary data are different data among the four pieces of boundary data.
47. A clinical artificial intelligence assistance system that performs the movement trace recording method of the operation device of any one of claims 1 to 23.
48. A storage medium storing a program, wherein the storage medium stores a computer program which, when executed by a processor, implements a movement trace recording method of an operating device according to any one of claims 1 to 23.
49. A computing device comprising the storage medium of scheme 48.

Claims (47)

1. A movement track recording method of an operating device comprises the following steps:
acquiring bounding box data marked at the target position in target image data;
calculating to obtain first central point data at the target position based on the bounding box data;
performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position;
selecting a reference object from the target identified by the target image data;
updating second central point data corresponding to the target of the operating equipment identified by the target image data based on the second central point data corresponding to the reference object, and calculating to obtain the moving track of the operating equipment based on the updated second central point data corresponding to the target of the operating equipment;
performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position, wherein the second central point data comprises:
performing data fitting on the first central point data to obtain first fitting central point data corresponding to the first central point data;
and performing interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position.
2. The movement track recording method of an operating device according to claim 1, wherein performing data fitting on the first center point data to obtain first fitted center point data corresponding to the first center point data includes:
denoising the first central point data based on a denoising algorithm to obtain denoised first central point data;
and performing data fitting on the denoised first central point data to obtain first fitting central point data.
3. The method for recording a movement locus of an operating device according to claim 2, wherein the target image data includes a current time corresponding to the first center point data, the first center point data includes center point horizontal data and center point longitudinal data, and the data fitting is performed on the denoised first center point data to obtain first fitting center point data, including:
performing data fitting on the central point transverse data in the denoised first central point data based on the current moment to obtain a transverse fitting function corresponding to the central point transverse data;
calculating to obtain transverse fitting data based on the current time and the transverse fitting function;
performing data fitting on the central point longitudinal data in the denoised first central point data based on the current moment to obtain a longitudinal fitting function corresponding to the central point longitudinal data;
calculating to obtain longitudinal fitting data based on the current time and the longitudinal fitting function;
and determining the transverse fitting data and the longitudinal fitting data as first fitting central point data corresponding to the first central point data.
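Claim 3 fits the transverse and longitudinal coordinates against the frame timestamps separately and evaluates the resulting fitting functions at those same timestamps. The patent does not fix the family of fitting functions; the sketch below assumes an ordinary least-squares polynomial fit.

```python
import numpy as np

def fit_centers(times, xs, ys, degree=3):
    """Fit x(t) and y(t) separately and evaluate the fits at the given times.

    A least-squares polynomial of assumed degree stands in for the unspecified
    transverse/longitudinal fitting functions.
    """
    times = np.asarray(times, dtype=float)
    fx = np.poly1d(np.polyfit(times, xs, degree))  # transverse fitting function
    fy = np.poly1d(np.polyfit(times, ys, degree))  # longitudinal fitting function
    return fx(times), fy(times)                    # first fitting central point data
```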
4. The movement track recording method of an operating device according to claim 3, wherein performing an interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position includes:
calculating to obtain a transverse smooth spline curve coefficient based on the current moment, the central point transverse data in the first fitting central point data and a preset parameter;
calculating to obtain a longitudinal smooth spline curve coefficient based on the current moment, the central point longitudinal data in the first fitting central point data and the preset parameter;
selecting a maximum time and a minimum time from current times contained in the target image data;
constructing time data based on the maximum time and the minimum time;
performing interpolation operation on the transverse data of the central point based on the time data and the transverse smooth spline curve coefficient to obtain transverse data of the interpolated central point;
performing interpolation operation on the longitudinal data of the central point based on the time data and the longitudinal smooth spline curve coefficient to obtain the longitudinal data of the interpolated central point;
and obtaining second central point data corresponding to the first central point data at the target position based on the transverse data of the interpolation central point and the longitudinal data of the interpolation central point.
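Claim 4 derives smoothing-spline coefficients from the current times, the fitted coordinates and a preset parameter, builds time data between the minimum and maximum times, and evaluates the splines on that grid. The sketch below uses SciPy's UnivariateSpline; the smooth and step arguments stand in for the unspecified preset parameter and time-grid spacing.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def interpolate_centers(times, fit_xs, fit_ys, smooth=1.0, step=0.04):
    """Smoothing-spline interpolation of the fitted center points.

    `times` is assumed sorted and strictly increasing; `smooth` and `step` are
    illustrative stand-ins for the preset parameter and the grid spacing.
    """
    times = np.asarray(times, dtype=float)
    spline_x = UnivariateSpline(times, fit_xs, s=smooth)   # transverse spline
    spline_y = UnivariateSpline(times, fit_ys, s=smooth)   # longitudinal spline
    t_grid = np.arange(times.min(), times.max() + step, step)  # time data (min..max)
    return t_grid, spline_x(t_grid), spline_y(t_grid)      # second central point data
```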
5. The method according to claim 1, wherein the step of updating second center point data corresponding to the target of the operating device identified by the target image data based on the second center point data corresponding to the reference object, and calculating the movement trajectory of the operating device based on the updated second center point data corresponding to the target of the operating device comprises:
updating second center point data corresponding to targets except the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated moving center point data corresponding to the targets;
and calculating to obtain the movement track of the operating equipment based on the updated movement center point data corresponding to the target of the operating equipment in the target image data.
6. The method for recording the movement track of the operating device according to claim 5, wherein the target other than the reference object is at least one of the operating device, an acquisition device for acquiring original image data, and an operation object corresponding to the target position in the target image data.
7. The method for recording a movement locus of an operating device according to claim 6, wherein when the target is an operating device, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated movement center point data corresponding to the target, comprises:
acquiring second central point data corresponding to the reference object and second central point data corresponding to the operating equipment from the target image data;
and updating second center point data corresponding to the operating equipment based on the second center point data corresponding to the reference object and the second center point data corresponding to the operating equipment to obtain updated moving center point data corresponding to the operating equipment.
8. The method for recording a movement locus of an operating device according to claim 6, wherein when the target is an acquisition device, updating second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object to obtain updated movement center point data corresponding to the target, comprises:
acquiring second central point data corresponding to the reference object and second central point data corresponding to the acquisition equipment from the target image data; acquiring the width and the height of the target image data;
and updating the second central point data corresponding to the acquisition equipment based on the second central point data corresponding to the reference object, the second central point data corresponding to the acquisition equipment and the width and height of the target image data to obtain updated moving central point data corresponding to the acquisition equipment.
9. The method for recording a movement locus of an operating device according to claim 6, wherein when the object is an operating object, updating second center point data corresponding to an object other than the reference object in the object image data based on the second center point data corresponding to the reference object to obtain updated movement center point data corresponding to the object, the method comprising:
acquiring second central point data corresponding to the reference object and second central point data corresponding to the operation target from the target image data;
and updating second center point data corresponding to the operation target based on the second center point data corresponding to the reference object and the second center point data corresponding to the operation target to obtain updated moving center point data corresponding to the operation target.
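Claims 7 to 9 update each target's second central point data relative to the reference object, and claim 8 additionally brings in the width and height of the target image data for the acquisition device. One plausible reading, sketched below, is to express every trajectory in the reference object's frame by subtracting the reference centers and, for the acquisition device, normalising by the frame size; the exact update formula is not spelled out in the claims, so this is an assumption.

```python
import numpy as np

def update_relative_to_reference(ref_centers, target_centers, frame_size=None):
    """Express a target's second central point data relative to the reference object.

    `ref_centers` and `target_centers` are (N, 2) arrays of per-frame centers;
    `frame_size` is an optional (width, height) used only to normalise, echoing
    the acquisition-device case. The subtraction itself is an assumed reading.
    """
    ref = np.asarray(ref_centers, dtype=float)
    tgt = np.asarray(target_centers, dtype=float)
    moved = tgt - ref
    if frame_size is not None:
        moved = moved / np.asarray(frame_size, dtype=float)
    return moved  # updated moving center point data; the ordered sequence forms the track
```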
10. The method for recording the movement track of the operating device according to claim 1, wherein acquiring bounding box data that is identified at the target position in the target image data includes:
and acquiring bounding box data which is identified at the target position in the target image data based on the type mapping dictionary.
11. The movement trace recording method according to claim 10, wherein before acquiring bounding box data identified at the target position in the target image data based on a type mapping dictionary, the method further comprises:
performing target detection on original image data of the operating equipment through an instance segmentation model to obtain target image data of a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the original image data;
constructing a first type mapping sub-dictionary and/or a second type mapping sub-dictionary based on traversing a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the target image data in a time sequence;
generating a type mapping dictionary based on the first type mapping sub-dictionary and/or the second type mapping sub-dictionary.
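Claim 11 obtains the rectangular and/or polygonal bounding boxes by running an instance segmentation model over the original image data, but does not name a model. The sketch below uses torchvision's Mask R-CNN (a recent torchvision version is assumed) purely as a stand-in; polygon boundaries could then be extracted from the returned masks, for example by contour finding.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Stand-in instance segmentation model; the patent does not specify a network.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_targets(frame_rgb):
    """Return rectangular boxes, labels and instance masks for one video frame.

    `frame_rgb` is an H x W x 3 uint8 array; the rectangular boxes feed the first
    type mapping sub-dictionary, the masks the polygonal one.
    """
    with torch.no_grad():
        out = model([to_tensor(frame_rgb)])[0]
    return out["boxes"], out["labels"], out["masks"]
```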
12. The movement trace recording method according to claim 11, wherein the constructing a first-type mapping sub-dictionary and/or a second-type mapping sub-dictionary based on a time-series traversal of a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the target image data includes:
traversing a rectangular surrounding frame displayed at the target position in the target image data based on time sequence, and constructing a first type mapping sub-dictionary containing surrounding frame data of the rectangular surrounding frame;
and/or traversing a polygon bounding box displayed at the target position in the target image data based on time sequence, and constructing a second type mapping sub-dictionary containing bounding box data of the polygon bounding box.
13. The movement trace recording method according to claim 12, wherein the constructing a first type mapping sub-dictionary containing bounding box data of the rectangular bounding box based on time-series traversal of the rectangular bounding box displayed at the target position in the target image data comprises:
acquiring first detection content data corresponding to a rectangular surrounding frame displayed at the target position based on time sequence from the target image data; the first detection content data at least comprises a first target type of a target identified by the rectangular bounding box, a first current time and first boundary data; the first boundary data is bounding box data of the rectangular bounding box;
traversing the first detection content data based on time sequence to construct a first type mapping sub-dictionary; the first type mapping sub-dictionary at least comprises a first time mapping dictionary, the first time mapping dictionary is in one-to-one correspondence with the first target type, and the first target types corresponding to any two first time mapping dictionaries are different.
14. The movement trace recording method of an operating device according to claim 13, wherein constructing a first type mapping sub-dictionary based on time-series traversal of the first detection content data comprises:
traversing the first detection content data based on time sequence to construct a first time mapping dictionary, wherein the first time mapping dictionary comprises at least one time key value pair of first current time and first boundary data, and the first current time and the first boundary data in the time key value pair are obtained from the same first detection content data;
and constructing a first type mapping sub-dictionary based on the first detection content data and the first time mapping dictionary, wherein the first type mapping sub-dictionary comprises at least one first target type and a type key value pair of the first time mapping dictionary, and the first target type and a first current time and first boundary data in the first time mapping dictionary belong to the same first detection content data.
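Claims 13 and 14 arrange the detections into a nested dictionary: each first target type maps to a first time mapping dictionary whose keys are the first current times and whose values are the first boundary data. A short sketch assuming the detections arrive as time-ordered (target_type, current_time, boundary_data) tuples; the tuple layout and the "forceps" example type are assumptions for illustration only.

```python
from collections import defaultdict

def build_first_type_mapping(detections):
    """Build the first type mapping sub-dictionary from rectangular-box detections.

    Result: {target_type: {current_time: boundary_data, ...}, ...}, with one time
    mapping dictionary per target type, as described in claims 13 and 14.
    """
    type_mapping = defaultdict(dict)
    for target_type, current_time, boundary_data in detections:
        type_mapping[target_type][current_time] = boundary_data
    return dict(type_mapping)

# Example with two detections of a hypothetical "forceps" target:
sub_dict = build_first_type_mapping([
    ("forceps", 0.0, (100, 120, 140, 180)),
    ("forceps", 0.5, (104, 121, 145, 183)),
])
```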
15. The movement trace recording method according to claim 12, wherein the step of constructing a second type mapping sub-dictionary containing bounding box data of the polygon bounding box based on a time-series traversal of the polygon bounding box displayed at the target position in the target image data comprises:
acquiring second detection content data corresponding to a polygon bounding box displayed at the target position based on time sequence from the target image data; the second detection content data at least comprises a second target type of the target identified by the polygon bounding box, a second current time and second boundary data; the second boundary data is bounding box data of the polygon bounding box;
compressing the second boundary data in the second detection content data to obtain compressed second detection content data;
calculating to obtain approximate boundary data corresponding to second boundary data based on the second boundary data in the compressed second detection content data;
constructing a second type mapping sub-dictionary based on time-series traversal of the second detection content data and the approximate boundary data; the second type mapping sub-dictionary at least comprises a second time mapping dictionary, the second time mapping dictionary is in one-to-one correspondence with the second target type, and the second target types corresponding to any two second time mapping dictionaries are different.
16. The movement trace recording method of an operating device according to claim 15, wherein constructing a second type mapping sub-dictionary based on time-sequentially traversing the second detection content data and the approximate boundary data comprises:
traversing the second detection content data and the approximate boundary data based on time sequence to construct a second time mapping dictionary, wherein the second time mapping dictionary comprises at least one time key value pair of a second current time and the approximate boundary data, and the second current time and the approximate boundary data in the time key value pair are obtained from the same second detection content data;
and constructing a second type mapping sub-dictionary based on the second detection content data and the second time mapping dictionary, wherein the second type mapping sub-dictionary comprises at least one type key-value pair of a second target type and the second time mapping dictionary, and the second target type and second current time and approximate boundary data in the second time mapping dictionary belong to the same second detection content data.
17. The movement trace recording method according to claim 15 or 16, wherein compressing the second boundary data in the second detected content data to obtain compressed second detected content data includes:
compressing the second boundary data of the second detection content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data;
reducing the one-dimensional compressed boundary data to obtain one-dimensional reduced boundary data corresponding to the one-dimensional compressed boundary data;
calculating the one-dimensional reduction boundary data to obtain compressed second boundary data;
and updating the second detection content through the compressed second boundary data to obtain compressed second detection content data.
18. The movement track recording method of an operating device according to claim 17, wherein compressing the second boundary data of the second detected content data to obtain one-dimensional compressed boundary data corresponding to the second boundary data, includes:
acquiring the width and the height of the target image data, and determining a compression step length;
calculating to obtain an initialization length based on the width and the height of the target image data and the compression step length;
creating one-dimensional compression boundary data corresponding to the second boundary data, wherein the length of the one-dimensional compression boundary data is the initialization length;
performing compression calculation on the second boundary data of the second detection content data based on the compression step and the width of the original image data to obtain a first index value corresponding to the second boundary data;
and calculating to obtain the compression boundary data corresponding to the first index value in the one-dimensional compression boundary data based on the second boundary data and the first index value.
19. The method for recording a movement trajectory of an operating device according to claim 18, wherein the restoring the one-dimensional compressed boundary data to obtain one-dimensional restored boundary data corresponding to the one-dimensional compressed boundary data includes:
creating one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data, wherein the length of the one-dimensional reduction boundary data is the product of the width and the height of the original image data;
reducing and calculating the one-dimensional compression boundary data based on the compression step length and the width of the target image data to obtain a second index value corresponding to the one-dimensional compression boundary data;
and calculating to obtain reduction boundary data corresponding to the second index value in the one-dimensional reduction boundary data based on the one-dimensional compression boundary data and the second index value.
20. The method for recording a movement trace of an operating device according to claim 19, wherein the calculating the one-dimensional restored boundary data to obtain compressed second boundary data includes:
traversing the one-dimensional reduction boundary data based on time sequence to obtain index data, wherein the one-dimensional reduction boundary data corresponding to the index data is larger than a preset value;
calculating to obtain compressed transverse moving data and longitudinal moving data based on the index data and the one-dimensional reduction boundary data;
and generating compressed second boundary data through the transverse movement data and the longitudinal movement data.
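Claims 18 to 20 compress the polygon boundary data into a one-dimensional array whose length is derived from the image width, height and a compression step, then restore it to a width-by-height-long array and read the surviving indices back out as transverse and longitudinal data. The exact index formulas are not given in the claims; the sketch below assumes the common row-major mapping index = (y * width + x) // step, so it is a sketch under that assumption rather than the patented computation.

```python
import numpy as np

def compress_boundary(points, width, height, step=4):
    """Flatten polygon boundary points into a down-sampled one-dimensional array."""
    compressed = np.zeros(width * height // step + 1, dtype=np.uint8)
    for x, y in points:
        compressed[(y * width + x) // step] = 1            # first index value
    return compressed

def restore_boundary(compressed, width, height, step=4, threshold=0):
    """Expand the compressed vector back to width*height and recover (x, y) points."""
    restored = np.zeros(width * height, dtype=np.uint8)    # one-dimensional restored data
    idx = np.nonzero(compressed)[0]
    restored[np.minimum(idx * step, width * height - 1)] = 1   # second index value
    flat = np.nonzero(restored > threshold)[0]             # index data above the preset value
    xs, ys = flat % width, flat // width                    # transverse / longitudinal data
    return np.stack([xs, ys], axis=1)                       # compressed second boundary data
```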
21. The movement trace recording method according to claim 15 or 16, wherein the step of calculating, based on second boundary data in the compressed second detected content data, approximate boundary data corresponding to the second boundary data includes:
acquiring vertex movement data of a target corresponding to each second current moment from second boundary data of the compressed second detection content data; calculating a distance of each of the second boundary data from the vertex movement data;
and calculating to obtain approximate boundary data corresponding to the second boundary data based on the second boundary data with the minimum distance.
22. The method for recording a movement locus of an operating device according to claim 21, wherein one of the objects corresponds to four vertex movement data, and based on the second boundary data with the minimum distance, approximate boundary data corresponding to the second boundary data is calculated, and the method includes:
based on the second boundary data with the minimum distance, four boundary data corresponding to the target in the second boundary data are obtained through calculation;
and when the four pieces of boundary data are all different data or three pieces of boundary data are different in the four pieces of boundary data, determining the four pieces of boundary data as approximate boundary data.
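Claims 21 and 22 approximate each target's boundary by the compressed boundary points closest to its four vertex movement data, keeping them when all four, or at least three, are distinct. A sketch under the assumption that "distance" means the Euclidean distance between each boundary point and each vertex:

```python
import numpy as np

def approximate_boundary(boundary_points, vertex_movement_data):
    """Pick, for each of the four vertex movement points, the nearest boundary point.

    `boundary_points` is an (N, 2) array, `vertex_movement_data` a (4, 2) array.
    The Euclidean metric and the None fallback are assumptions; the claims do not
    say what happens when fewer than three points are distinct.
    """
    pts = np.asarray(boundary_points, dtype=float)
    verts = np.asarray(vertex_movement_data, dtype=float)
    nearest = [tuple(pts[np.argmin(np.linalg.norm(pts - v, axis=1))]) for v in verts]
    if len(set(nearest)) >= 3:   # all four distinct, or three of the four distinct
        return nearest           # approximate boundary data
    return None
```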
23. A movement trace recording apparatus that operates a device, comprising:
the acquisition unit is used for acquiring bounding box data which is marked at the target position in the target image data;
the calculation unit is used for calculating to obtain first central point data at the target position based on the bounding box data;
the operation unit is used for performing data fitting and interpolation operation on the first central point data to obtain second central point data corresponding to the first central point data at the target position;
the selecting unit is used for selecting a reference object from targets identified in the target image data;
the updating unit is used for updating second central point data corresponding to the target of the operating equipment identified by the target image data based on the second central point data corresponding to the reference object, and calculating the moving track of the operating equipment based on the updated second central point data corresponding to the target of the operating equipment;
wherein the operation unit includes:
the fitting subunit is configured to perform data fitting on the first central point data to obtain first fitting central point data corresponding to the first central point data;
and the interpolation subunit is used for carrying out interpolation operation on the first fitting center point data to obtain second center point data corresponding to the first center point data at the target position.
24. The movement trace recording device of the operation device according to claim 23, the fitting subunit comprising:
the denoising module is used for carrying out denoising operation on the first central point data based on a denoising algorithm to obtain denoised first central point data;
and the fitting module is used for performing data fitting on the denoised first central point data to obtain first fitting central point data.
25. The device for recording moving trajectory of operating equipment according to claim 24, wherein the target image data includes a current time corresponding to the first central point data, the first central point data includes central point transverse data and central point longitudinal data, and the fitting module includes:
the fitting submodule is used for performing data fitting on the central point transverse data in the denoised first central point data based on the current moment to obtain a transverse fitting function corresponding to the central point transverse data;
the first calculation submodule is used for calculating to obtain transverse fitting data based on the current time and the transverse fitting function;
the fitting submodule is further configured to perform data fitting on the central point longitudinal data in the first denoised central point data based on the current time to obtain a longitudinal fitting function corresponding to the central point longitudinal data;
the first calculation submodule is further used for calculating to obtain longitudinal fitting data based on the current time and the longitudinal fitting function;
and the determining submodule is used for determining the transverse fitting data and the longitudinal fitting data as first fitting central point data corresponding to the first central point data.
26. The movement trace recording device of the operation device according to claim 25, the interpolation subunit comprising:
the first calculation module is used for calculating to obtain a transverse smooth spline curve coefficient based on the current moment, the central point transverse data in the first fitting central point data and a preset parameter;
the first calculation module is further configured to calculate a longitudinal smooth spline curve coefficient based on the current time, the central point longitudinal data in the first fitting central point data, and the preset parameter;
the selecting module is used for selecting the maximum time and the minimum time from the current time contained in the target image data;
a first construction module for constructing time data based on the maximum time and the minimum time;
the interpolation module is used for carrying out interpolation operation on the transverse data of the central point based on the time data and the transverse smooth spline curve coefficient to obtain transverse data of the interpolated central point;
the interpolation module is further configured to perform interpolation operation on the longitudinal data of the central point based on the time data and the longitudinal smooth spline curve coefficient to obtain longitudinal data of an interpolated central point;
and the first updating module is used for obtaining second central point data corresponding to the first central point data at the target position based on the transverse data of the interpolation central point and the longitudinal data of the interpolation central point.
27. The movement trace recording apparatus of the operation device according to claim 23, the updating unit comprising:
an updating subunit, configured to update second center point data corresponding to a target other than the reference object in the target image data based on the second center point data corresponding to the reference object, to obtain updated moving center point data corresponding to the target;
and the calculating subunit is used for calculating the movement track of the operating equipment based on the updated movement center point data corresponding to the target type of the operating equipment in the target image data.
28. The movement trace recording device of the operation device according to claim 27, wherein the object other than the reference object is at least one of the operation device, an acquisition device for acquiring original image data, and an operation object corresponding to a target position in the target image data.
29. The movement trace recording apparatus of the operation device according to claim 28, when the target type is an operation device, the updating subunit includes:
the first acquisition module is used for acquiring second central point data corresponding to the reference object and second central point data corresponding to the operating equipment from the target image data;
and the second updating module is used for updating second center point data corresponding to the operating equipment based on the second center point data corresponding to the reference object and the second center point data corresponding to the operating equipment to obtain updated moving center point data corresponding to the operating equipment.
30. The movement trace recording apparatus of an operating device according to claim 28, when the target type is an acquisition device, the updating subunit further includes:
the second acquisition module is used for acquiring second central point data corresponding to the reference object and second central point data corresponding to the acquisition equipment from the target image data;
the second obtaining module is further configured to obtain the width and height of the target image data;
and the third updating module is used for updating the second central point data corresponding to the acquisition equipment based on the second central point data corresponding to the reference object, the second central point data corresponding to the acquisition equipment and the width and height of the target image data to obtain updated moving central point data corresponding to the acquisition equipment.
31. The movement trace recording device of the operation apparatus according to claim 29, when the object type is an operation object, the updating subunit further includes:
a third obtaining module, configured to obtain, from the target image data, second center point data corresponding to the reference object and second center point data corresponding to the operation target;
and the fourth updating module is used for updating the second center point data corresponding to the operation target based on the second center point data corresponding to the reference object and the second center point data corresponding to the operation target to obtain the updated moving center point data corresponding to the operation target.
32. The movement track recording device of the operating device according to claim 23, wherein the acquiring unit acquires the bounding box data identified at the target position in the target image data specifically as follows:
and acquiring bounding box data which is identified at the target position in the target image data based on the type mapping dictionary.
33. The movement trace recording apparatus of the operation device according to claim 32, further comprising:
the detection unit is used for carrying out target detection on original image data of the operating equipment through an instance segmentation model before the acquisition unit acquires bounding box data which is identified at the target position in the target image data based on a type mapping dictionary, so as to obtain target image data of a rectangular bounding box displayed at the target position and/or a polygonal bounding box displayed at the target position in the original image data;
the construction unit is used for traversing a rectangular surrounding frame displayed at the target position and/or a polygonal surrounding frame displayed at the target position in the target image data based on time sequence to construct a first type mapping sub-dictionary and/or a second type mapping sub-dictionary;
a generating unit, configured to generate a type mapping dictionary based on the first type mapping sub-dictionary and/or the second type mapping sub-dictionary.
34. The movement trace recording device of the operation device according to claim 33, the constructing unit comprising:
the first construction subunit is configured to traverse a rectangular bounding box displayed at the target position in the target image data based on a time sequence, and construct a first type mapping sub-dictionary containing bounding box data of the rectangular bounding box;
and/or a second construction subunit, configured to traverse a polygon bounding box displayed at the target position in the target image data based on a time sequence, and construct a second type mapping sub-dictionary including bounding box data of the polygon bounding box.
35. The movement trace recording device of the operation device according to claim 34, said first constructing subunit comprising:
a fourth obtaining module, configured to obtain, from the target image data, time-series-based first detection content data corresponding to a rectangular bounding box displayed at the target position; the first detection content data at least comprises a first target type of a target identified by the rectangular bounding box, a first current time and first boundary data; the first boundary data is bounding box data of the rectangular bounding box;
the second construction module is used for traversing the first detection content data based on time sequence and constructing a first type mapping sub-dictionary; the first type mapping sub-dictionary at least comprises a first time mapping dictionary, the first time mapping dictionary is in one-to-one correspondence with the first target type, and the first target types corresponding to any two first time mapping dictionaries are different.
36. The movement trace recording device of the operation device according to claim 35, the second building block comprising:
the first construction submodule is used for traversing the first detection content data based on time sequence and constructing a first time mapping dictionary, wherein the first time mapping dictionary comprises at least one time key value pair of first current time and first boundary data, and the first current time and the first boundary data in the time key value pair are obtained from the same first detection content data;
the first constructing sub-module is further configured to construct a first type mapping sub-dictionary based on the first detection content data and the first time mapping dictionary, where the first type mapping sub-dictionary includes at least one type key-value pair of a first target type and the first time mapping dictionary, and the first target type and a first current time and first boundary data in the first time mapping dictionary belong to a same first detection content data.
37. The movement trace recording device of the operation device according to claim 34, the second constructing subunit comprising:
a fifth obtaining module, configured to obtain, from the target image data, second detection content data corresponding to a polygon bounding box displayed at the target position based on a time sequence; the second detection content data at least comprises a second target type of the target identified by the polygon bounding box, a second current time and second boundary data; the second boundary data is bounding box data of the polygon bounding box;
the compression module is used for compressing the second boundary data in the second detection content data to obtain compressed second detection content data;
the second calculation module is used for calculating to obtain approximate boundary data corresponding to second boundary data based on the second boundary data in the compressed second detection content data;
the third construction module is used for traversing the second detection content data and the approximate boundary data based on time sequence to construct a second type mapping sub-dictionary; the second type mapping sub-dictionary at least comprises a second time mapping dictionary, the second time mapping dictionary is in one-to-one correspondence with the second target type, and the second target types corresponding to any two second time mapping dictionaries are different.
38. The movement trace recording apparatus of the operation device according to claim 37, the third building block comprising:
a second constructing sub-module, configured to traverse the second detection content data and the approximate boundary data based on a time sequence, and construct a second time mapping dictionary, where the second time mapping dictionary includes at least one time key value pair of a second current time and the approximate boundary data, and the second current time and the approximate boundary data in the time key value pair are obtained from the same second detection content data;
the second constructing sub-module is further configured to construct a second type mapping sub-dictionary based on the second detection content data and the second time mapping dictionary, where the second type mapping sub-dictionary includes at least one type key-value pair of a second target type and the second time mapping dictionary, and the second target type and a second current time and approximate boundary data in the second time mapping dictionary belong to the same second detection content data.
39. The movement trace recording apparatus of the operation device according to claim 37 or 38, the compression module comprising:
the compression submodule is used for compressing the second boundary data of the second detection content data to obtain one-dimensional compression boundary data corresponding to the second boundary data;
the restoring submodule is used for restoring the one-dimensional compressed boundary data to obtain one-dimensional restored boundary data corresponding to the one-dimensional compressed boundary data;
the second calculation submodule is used for calculating the one-dimensional reduction boundary data to obtain compressed second boundary data;
and the updating submodule is used for updating the second detection content through the compressed second boundary data to obtain compressed second detection content data.
40. The movement trace recording device of the operation device according to claim 39, the compression sub-module comprising:
the acquisition structure is used for acquiring the width and the height of the target image data and determining the compression step length;
the first calculation structure is used for calculating to obtain an initialization length based on the width and the height of the target image data and the compression step length;
a first creating structure, configured to create one-dimensional compression boundary data corresponding to the second boundary data, where a length of the one-dimensional compression boundary data is the initialization length;
the compression structure is used for carrying out compression calculation on the second boundary data of the second detection content data based on the compression step and the width of the original image data to obtain a first index value corresponding to the second boundary data;
the first calculation structure is further configured to calculate, based on the second boundary data and the first index value, to obtain compressed boundary data corresponding to the first index value in the one-dimensional compressed boundary data.
41. The movement trace recording device of the operation device according to claim 40, the restoration sub-module includes:
a second creating structure, configured to create one-dimensional reduction boundary data corresponding to the one-dimensional compression boundary data, where a length of the one-dimensional reduction boundary data is a product of a width and a height of the original image data;
the reducing structure is used for carrying out reducing calculation on the one-dimensional compression boundary data based on the compression step length and the width of the target image data to obtain a second index value corresponding to the one-dimensional compression boundary data;
and the second calculation structure is used for calculating to obtain the restoration boundary data corresponding to the second index value in the one-dimensional restoration boundary data based on the one-dimensional compression boundary data and the second index value.
42. The movement trace recording device of the operation device according to claim 41, the second calculation sub-module includes:
the traversal structure is used for traversing the one-dimensional reduction boundary data based on time sequence to obtain index data, and the one-dimensional reduction boundary data corresponding to the index data is larger than a preset value;
the third calculation structure is used for calculating and obtaining compressed transverse moving data and longitudinal moving data based on the index data and the one-dimensional reduction boundary data;
a generating structure for generating compressed second boundary data from the lateral movement data and the longitudinal movement data.
43. The movement trace recording device of the operation device according to claim 37 or 38, the second calculation module comprising:
the obtaining submodule is used for obtaining vertex moving data of the target corresponding to each second current moment from second boundary data of the compressed second detection content data;
a third calculation submodule for calculating a distance of each of the second boundary data from the vertex movement data;
and the fourth calculation submodule is used for calculating to obtain approximate boundary data corresponding to the second boundary data based on the second boundary data with the minimum distance.
44. The movement trace recording device of an operating apparatus according to claim 43, wherein one of said objects corresponds to four of said vertex movement data, and said fourth calculation sub-module includes:
a fourth calculation structure, configured to calculate, based on the second boundary data with the minimum distance, four boundary data corresponding to the target in the second boundary data;
a determination structure configured to determine the four pieces of boundary data as approximate boundary data when the four pieces of boundary data are all different data or three pieces of boundary data are different from each other among the four pieces of boundary data.
45. A clinical artificial intelligence assistance system that performs the movement trace recording method of the operation device of any one of claims 1 to 22.
46. A storage medium storing a computer program which, when executed by a processor, implements a movement trace recording method of an operating device according to any one of claims 1 to 22.
47. A computing device comprising the storage medium of claim 46.
CN202110658829.0A 2021-06-15 2021-06-15 Movement track recording method, device, medium and computing equipment of operating equipment Active CN113345046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110658829.0A CN113345046B (en) 2021-06-15 2021-06-15 Movement track recording method, device, medium and computing equipment of operating equipment

Publications (2)

Publication Number Publication Date
CN113345046A CN113345046A (en) 2021-09-03
CN113345046B (en) 2022-04-08

Family

ID=77477005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110658829.0A Active CN113345046B (en) 2021-06-15 2021-06-15 Movement track recording method, device, medium and computing equipment of operating equipment

Country Status (1)

Country Link
CN (1) CN113345046B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299112B (en) * 2021-12-24 2023-01-13 萱闱(北京)生物科技有限公司 Multi-target-based track identification method, device, medium and computing equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1779718A (en) * 2004-11-18 2006-05-31 中国科学院自动化研究所 Visula partitioned drawing device and method for virtual endoscope

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8315689B2 (en) * 2007-09-24 2012-11-20 MRI Interventions, Inc. MRI surgical systems for real-time visualizations using MRI image data and predefined data of surgical tools
CN110946651A (en) * 2013-08-13 2020-04-03 波士顿科学国际有限公司 Computer visualization of anatomical items
CN106504270B (en) * 2016-11-08 2019-12-20 浙江大华技术股份有限公司 Method and device for displaying target object in video
US10932860B2 (en) * 2017-04-28 2021-03-02 The Brigham And Women's Hospital, Inc. Systems, methods, and media for presenting medical imaging data in an interactive virtual reality environment
US20200160060A1 (en) * 2018-11-15 2020-05-21 International Business Machines Corporation System and method for multiple object tracking
EP3726469A1 (en) * 2019-04-17 2020-10-21 Siemens Healthcare GmbH Automatic motion detection in medical image-series
WO2021026705A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Matching relationship determination method, re-projection error calculation method and related apparatus
CN111652072A (en) * 2020-05-08 2020-09-11 北京嘀嘀无限科技发展有限公司 Track acquisition method, track acquisition device, storage medium and electronic equipment
CN112434684B (en) * 2021-01-27 2021-04-27 萱闱(北京)生物科技有限公司 Image display method, medium, device and computing equipment based on target detection

Also Published As

Publication number Publication date
CN113345046A (en) 2021-09-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant