CN113343999B - Target boundary recording method and device based on target detection and computing equipment


Info

Publication number
CN113343999B
Authority
CN
China
Prior art keywords
data
boundary
target
boundary data
time
Legal status
Active
Application number
CN202110658862.3A
Other languages
Chinese (zh)
Other versions
CN113343999A (en)
Inventor
乔元风
曾凡
Current Assignee
Henan Xuanwei Digital Medical Technology Co ltd
Xuanwei Beijing Biotechnology Co ltd
Original Assignee
Henan Xuan Yongtang Medical Information Technology Co ltd
Xuanwei Beijing Biotechnology Co ltd
Application filed by Henan Xuan Yongtang Medical Information Technology Co ltd, Xuanwei Beijing Biotechnology Co ltd filed Critical Henan Xuan Yongtang Medical Information Technology Co ltd
Priority to CN202110658862.3A
Publication of CN113343999A
Application granted
Publication of CN113343999B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Abstract

The embodiment of the invention provides a target boundary recording method, apparatus and computing device based on target detection. The method comprises: performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and second image data displaying a second preset form identifier at the target position; acquiring first boundary data of the first image data and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target; and acquiring second boundary data of the second image data and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target. The method and apparatus can automatically identify the boundary of a target in the original image and automatically store the identified boundary data in association with the target type of the target, improving the efficiency of labeling images.

Description

Target boundary recording method and device based on target detection and computing equipment
Technical Field
Embodiments of the invention relate to the field of artificial intelligence, and in particular to a target boundary recording method, apparatus and computing device based on target detection.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
In recent years, to improve target detection accuracy, deep-learning-based neural network models have been used for target detection on images and videos. Before such a model is put into use, however, it must be trained on labeled pictures. Currently, the range mask of a target in a picture can be identified through semantic segmentation, so that the target is segmented based on the range mask. In practice, however, to guarantee the accuracy of labeling the segmented pictures, the pictures generally must be labeled manually; this process is complex and difficult to operate, so the efficiency of labeling pictures is low.
Disclosure of Invention
In this context, embodiments of the present invention are intended to provide a target boundary recording method, apparatus and computing device based on target detection.
In a first aspect of embodiments of the present invention, there is provided a target boundary recording method based on target detection, including:
performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and second image data displaying a second preset form identifier at the target position;
acquiring first boundary data of the first image data, and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target;
and acquiring second boundary data of the second image data, and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
In an embodiment of the present invention, where the original image data is time-series image data, acquiring the first boundary data of the first image data and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target comprises:
acquiring first detection content data based on time sequence from the first image data; the first detection content data at least comprises the target type of the target identified in the first image data, the current time and the first boundary data;
traversing the first detection content data in time sequence to construct a type mapping dictionary; the type mapping dictionary comprises at least one time mapping dictionary, the time mapping dictionaries correspond to target types one-to-one, and the target types corresponding to any two time mapping dictionaries are different;
and storing the first boundary data in the first detection content data into a first boundary data storage file corresponding to the target type of the target based on the type mapping dictionary.
In an embodiment of the present invention, traversing the first detection content data in time sequence to construct a type mapping dictionary comprises:
traversing the first detection content data in time sequence to construct a time mapping dictionary, wherein the time mapping dictionary comprises at least one time key-value pair of a current time and first boundary data, and the current time and the first boundary data in a time key-value pair are obtained from the same first detection content data;
and constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, wherein the type mapping dictionary comprises at least one type key-value pair of a target type and a time mapping dictionary, and the target type and the current times and first boundary data in its time mapping dictionary belong to the same first detection content data.
In an embodiment of the present invention, after constructing the type mapping dictionary based on the first detection content data and the time mapping dictionary, the method further comprises:
when a time key-value pair in which the current time corresponds to a plurality of first boundary data is detected in the time mapping dictionary, calculating an average value of the plurality of first boundary data;
determining a time key-value pair of the current time and the average value based on the average value.
In an embodiment of the present invention, the first boundary data includes a horizontal numerical value, a vertical numerical value, an image width and an image height corresponding to the first image data.
In an embodiment of the present invention, acquiring the second boundary data of the second image data and storing the second boundary data in a second boundary data storage file corresponding to the target type of the target comprises:
acquiring a mask image from the second image data;
performing image processing on the mask image to obtain second boundary data;
and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
In an embodiment of the present invention, the image processing the mask image to obtain second boundary data includes:
carrying out gray scale transformation on the mask image to obtain a gray scale mask image corresponding to the mask image;
calculating the gray mask image through a multi-level edge detection algorithm to obtain mask boundary data;
and carrying out binarization processing on the mask boundary data to obtain second boundary data.
In a second aspect of embodiments of the present invention, there is provided a target boundary recording apparatus based on target detection, comprising:
the detection unit is used for performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and second image data displaying a second preset form identifier at the target position;
the first acquisition unit is used for acquiring first boundary data of the first image data and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target;
and the second acquisition unit is used for acquiring second boundary data of the second image data and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
In a third aspect of embodiments of the present invention, there is provided a clinical artificial intelligence assistance system for performing the method of any one of the first aspect.
In a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the first aspect.
In a fifth aspect of embodiments of the present invention, there is provided a computing device comprising the storage medium of the fourth aspect.
According to the target boundary recording method, apparatus and computing device based on target detection of the embodiments of the present invention, target detection can be performed on original image data through an instance segmentation model to obtain the position of a target in the original image data. The target position can be marked in a first preset form and in a second preset form to obtain first image data corresponding to the first preset form and second image data corresponding to the second preset form, so that first boundary data of the first image data and second boundary data of the second image data can be identified, and the first and second boundary data can each be stored in association with the corresponding target type. The invention can therefore automatically identify the boundary of a target in the original image and automatically store the identified boundary data in association with the target type of the target, improving the efficiency of labeling pictures.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a target boundary recording method based on target detection according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a target boundary recording method based on target detection according to another embodiment of the present invention;
FIG. 3 is a structural diagram of a first boundary data storage file according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the file naming format of a first boundary data storage file according to an embodiment of the present invention;
FIG. 5 is a mask image in an embodiment of the invention;
FIG. 6 is a gray mask image generated according to an embodiment of the present invention;
FIG. 7 is a mask boundary map generated according to an embodiment of the present invention;
FIG. 8 is a binarized mask boundary map generated according to an embodiment of the present invention;
FIG. 9 is a structural diagram of a second boundary data storage file according to an embodiment of the present invention;
FIG. 10 is a schematic interface diagram of a clinical artificial intelligence assistance system according to an embodiment of the present invention;
FIG. 11 is a schematic view of the examination interface output by the operation device of a clinical artificial intelligence assistance system according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a target boundary recording apparatus based on target detection according to an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a medium according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiments of the present invention, a target boundary recording method, apparatus and computing device based on target detection are provided.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Exemplary method
Referring to fig. 1, fig. 1 is a schematic flowchart of a target boundary recording method based on target detection according to an embodiment of the present invention. It should be noted that the embodiments of the present invention can be applied in any applicable scenario. The method includes:
step S101, performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and obtain second image data displaying a second preset form identifier at the target position;
step S102, acquiring first boundary data of the first image data, and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target;
step S103, obtaining second boundary data of the second image data, and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
The target boundary recording method based on target detection is applied in scenarios with a limited field of view, a complex operating environment and the like: target detection is performed on image data collected in the scenario, and the boundary of each detected target is recorded. Such scenarios include, but are not limited to, operating rooms, examination rooms, construction tunnels and mechanical inspection scenarios.
The invention can perform target detection on the original image data through the instance segmentation model to obtain the position of the target in the original image data, and the target position can be marked in a first preset form and a second preset form to obtain first image data corresponding to the first preset form and second image data corresponding to the second preset form. The first boundary data of the first image data and the second boundary data of the second image data can thus be identified and then stored in association with their corresponding target types. It can be seen that the invention can automatically identify the boundary of the target in the original image and automatically store the identified boundary data in association with the target type of the target, improving the efficiency of labeling pictures.
The following describes, with reference to the accompanying drawings, how the identified boundary data are automatically stored in association with the target type of the target, improving the efficiency of labeling pictures:
in an embodiment of the present invention, an Instance Segmentation (Instance Segmentation) model may be constructed through a neural network, and the raw image data may be a picture or video data acquired by an image acquisition device (e.g., a camera, an endoscope, etc.). The raw image data usually includes objects to be detected, for example, when the raw image data is an intra-cavity image of a patient acquired by an endoscope, objects such as a lesion region are usually detected from the intra-cavity image, and one or more objects can be detected from the same raw image data.
In the embodiment of the present invention, when a target is detected from the original image data by the instance segmentation model, the target may be identified in two ways: the target position may be identified in a first preset form, and it may also be identified in a second preset form. The first preset form may be a bounding box (Bounding-Box, BBox), whose shape may vary; for example, it may be a rectangular or a circular bounding box. The second preset form may be a mask (Mask) of irregular shape: the target detected by the instance segmentation model may be semantically segmented to obtain its target position in the original image data, and a mask may then be generated over the area corresponding to that position; the mask is the identification of the target in the second preset form.
Furthermore, first image data, in which the target is identified at its target position in the first preset form, and second image data, in which the target is identified at its target position in the second preset form, can be obtained from the original image data. The first image data may include the first boundary data of the bounding box corresponding to the first preset form, from which the exact position of the bounding box in the original image data can be determined; the second image data may include the second boundary data of the irregular mask corresponding to the second preset form, from which the shape and exact position of the mask in the original image data can be determined. The target type of the target can then be determined, and the first boundary data stored into a first boundary data storage file corresponding to that target type, associating the first boundary data with the target type of the target contained in the original image data; likewise, the second boundary data may be stored into a second boundary data storage file corresponding to the target type, associating the second boundary data with that target type.
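For illustration, the following minimal Python sketch shows how the output of an instance segmentation model could be split into the first preset form (bounding box) and the second preset form (mask). The model interface, the key names and the corner-to-[x, y, w, h] conversion are assumptions; the embodiment does not prescribe a concrete model API.

    import numpy as np

    def detect_targets(model, frame):
        # Run the (assumed) instance segmentation model on one frame of
        # original image data; each detection is assumed to carry a target
        # type label, a corner-form box and a binary mask.
        results = []
        for det in model(frame):
            x1, y1, x2, y2 = det["box"]
            bbox = [x1, y1, x2 - x1, y2 - y1]    # first preset form: BBox = [x, y, w, h]
            mask = det["mask"].astype(np.uint8)  # second preset form: mask region
            results.append((det["label"], bbox, mask))
        return results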
Referring to fig. 2, fig. 2 is a schematic flowchart of a target boundary recording method based on target detection according to another embodiment of the present invention, which includes:
step S201, performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and second image data displaying a second preset form identifier at the target position; the original image data is time-series image data;
step S202, acquiring first detection content data based on time sequence from the first image data; the first detection content data at least comprises the target type of the target identified in the first image data, the current time and the first boundary data;
step S203, traversing the first detection content data in time sequence to construct a type mapping dictionary; the type mapping dictionary comprises at least one time mapping dictionary, the time mapping dictionaries correspond to target types one-to-one, and the target types corresponding to any two time mapping dictionaries are different;
step S204, storing, based on the type mapping dictionary, the first boundary data in the first detection content data into a first boundary data storage file corresponding to the target type of the target.
By implementing the above steps S201 to S204, the first detection content data based on time sequence may be acquired from the first image data, and the type mapping dictionary may be constructed from the target type, current time and first boundary data contained in it. The type mapping dictionary represents the mapping relationship among the target type, the current time and the first boundary data, so the first boundary data can be stored in association with the target type of the target based on it, improving the accuracy of the association between the first boundary data and the target type.
In an embodiment of the present invention, the first boundary data includes a horizontal numerical value, a vertical numerical value, an image width, and an image height corresponding to the first image data. Therefore, the data types contained in the first boundary data are comprehensive, and the position of the target can be accurately determined through the first boundary data.
In this embodiment, each frame of the original image data may be subjected to target detection by the instance segmentation model, so that multiple targets, ordered in time sequence, may be obtained from the original image data.
In this embodiment of the present invention, the first detection content data may include the time-sequence-based target type, current time and first boundary data, and the target type and first boundary data corresponding to a current time may be combined into a tuple single_tuple, where single_tuple may include the current time, the target type label corresponding to that time, and the first boundary data BBox:
single_tuple=(time,label,BBox)
The current time may be the frame count of each image frame in video-type original image data, and BBox may represent the boundary data of the rectangular bounding box corresponding to the first boundary data. For example, a planar coordinate system may be established on each frame of the original image data; BBox may then include the horizontal value x and the vertical value y of one vertex of the rectangular bounding box in that coordinate system, together with the image width w and the image height h of the rectangular bounding box, specifically:
BBox=[x,y,w,h]
further, a queue (e.g., a linked list) matrix _ array may be created, the tuple single _ tuple corresponding to each frame image in the original image data may be stored in the queue matrix _ array, and the storage manner may sequentially store the tuple single _ tuple corresponding to each frame image in the queue matrix _ array based on a time sequence.
As an optional implementation, traversing the first detection content data in time sequence to construct the type mapping dictionary in step S203 may specifically comprise the following steps:
traversing the first detection content data in time sequence to construct a time mapping dictionary, wherein the time mapping dictionary comprises at least one time key-value pair of a current time and first boundary data, and the current time and the first boundary data in a time key-value pair are obtained from the same first detection content data;
and constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, wherein the type mapping dictionary comprises at least one type key-value pair of a target type and a time mapping dictionary, and the target type and the current times and first boundary data in its time mapping dictionary belong to the same first detection content data.
In this embodiment, a time mapping dictionary may first be constructed from the current time and the first boundary data in the first detection content data, representing the mapping between the current time and the first boundary data; a type mapping dictionary may then be constructed from the target type in the first detection content data and the time mapping dictionary, representing the mapping between each target type and its time mapping dictionary. From the type mapping dictionary, the mapping between a target type and the current times and first boundary data contained in its time mapping dictionary can be determined, ensuring the accuracy of the mapping relationships in the constructed type mapping dictionary.
In the embodiment of the present invention, a type mapping dictionary coord_dict may be constructed based on the first detection content data. coord_dict may include one or more key-value pairs: each key (Key) is a target type label from the first detection content data, the target types label of any two keys differ, and the value (Value) mapped by a key is a time mapping dictionary label_dict; the current times and first boundary data contained in the label_dict corresponding to a target type label all correspond to that target type. The mapping relationship is as follows:
coord_dict[label]=label_dict
In the embodiment of the present invention, a time mapping dictionary label_dict may be constructed based on the first detection content data. label_dict may include one or more key-value pairs: each key is a current time from the first detection content data, the current times of any two keys differ, and the value mapped by a key is the first boundary data BBox corresponding to that current time. The mapping relationship is as follows:
label_dict[time]=BBox
As can be seen, the type mapping dictionary coord_dict may include one or more target type label keys, each of which maps to a time mapping dictionary label_dict whose key-value pairs correspond to that target type; a time mapping dictionary label_dict may include one or more current time keys, each of which maps to the first boundary data BBox corresponding to that current time.
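A sketch of the traversal that builds these dictionaries, assuming the queue of tuples from the earlier sketch; keeping a list of boxes per current time anticipates the averaging step described next:

    def build_coord_dict(matrix_array):
        coord_dict = {}
        for time, label, bbox in matrix_array:
            # coord_dict maps each target type label to its label_dict;
            # label_dict maps each current time to the boxes seen at that time.
            label_dict = coord_dict.setdefault(label, {})
            label_dict.setdefault(time, []).append(bbox)
        return coord_dict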
Further, after constructing a type mapping dictionary based on the first detected content data and the time mapping dictionary, the method may further include the steps of:
when a time key-value pair in which the current time corresponds to a plurality of first boundary data is detected in the time mapping dictionary, calculating an average value of the plurality of first boundary data;
determining a time key-value pair of the current time and the average value based on the average value.
In this implementation, when a time key-value pair in which the current time corresponds to a plurality of first boundary data is detected in the time mapping dictionary, the average of the plurality of first boundary data can be calculated, so that the current time maps to the average value, simplifying the mapping relationships in the time mapping dictionary.
In the embodiment of the present invention, if the time mapping dictionary label_dict contains a time key-value pair in which one current time corresponds to multiple first boundary data BBox, multiple bounding boxes exist for the target position in the same frame, and the exact boundary data of the target position cannot be determined. Therefore, the average of the multiple first boundary data may be calculated and used as the first boundary data BBox corresponding to that current time. The specific calculation is as follows:
BBox = (BBox_1 + BBox_2 + ... + BBox_n) / n
where n is the number of first boundary data BBox corresponding to the current time, and the sum and division are taken element-wise over [x, y, w, h].
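Continuing the sketch, the averaging can be taken element-wise over the [x, y, w, h] values; this reading of the formula is an assumption, though a natural one:

    import numpy as np

    def merge_boxes(label_dict):
        # Collapse each current time that maps to several BBox values
        # into their element-wise average [x, y, w, h].
        for time, boxes in label_dict.items():
            label_dict[time] = np.mean(boxes, axis=0).tolist()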
In addition, the type mapping dictionary coord_dict may be traversed, and each time mapping dictionary label_dict in coord_dict may be stored into the first boundary data storage file file1_label of its corresponding target type label.
For example, the type mapping dictionary coord_dict may be traversed and a first boundary data storage file file1_label created for each target type label in coord_dict, with a horizontal-axis array x1_arr, a vertical-axis array y1_arr, a width array w1_arr, a height array h1_arr and a time array t1_arr initialized in each file1_label. The time mapping dictionary label_dict mapped by each target type label may then be traversed, and each current time and its first boundary data BBox stored into the file1_label corresponding to that target type: the current time into the time array t1_arr, the horizontal value x of the BBox into the horizontal-axis array x1_arr, the vertical value y into the vertical-axis array y1_arr, the image width w into the width array w1_arr, and the image height h into the height array h1_arr. When the traversal of coord_dict is complete, the first boundary data storage file file1_label corresponding to each target type label has been generated and may be saved.
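A sketch of the per-type file generation, following the Fig. 3 layout (one row per field, one column per detection time); the csv file name pattern is an assumption suggested by Fig. 4:

    import csv

    def save_first_boundary_files(coord_dict):
        for label, label_dict in coord_dict.items():
            t1_arr, x1_arr, y1_arr, w1_arr, h1_arr = [], [], [], [], []
            for time in sorted(label_dict):
                x, y, w, h = label_dict[time]
                t1_arr.append(time)
                x1_arr.append(x)
                y1_arr.append(y)
                w1_arr.append(w)
                h1_arr.append(h)
            # file1_label: one csv file per target type, rows t, x, y, w, h
            with open(f"file1_{label}.csv", "w", newline="") as f:
                writer = csv.writer(f)
                for name, arr in zip(("t", "x", "y", "w", "h"),
                                     (t1_arr, x1_arr, y1_arr, w1_arr, h1_arr)):
                    writer.writerow([name, *arr])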
Referring to fig. 3 and 4 together, fig. 3 is a schematic structural diagram of a first boundary data storage file according to an embodiment of the present invention, and fig. 4 illustrates its file naming format. In fig. 3, t represents the current time, x the horizontal value in the first boundary data BBox, y the vertical value, w the image width in the first image data, and h the image height; t corresponds to x, y, w and h, i.e., the data in each column of fig. 3 correspond one-to-one. For example, in the first column t is 85, x is 90, y is 98, w is 600 and h is 417, meaning that current time 85 corresponds to horizontal value 90, vertical value 98, image width 600 and image height 417. Fig. 4 shows the naming format of the csv file generated from the data contained in the first boundary data storage file of fig. 3; one target type corresponds to one first boundary data storage file in csv format.
Step S205, acquiring a mask image from the second image data;
step S206, carrying out image processing on the mask image to obtain second boundary data;
step S207, storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
By implementing the above steps S205 to S207, the mask image may be acquired from the second image data, and then the second boundary data corresponding to the mask image may be determined, so that the second boundary data is stored in association with the target type, and the accuracy of association between the second boundary data and the target type is ensured.
In the embodiment of the present invention, the mask image may be a polygonal or irregular mask, and the mask region is filled with color values, so that it is only necessary to record boundary values of the mask image without recording numerical values in the mask region, that is, the mask image may be processed to obtain the second boundary data of the mask image.
As an optional implementation, performing image processing on the mask image to obtain the second boundary data in step S206 may specifically comprise the following steps:
carrying out gray scale transformation on the mask image to obtain a gray scale mask image corresponding to the mask image;
calculating the gray mask image through a multi-level edge detection algorithm to obtain mask boundary data;
and carrying out binarization processing on the mask boundary data to obtain second boundary data.
In this implementation, the gray mask image can be obtained by gray-scale transformation of the mask image, the mask boundary data obtained by operating on the gray mask image with a multi-level edge detection algorithm, and the second boundary data obtained by binarizing the mask boundary data, improving the accuracy of the second boundary data.
In the embodiment of the invention, the mask image may be subjected to gray-scale transformation through a normalized image matrix, transforming it into a gray mask image. Referring to fig. 5, fig. 5 is a mask image: the rectangular bounding box in fig. 5 identifies a target that may be a yellow cauliflower-shaped tumor, the specific location area of the tumor is further identified inside the rectangular bounding box, and that location area is filled with distinct color values; the area filled with distinct color values may be determined as the mask, giving the mask image. Referring to fig. 6, fig. 6 is a gray mask image: the color values of the mask in the mask image of fig. 5 may be converted to determine the gray mask corresponding to the mask, yielding a gray mask image containing the gray mask. The gray mask image may then be operated on by a multi-level edge detection algorithm (e.g., the Canny edge detection operator) to obtain the boundary of the gray mask; referring to fig. 7, fig. 7 is a mask boundary map, which may represent the mask boundary data of the mask image. Finally, the mask boundary data in the mask boundary map may be binarized, that is, an inversion operation may be performed on each boundary pixel in the mask boundary map, to obtain the binarized mask boundary, which may be determined as the second boundary data; referring to fig. 8, fig. 8 is a binarized mask boundary map.
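For illustration, the pipeline just described can be sketched in a few lines of Python with OpenCV; the use of cv2 and the Canny thresholds are assumptions, since the embodiment only names a multi-level edge detection algorithm such as the Canny operator:

    import cv2

    def mask_to_second_boundary(mask_image):
        # Gray-scale transformation of the color-filled mask image
        gray = cv2.cvtColor(mask_image, cv2.COLOR_BGR2GRAY)
        # Multi-level edge detection yields the mask boundary data
        edges = cv2.Canny(gray, 100, 200)
        # Binarize by inverting each boundary pixel: boundary pixels
        # become 0, background becomes 255 (the binarized boundary map)
        _, boundary = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY_INV)
        return boundary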
In addition, a second boundary data storage file file2_label corresponding to each target type may be created, and a horizontal-axis array x2_arr, a vertical-axis array y2_arr and a time array t2_arr initialized in advance in each file2_label. The second boundary data obtained for any frame may be stored into the file2_label corresponding to the target type of the target in that frame: the current time into the time array t2_arr, the horizontal value x of each coordinate point in the second boundary data into the horizontal-axis array x2_arr, and the vertical value y of each coordinate point into the vertical-axis array y2_arr. When the traversal of every frame of the original image data is complete, the second boundary data storage file file2_label corresponding to each target type label has been generated and may be saved.
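A companion sketch for this storage step, assuming the binarized boundary map returned by the previous sketch (boundary pixels equal to 0) and a csv rendering of the t2/x2/y2 arrays in the Fig. 9 layout; the file naming is an assumption:

    import csv
    import numpy as np

    def save_second_boundary(label, time, boundary_map):
        # Coordinate points on the mask boundary (the inverted, zero pixels)
        ys, xs = np.where(boundary_map == 0)
        with open(f"file2_{label}.csv", "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([time, "x", *xs.tolist()])  # horizontal values
            writer.writerow([time, "y", *ys.tolist()])  # vertical values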
Referring to fig. 9, fig. 9 is a schematic structural diagram of a second boundary data storage file according to an embodiment of the present invention. Column A in fig. 9 may represent the current time; column B indicates whether the row holds the x or the y values corresponding to that time, where x denotes the horizontal value of each coordinate point in the second boundary data and y the vertical value; the data in columns C to X are the horizontal or vertical values of those coordinate points. Taking the first and second rows as an example: column A of both rows is current time 85; column B of the first row indicates the horizontal values x, so columns C to X of the first row are the specific horizontal values of each coordinate point in the second boundary data corresponding to time 85; column B of the second row indicates the vertical values y, so columns C to X of the second row are the specific vertical values. The current time of each row corresponds one-to-one with the horizontal or vertical values of the coordinate points in that row's second boundary data.
Referring to fig. 10 and fig. 11 together, fig. 10 is a schematic interface diagram of a clinical artificial intelligence assistance system according to an embodiment of the invention, and fig. 11 is a schematic view of the examination interface output by the operation device of the clinical artificial intelligence assistance system. The system corresponding to fig. 10 may control the operation device to intelligently identify intraluminal lesions of a patient, and fig. 11 may be the examination interface shown while the operation device does so.
Specifically, fig. 10 may include five regions: region ① may be a time selection region, region ② a patient search region, region ③ a system management region, region ④ an abnormal tissue region, and region ⑤ a parameter setting region, wherein:
region ① can select the patient to be viewed according to the input time and display the patient information to the system user (e.g., a doctor);
region ② comprises a patient search sub-area, which can quickly query a patient's case information from the patient name input by the system user, and a patient information output sub-area, which can output the retrieved case information;
region ③ at least comprises a new-patient sub-area, a system connection sub-area, a system information sub-area and a system operation sub-area. The new-patient sub-area can create patient case information from the patient information input by the system user when the case information of the current patient cannot be found or the system is not connected to the workstation system; the system connection sub-area can connect the current system to the workstation system; the system information sub-area can output the version number, company profile, copyright statement and similar information about the clinical artificial intelligence assistance system; the system operation sub-area can respond to instructions input by the system user to minimize, resize or close the system interface;
region ④ at least comprises a name sub-area and an abnormal tissue screenshot buffer sub-area; the name sub-area can output the name of the abnormal tissue in the screenshot buffer, and the screenshot buffer can output screenshots of suspected lesion areas in the patient's body;
region ⑤ at least comprises a transparency sub-area, an automatic creation sub-area, a recognition probability sub-area, a screenshot probability sub-area, an automatic screenshot sub-area, a video playback sub-area, a live broadcast sub-area and a storage sub-area. The transparency sub-area can set the transparency of the mask covering a lesion area in the patient's body; the automatic creation sub-area can automatically create patient information with a placeholder name when the workstation is not connected or an examination is started before the patient record was created; the recognition probability sub-area can set the probability with which the clinical artificial intelligence assistance system recognizes a lesion area of the patient; the screenshot probability sub-area can set the screenshot probability, a screenshot of the lesion area being taken when the lesion probability recognized by the system reaches it; the automatic screenshot sub-area can set the system to screenshot lesion areas automatically; the video playback sub-area can select a video of a patient examination to be viewed and play it; the live broadcast sub-area can directly open a live page when the system is connected to the workstation and the patient information has been transmitted; and the storage sub-area can select image information to be saved or deleted.
The clinical artificial intelligence assistance system whose examination interface is shown in fig. 11 can control the operation device and can perform any of the steps of the target boundary recording method based on target detection in fig. 1 and fig. 2.
In addition, fig. 11 may include 4 regions, where a may be a result identification region, B may be an endoscope vision region, C may be a prompt region, and D may be a picture buffer region, where:
region A at least comprises a lesion probability sub-region, an electrotome probability sub-region, an electric-burn/bleeding probability sub-region and a visual field prompt sub-region. The lesion probability sub-region can judge the lesion area according to a preset recognition probability and output the lesion probability and a conclusion; the electrotome probability sub-region can recognize the surgical instrument and output its recognition probability and conclusion; the electric-burn/bleeding probability sub-region can judge a bleeding area or a burn area according to a preset recognition probability and output the bleeding/burn probability and a conclusion; the visual field prompt sub-region can prompt the clarity of the current endoscope's field of view;
the region B can output an intracavity image of the patient acquired by an endoscope, and when a lesion region, a bleeding region or a burn region is identified, a mask is output on the lesion region, the bleeding region or the burn region according to preset transparency;
the region C can detect the intracavity image of the patient acquired by the endoscope and output the conclusion obtained by the detection;
region D may store the screenshots of the patient's intracavity images taken by the system and output them in region D.
The method and the device can automatically identify the boundary of a target in the original image and automatically store the identified boundary data in association with the target type of the target, improving the efficiency of labeling pictures. The invention can further improve the accuracy of the association between the first boundary data and the target type, ensure that the position of the target can be accurately determined from the first boundary data, ensure the accuracy of the mapping relationships in the constructed type mapping dictionary, simplify the mapping relationships in the time mapping dictionary, ensure the accuracy of the association between the second boundary data and the target type, and improve the accuracy of the second boundary data.
Exemplary devices
Having described the method of an exemplary embodiment of the present invention, a target boundary recording apparatus based on target detection of an exemplary embodiment of the present invention will next be described with reference to fig. 12; the apparatus comprises:
a detecting unit 1201, configured to perform target detection on original image data through an instance segmentation model, to obtain first image data in which a target position in the original image data displays a first preset form identifier, and obtain second image data in which the target position displays a second preset form identifier;
a first obtaining unit 1202, configured to obtain first boundary data of the first image data obtained by the detecting unit 1201, and store the first boundary data in a first boundary data storage file corresponding to a target type of the target;
a second obtaining unit 1203, configured to obtain second boundary data of the second image data obtained by the detecting unit 1201, and store the second boundary data in a second boundary data storage file corresponding to the target type of the target.
Exemplary Medium
Having described the method and apparatus of the exemplary embodiment of the present invention, a computer-readable storage medium of the exemplary embodiment of the present invention is next described with reference to fig. 13. Fig. 13 shows a computer-readable storage medium in the form of an optical disc 130 with a computer program (i.e., a program product) stored thereon. When executed by a processor, the computer program implements the steps described in the above method embodiment, for example: performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and second image data displaying a second preset form identifier at the target position; acquiring first boundary data of the first image data and storing it into a first boundary data storage file corresponding to the target type of the target; and acquiring second boundary data of the second image data and storing it into a second boundary data storage file corresponding to the target type of the target. The specific implementation of each step is not repeated here.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
Exemplary computing device
Having described the methods, media and apparatus of exemplary embodiments of the present invention, a computing device for target boundary recording based on target detection of an exemplary embodiment of the present invention is next described with reference to FIG. 14.
FIG. 14 illustrates a block diagram of an exemplary computing device 140, which computing device 140 may be a computer system or server, suitable for use in implementing embodiments of the present invention. The computing device 140 shown in FIG. 14 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 14, components of computing device 140 may include, but are not limited to: one or more processors or processing units 1401, a system memory 1402, and a bus 1403 connecting the various system components (including the system memory 1402 and the processing unit 1401).
Computing device 140 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computing device 140 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 1402 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 14021 and/or cache memory 14022. The computing device 140 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 14023 may be provided for reading from and writing to a non-removable, non-volatile magnetic medium (not shown in FIG. 14 and commonly referred to as a "hard drive"). Although not shown in FIG. 14, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk (e.g., a CD-ROM, DVD-ROM or other optical media) may be provided. In these cases, each drive may be connected to bus 1403 by one or more data media interfaces. The system memory 1402 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 14025 having a set (at least one) of program modules 14024 may be stored, for example, in system memory 1402, and such program modules 14024 include but are not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. Program modules 14024 generally carry out the functions and/or methodologies of embodiments of the present invention as described herein.
Computing device 140 may also communicate with one or more external devices 1404 (e.g., a keyboard, pointing device, display, etc.). Such communication may occur via input/output (I/O) interfaces 1405. Also, computing device 140 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through network adapter 1406. As shown in FIG. 14, the network adapter 1406 communicates with other modules of the computing device 140, such as the processing unit 1401, over the bus 1403. It should be appreciated that although not shown in FIG. 14, other hardware and/or software modules may be used in conjunction with computing device 140.
The processing unit 1401 executes various functional applications and data processing by running the program stored in the system memory 1402, for example: performing target detection on the original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and second image data displaying a second preset form identifier at the target position; acquiring first boundary data of the first image data and storing it into a first boundary data storage file corresponding to the target type of the target; and acquiring second boundary data of the second image data and storing it into a second boundary data storage file corresponding to the target type of the target. The specific implementation of each step is not repeated here. It should be noted that although several units/modules or sub-units/sub-modules of the target boundary recording apparatus based on target detection are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided among a plurality of units/modules.
In the description of the present invention, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions of some technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered thereby. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be broken down into multiple steps.
In view of the above description, the embodiments of the present invention provide the following technical solutions (hereinafter referred to as schemes), without being limited thereto:
1. a target boundary recording method based on target detection comprises the following steps:
performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and to obtain second image data displaying a second preset form identifier at the target position;
acquiring first boundary data of the first image data, and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target;
and acquiring second boundary data of the second image data, and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
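For illustration only, a minimal sketch of the detection step of scheme 1, assuming a torchvision Mask R-CNN stands in for the instance segmentation model (the scheme does not prescribe a particular network) and assuming a 0.5 confidence threshold; the boxes correspond to the first preset form and the masks to the second:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Hypothetical stand-in for the instance segmentation model of scheme 1;
# the scheme does not prescribe a particular network.
model = maskrcnn_resnet50_fpn(pretrained=True).eval()

def detect_targets(original_image):
    """Detect targets in one frame of original image data and return the
    raw material for both preset form identifiers: bounding boxes
    (first preset form) and irregular-shape masks (second preset form)."""
    with torch.no_grad():
        output = model([original_image])[0]   # 3xHxW float tensor in [0, 1]
    keep = output["scores"] > 0.5             # assumed confidence threshold
    return {
        "boxes": output["boxes"][keep],       # (N, 4): x1, y1, x2, y2
        "labels": output["labels"][keep],     # target types
        "masks": output["masks"][keep],       # (N, 1, H, W) soft masks
    }
```

Rendering the returned boxes onto the frame would yield the first image data, and rendering the masks would yield the second image data.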
2. The target boundary recording method based on target detection according to scheme 1, wherein the original image data is time-series-based image data, and acquiring first boundary data of the first image data and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target comprises:
acquiring first detection content data based on time sequence from the first image data; the first detection content data at least comprises a target type of a target identified by the first image data, a current moment and first boundary data;
traversing the first detection content data based on time sequence to construct a type mapping dictionary; the type mapping dictionary comprises at least one time mapping dictionary, the time mapping dictionaries correspond to target types one by one, and the target types corresponding to any two time mapping dictionaries are different;
and storing the first boundary data in the first detection content data into a first boundary data storage file corresponding to the target type of the target based on the type mapping dictionary.
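A minimal sketch of this traversal and storage step, assuming each first detection content datum arrives as a (target type, current moment, first boundary data) tuple and that each type's storage file is simply named after the type; both conventions are assumptions of this sketch, not requirements of the scheme, and the key-value structure used here is spelled out in scheme 3 below:

```python
import json
from collections import defaultdict

def store_first_boundary_data(detection_records):
    """Traverse time-ordered first detection content data and build the
    type mapping dictionary: target type -> time mapping dictionary,
    where each time mapping dictionary maps a current moment to the
    first boundary data observed at that moment."""
    type_dict = defaultdict(dict)
    for target_type, moment, boundary in detection_records:
        type_dict[target_type][moment] = boundary

    # One first boundary data storage file per target type
    # (the file naming is an assumption of this sketch).
    for target_type, time_dict in type_dict.items():
        with open(f"{target_type}_boundaries.json", "w") as f:
            json.dump(time_dict, f, indent=2)
    return type_dict
```

As written, a moment that appears more than once keeps only its last boundary; scheme 4 below refines this case by averaging.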
3. The target boundary recording method based on target detection according to scheme 2, wherein traversing the first detection content data based on time sequence to construct a type mapping dictionary comprises:
traversing the first detection content data based on time sequence to construct a time mapping dictionary, wherein the time mapping dictionary comprises at least one time key value pair of the current time and first boundary data, and the current time and the first boundary data in the time key value pair are obtained from the same first detection content data;
and constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, wherein the type mapping dictionary comprises at least one target type and a type key value pair of the time mapping dictionary, and the target type and the current time and the first boundary data in the time mapping dictionary belong to the same first detection content data.
4. The target boundary recording method based on target detection according to scheme 3, wherein after constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, the method further comprises:
when detecting that a time key value pair corresponding to a plurality of first boundary data at the current moment exists in the time mapping dictionary, calculating to obtain an average value of the plurality of first boundary data;
determining a time key value pair of the current time and the average value based on the average value.
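Continuing the sketch above, the duplicate-moment case of scheme 4 might be handled by first accumulating every boundary observed at a moment into a list and then collapsing the list to its element-wise average; the list accumulation is an implementation assumption, while the averaging itself is what the scheme states:

```python
def average_duplicate_moments(time_dict):
    """Collapse moments that carry several first boundary data entries
    into a single time key-value pair holding their average.

    time_dict: moment -> list of (x, y, w, h) tuples.
    Returns:   moment -> one averaged (x, y, w, h) tuple.
    """
    averaged = {}
    for moment, boundaries in time_dict.items():
        n = len(boundaries)
        # Element-wise mean over all boundary tuples at this moment.
        averaged[moment] = tuple(sum(vals) / n for vals in zip(*boundaries))
    return averaged
```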
5. The target boundary recording method based on target detection according to any one of schemes 1 to 4, wherein the first boundary data comprises a horizontal coordinate value, a vertical coordinate value, an image width and an image height corresponding to the first image data.
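A single first boundary data record under scheme 5 might therefore be laid out as follows; the field names, and the reading of the width and height as the extent of the bounding box within the image, are interpretations made for this sketch:

```python
from dataclasses import dataclass

@dataclass
class FirstBoundaryData:
    x: float       # horizontal coordinate value of the bounding box
    y: float       # vertical coordinate value of the bounding box
    width: float   # image width (read here as the box's width)
    height: float  # image height (read here as the box's height)
```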
6. The target boundary recording method based on target detection according to any one of schemes 1 to 4, wherein acquiring second boundary data of the second image data and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target comprises:
acquiring a mask image from the second image data;
performing image processing on the mask image to obtain second boundary data;
and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
7. The target boundary recording method based on target detection according to scheme 6, wherein performing image processing on the mask image to obtain second boundary data comprises:
carrying out gray scale transformation on the mask image to obtain a gray scale mask image corresponding to the mask image;
calculating the gray mask image through a multi-level edge detection algorithm to obtain mask boundary data;
and carrying out binarization processing on the mask boundary data to obtain second boundary data.
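A minimal OpenCV sketch of this three-step pipeline; the Canny detector, itself a multi-stage algorithm, is used as an assumed stand-in for the unnamed multi-level edge detection algorithm, and all thresholds are assumptions:

```python
import cv2

def mask_to_second_boundary_data(mask_image):
    """Gray scale transformation -> multi-level edge detection ->
    binarization, yielding second boundary data from a mask image."""
    gray = cv2.cvtColor(mask_image, cv2.COLOR_BGR2GRAY)
    # Canny as an assumed concrete choice of edge detector.
    mask_boundary = cv2.Canny(gray, 50, 150)
    _, second_boundary = cv2.threshold(mask_boundary, 127, 255,
                                       cv2.THRESH_BINARY)
    return second_boundary
```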
8. A target boundary recording device based on target detection, comprising:
the detection unit is used for performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and to obtain second image data displaying a second preset form identifier at the target position;
the first acquisition unit is used for acquiring first boundary data of the first image data and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target;
and the second acquisition unit is used for acquiring second boundary data of the second image data and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
9. The target boundary recording device based on target detection according to scheme 8, wherein the original image data is time-series-based image data, and the first acquisition unit comprises:
a first acquisition subunit, configured to acquire time-series-based first detection content data from the first image data; the first detection content data at least comprises a target type of a target identified by the first image data, a current moment and first boundary data;
the construction subunit is used for traversing the first detection content data based on time sequence and constructing a type mapping dictionary; the type mapping dictionary comprises at least one time mapping dictionary, the time mapping dictionaries correspond to target types one by one, and the target types corresponding to any two time mapping dictionaries are different;
a first storage subunit, configured to store, based on the type mapping dictionary, the first boundary data in the first detected content data into a first boundary data storage file corresponding to a target type of the target.
10. The target boundary recording device based on target detection according to scheme 9, wherein the constructing subunit comprises:
the first construction module is used for traversing the first detection content data based on time sequence and constructing a time mapping dictionary, wherein the time mapping dictionary comprises at least one time key value pair of current time and first boundary data, and the current time and the first boundary data in the time key value pair are obtained from the same first detection content data;
and the second construction module is used for constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, wherein the type mapping dictionary comprises at least one type key value pair of a target type and the time mapping dictionary, and the target type, the current time in the time mapping dictionary and the first boundary data belong to the same first detection content data.
11. The target boundary recording device based on target detection according to scheme 10, wherein the device further comprises:
the calculation unit is used for calculating an average value of a plurality of first boundary data after the second construction module constructs a type mapping dictionary based on the first detection content data and the time mapping dictionary and when it is detected that the time mapping dictionary contains a time key value pair corresponding to a plurality of first boundary data at the current moment;
and the determining unit is used for determining the time key value pair of the current moment and the average value based on the average value.
12. The target boundary recording device based on target detection according to any one of schemes 8 to 11, wherein the first boundary data comprises a horizontal coordinate value, a vertical coordinate value, an image width and an image height corresponding to the first image data.
13. The target boundary recording device based on target detection according to any one of schemes 8 to 11, wherein the second acquisition unit comprises:
a second obtaining subunit, configured to obtain a mask image from the second image data;
the processing subunit is used for carrying out image processing on the mask image to obtain second boundary data;
and the second storage subunit is used for storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
14. The target boundary recording device based on target detection according to scheme 13, wherein the processing subunit comprises:
the conversion module is used for carrying out gray scale conversion on the mask image to obtain a gray scale mask image corresponding to the mask image;
the operation module is used for operating the gray mask image through a multi-level edge detection algorithm to obtain mask boundary data;
and the processing module is used for carrying out binarization processing on the mask boundary data to obtain second boundary data.
15. A clinical artificial intelligence assistance system that performs the target boundary recording method based on target detection according to any one of schemes 1-7.
16. A storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the target boundary recording method based on target detection according to any one of schemes 1-7.
17. A computing device comprising the storage medium according to scheme 16.

Claims (13)

1. A target boundary recording method based on target detection comprises the following steps:
performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and to obtain second image data displaying a second preset form identifier at the target position; wherein the first preset form is a bounding box; the first image data comprises first boundary data of the bounding box corresponding to the first preset form, and the first boundary data is used for determining the specific position of the bounding box in the original image data; the second preset form is a mask of an irregular shape; the second image data comprises second boundary data of the mask of the irregular shape corresponding to the second preset form, and the second boundary data is used for determining the shape and the specific position of the mask in the original image data;
acquiring first boundary data of the first image data, and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target;
acquiring second boundary data of the second image data, and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target;
the method for acquiring the original image data is time sequence-based image data, acquiring first boundary data of the first image data, and storing the first boundary data into a first boundary data storage file corresponding to a target type of the target includes:
acquiring first detection content data based on time sequence from the first image data; the first detection content data at least comprises a target type of a target identified by the first image data, a current moment and first boundary data;
traversing the first detection content data based on time sequence to construct a type mapping dictionary; the type mapping dictionary comprises at least one time mapping dictionary, the time mapping dictionaries correspond to target types one by one, and the target types corresponding to any two time mapping dictionaries are different;
storing the first boundary data in the first detection content data into a first boundary data storage file corresponding to a target type of the target based on the type mapping dictionary;
wherein traversing the first detection content data based on time sequence to construct a type mapping dictionary comprises:
traversing the first detection content data based on time sequence to construct a time mapping dictionary, wherein the time mapping dictionary comprises at least one time key value pair of the current time and first boundary data, and the current time and the first boundary data in the time key value pair are obtained from the same first detection content data;
and constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, wherein the type mapping dictionary comprises at least one target type and a type key value pair of the time mapping dictionary, and the target type and the current time and the first boundary data in the time mapping dictionary belong to the same first detection content data.
2. The target boundary recording method based on target detection according to claim 1, wherein after constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, the method further comprises:
when detecting that a time key value pair corresponding to a plurality of first boundary data at the current moment exists in the time mapping dictionary, calculating to obtain an average value of the plurality of first boundary data;
determining a time key value pair of the current time and the average value based on the average value.
3. The target boundary recording method based on target detection according to claim 1 or 2, wherein the first boundary data comprises a horizontal coordinate value, a vertical coordinate value, an image width and an image height corresponding to the first image data.
4. The target boundary recording method based on target detection according to claim 1 or 2, wherein acquiring second boundary data of the second image data and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target comprises:
acquiring a mask image from the second image data;
performing image processing on the mask image to obtain second boundary data;
and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
5. The target boundary recording method based on target detection according to claim 4, wherein performing image processing on the mask image to obtain second boundary data comprises:
carrying out gray scale transformation on the mask image to obtain a gray scale mask image corresponding to the mask image;
calculating the gray mask image through a multi-level edge detection algorithm to obtain mask boundary data;
and carrying out binarization processing on the mask boundary data to obtain second boundary data.
6. A target boundary recording device based on target detection, comprising:
the detection unit is used for performing target detection on original image data through an instance segmentation model to obtain first image data displaying a first preset form identifier at a target position in the original image data and to obtain second image data displaying a second preset form identifier at the target position; wherein the first preset form is a bounding box; the first image data comprises first boundary data of the bounding box corresponding to the first preset form, and the first boundary data is used for determining the specific position of the bounding box in the original image data; the second preset form is a mask of an irregular shape; the second image data comprises second boundary data of the mask of the irregular shape corresponding to the second preset form, and the second boundary data is used for determining the shape and the specific position of the mask in the original image data;
the first acquisition unit is used for acquiring first boundary data of the first image data and storing the first boundary data into a first boundary data storage file corresponding to the target type of the target;
the second acquisition unit is used for acquiring second boundary data of the second image data and storing the second boundary data into a second boundary data storage file corresponding to the target type of the target;
wherein the original image data is time-series-based image data, and the first acquiring unit includes:
a first acquisition subunit, configured to acquire time-series-based first detection content data from the first image data; the first detection content data at least comprises a target type of a target identified by the first image data, a current moment and first boundary data;
the construction subunit is used for traversing the first detection content data based on time sequence and constructing a type mapping dictionary; the type mapping dictionary comprises at least one time mapping dictionary, the time mapping dictionaries correspond to target types one by one, and the target types corresponding to any two time mapping dictionaries are different;
a first storage subunit, configured to store, based on the type mapping dictionary, the first boundary data in the first detected content data into a first boundary data storage file corresponding to a target type of the target;
wherein the building subunit comprises:
the first construction module is used for traversing the first detection content data based on time sequence and constructing a time mapping dictionary, wherein the time mapping dictionary comprises at least one time key value pair of current time and first boundary data, and the current time and the first boundary data in the time key value pair are obtained from the same first detection content data;
and the second construction module is used for constructing a type mapping dictionary based on the first detection content data and the time mapping dictionary, wherein the type mapping dictionary comprises at least one type key value pair of a target type and the time mapping dictionary, and the target type, the current time in the time mapping dictionary and the first boundary data belong to the same first detection content data.
7. The target boundary recording device based on target detection according to claim 6, wherein the device further comprises:
the calculation unit is used for calculating an average value of a plurality of first boundary data after the second construction module constructs a type mapping dictionary based on the first detection content data and the time mapping dictionary and when it is detected that the time mapping dictionary contains a time key value pair corresponding to a plurality of first boundary data at the current moment;
and the determining unit is used for determining the time key value pair of the current moment and the average value based on the average value.
8. The target boundary recording device based on target detection according to claim 6 or 7, wherein the first boundary data comprises a horizontal coordinate value, a vertical coordinate value, an image width and an image height corresponding to the first image data.
9. The target boundary recording device based on target detection according to claim 6 or 7, wherein the second acquisition unit comprises:
a second obtaining subunit, configured to obtain a mask image from the second image data;
the processing subunit is used for carrying out image processing on the mask image to obtain second boundary data;
and the second storage subunit is used for storing the second boundary data into a second boundary data storage file corresponding to the target type of the target.
10. The target boundary recording device based on target detection according to claim 9, wherein the processing subunit comprises:
the conversion module is used for carrying out gray scale conversion on the mask image to obtain a gray scale mask image corresponding to the mask image;
the operation module is used for operating the gray mask image through a multi-level edge detection algorithm to obtain mask boundary data;
and the processing module is used for carrying out binarization processing on the mask boundary data to obtain second boundary data.
11. A clinical artificial intelligence assistance system that performs the target boundary recording method based on target detection according to any one of claims 1-5.
12. A storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the target boundary recording method based on target detection according to any one of claims 1-5.
13. A computing device comprising the storage medium of claim 12.
CN202110658862.3A 2021-06-15 2021-06-15 Target boundary recording method and device based on target detection and computing equipment Active CN113343999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110658862.3A CN113343999B (en) 2021-06-15 2021-06-15 Target boundary recording method and device based on target detection and computing equipment

Publications (2)

Publication Number Publication Date
CN113343999A CN113343999A (en) 2021-09-03
CN113343999B (en) 2022-04-08

Family

ID=77477004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110658862.3A Active CN113343999B (en) 2021-06-15 2021-06-15 Target boundary recording method and device based on target detection and computing equipment

Country Status (1)

Country Link
CN (1) CN113343999B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359208A * 2018-09-13 2019-02-19 郑津 A distributed method and system for precise and lossless labeling of image instances
CN109872333A * 2019-02-20 2019-06-11 腾讯科技(深圳)有限公司 Medical image segmentation method, device, computer equipment and storage medium
CN109993733A * 2019-03-27 2019-07-09 上海宽带技术及应用工程研究中心 Detection method, system, storage medium, terminal and display system for pulmonary lesions

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104679481B (en) * 2013-11-27 2020-04-28 上海芯豪微电子有限公司 Instruction set conversion system and method
CN107742536B (en) * 2017-10-16 2021-04-06 成都黑杉科技有限公司 Information processing method and device
US10817739B2 (en) * 2019-01-31 2020-10-27 Adobe Inc. Content-aware selection
CN110288019A (en) * 2019-06-21 2019-09-27 北京百度网讯科技有限公司 Image labeling method, device and storage medium
CN110675940A (en) * 2019-08-01 2020-01-10 平安科技(深圳)有限公司 Pathological image labeling method and device, computer equipment and storage medium
CN110837811B (en) * 2019-11-12 2021-01-05 腾讯科技(深圳)有限公司 Method, device and equipment for generating semantic segmentation network structure and storage medium
US11416971B2 (en) * 2019-12-02 2022-08-16 Aizo Systems LLC Artificial intelligence based image quality assessment system
AU2021100350A4 (en) * 2021-01-20 2021-04-15 Wuhan University Method for Predicting Reclamation Potential of Homestead
CN112434684B (en) * 2021-01-27 2021-04-27 萱闱(北京)生物科技有限公司 Image display method, medium, device and computing equipment based on target detection
CN112925938A (en) * 2021-01-28 2021-06-08 上海商汤智能科技有限公司 Image annotation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113343999A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
US10984556B2 (en) Method and apparatus for calibrating relative parameters of collector, device and storage medium
CN107832662B (en) Method and system for acquiring image annotation data
CN111145213A (en) Target tracking method, device and system and computer readable storage medium
US11106933B2 (en) Method, device and system for processing image tagging information
US11403766B2 (en) Method and device for labeling point of interest
CN113240718A (en) Multi-target identification and tracking method, system, medium and computing device
CN113361643A (en) Deep learning-based universal mark identification method, system, equipment and storage medium
CN111124863B (en) Intelligent device performance testing method and device and intelligent device
CN112434684B (en) Image display method, medium, device and computing equipment based on target detection
CN112420150B (en) Medical image report processing method and device, storage medium and electronic equipment
CN113343999B (en) Target boundary recording method and device based on target detection and computing equipment
CN112116585B (en) Image removal tampering blind detection method, system, device and storage medium
CN113823419B (en) Operation process recording method, device, medium and computing equipment
CN113689939B (en) Image storage method, system and computing device for image feature matching
CN113902983B (en) Laparoscopic surgery tissue and organ identification method and device based on target detection model
CN113345046B (en) Movement track recording method, device, medium and computing equipment of operating equipment
CN113887545A (en) Laparoscopic surgical instrument identification method and device based on target detection model
CN113361391A (en) Data augmentation method, system, medium, and computing device based on deep learning
CN114359383A (en) Image positioning method, device, equipment and storage medium
CN113129340B (en) Motion trajectory analysis method and device for operating equipment, medium and computing equipment
CN112559340A (en) Picture testing method, device, equipment and storage medium
CN112559342A (en) Method, device and equipment for acquiring picture test image and storage medium
CN114140408A (en) Image processing method, device, equipment and storage medium
CN111124862A (en) Intelligent equipment performance testing method and device and intelligent equipment
CN112232431A (en) Watermark detection model training method, watermark detection method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100006 office room 787, 7 / F, block 2, xindong'an office building, 138 Wangfujing Street, Dongcheng District, Beijing

Patentee after: Xuanwei (Beijing) Biotechnology Co.,Ltd.

Patentee after: Henan Xuanwei Digital Medical Technology Co.,Ltd.

Address before: 100006 office room 787, 7 / F, block 2, xindong'an office building, 138 Wangfujing Street, Dongcheng District, Beijing

Patentee before: Xuanwei (Beijing) Biotechnology Co.,Ltd.

Patentee before: Henan Xuan Yongtang Medical Information Technology Co.,Ltd.