CN115331008A - End-to-end target detection method based on target probability density graph - Google Patents
End-to-end target detection method based on target probability density graph
- Publication number
- CN115331008A CN115331008A CN202210987120.XA CN202210987120A CN115331008A CN 115331008 A CN115331008 A CN 115331008A CN 202210987120 A CN202210987120 A CN 202210987120A CN 115331008 A CN115331008 A CN 115331008A
- Authority
- CN
- China
- Prior art keywords
- target
- probability density
- image
- density map
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an end-to-end target detection method based on a target probability density map, which comprises the following steps: generating a target probability density map of an image to be detected; obtaining the center point and the width and height of a small-target area in the image to be detected based on the target probability density map; cropping the small-target area through an affine transformation based on its center point, width and height; and performing target detection on the small-target area and outputting the type and position of each target. Because the target area is cropped by an affine transformation, the neural network can be trained end to end, which greatly simplifies both the training procedure and the network structure; an end-to-end network maps the original input directly to the final output, giving the model more room to adjust itself to the data and improving its overall fit. The invention reduces the miss rate and false-detection rate for small targets without reducing the detection accuracy for large targets.
Description
Technical Field
The invention relates to the technical field of target detection, and in particular to an end-to-end target detection method based on a target probability density map.
Background
In recent years, object detection in computer vision has been applied in more and more engineering and academic fields and has therefore developed rapidly. Typically the detected targets are large targets clearly visible to the naked eye, but as application scenarios grow richer and more diverse, small-target detection becomes increasingly important. The field defines small targets in two main ways: by relative size, e.g. a target whose length and width are no more than 0.1 times those of the image may be considered a small target; or by absolute size, where a target smaller than 32 x 32 pixels is defined as a small target.
Currently, target detection is mostly based on data-driven neural network methods: a large number of target detection labels are first produced, the labels are then used as ground truth to train a neural network, and finally the trained network detects targets in a specific scene. Many neural-network-based target detection methods can detect large targets quickly and effectively, but small targets are often missed or falsely detected, so developing detection methods for small targets has become very important.
The main idea of existing neural-network-based small-target detection methods is to turn a small target into a large one, mainly in two ways. One makes the target relatively larger, usually by dividing the image into many sub-images so that the target becomes large within each sub-image, and detecting each sub-image with a trained neural network. The other enhances the features of small targets until they are as distinctive as those of large targets: because scaling, pooling and similar operations inside the neural network gradually dilute the features of a small target, many researchers extract small-target features at every scale, e.g. with feature pyramids, and reinforce them through feature fusion.
The former approach itself comes in two variants. The first uniformly partitions the image; its main disadvantage is that it consumes a large amount of computing resources, amounting to a coarse whole-image search. The second first roughly locates the target area, then crops that area and sends it to a target detector; this splits the training of the neural network into stages and cannot achieve end-to-end training. In addition, what feature-enhancement methods can do for small-target detection is very limited.
Disclosure of Invention
In view of this, the present invention provides an end-to-end target detection method based on a target probability density map, so as to reduce the miss rate and false-detection rate for small targets without reducing the detection accuracy for large targets.
The invention discloses an end-to-end target detection method based on a target probability density map, which comprises the following steps:
Step 1: generating a target probability density map of an image to be detected;
Step 2: obtaining the center point and the width and height of a small-target area in the image to be detected based on the target probability density map;
Step 3: cropping the small-target area through an affine transformation based on its center point, width and height;
Step 4: performing target detection on the small-target area and outputting the type and position of each target, as sketched in the pipeline below.
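For orientation, the following minimal Python sketch shows how the four steps might compose into one pipeline. Every callable here (density_net, locate_regions, affine_crop, detector) is a hypothetical placeholder rather than the patent's reference implementation; possible realizations of the individual steps are sketched later in the description.

```python
# Hypothetical glue code for steps 1-4; all passed-in callables are
# placeholders for the components described elsewhere in this document.

def detect_end_to_end(image, density_net, locate_regions, affine_crop, detector, thr=0.1):
    """Compose steps 1-4 on a single image; returns [(cls, box), ...]."""
    density = density_net(image)                 # step 1: probability density map
    regions = locate_regions(density, thr)       # step 2: [(xc, yc, w, h), ...]
    detections = list(detector(image))           # upper branch: full-image detection
    for (xc, yc, w, h) in regions:
        crop = affine_crop(image, xc, yc, w, h)  # step 3: differentiable crop
        for cls, (bx, by, bw, bh) in detector(crop):  # step 4: detect in crop
            # map the crop-local box back to full-image coordinates
            detections.append((cls, (bx + xc - w / 2, by + yc - h / 2, bw, bh)))
    return detections
```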
Further, if no small target exists in the image to be detected in step 2, a new image is input as the image to be detected and steps 1 to 2 are executed again until a small target exists in the image to be detected;
step 4 further comprises:
performing target detection on the whole image to be detected with a single-stage detection network, and outputting the type and position of each detected target.
Further, the target probability density map is

$$D(x) = \sum_{k=1}^{N} \delta(x - x_k) * G_{\sigma_i}(x),$$

wherein σ_i is the standard deviation of the Gaussian function, x_k is the center of the k-th target, δ(x - x_k) represents a density point, G_{σ_i}(x) denotes the Gaussian kernel, and convolving δ(x - x_k) with G_{σ_i}(x) yields the target probability density map D(x); k = 1, 2, …, N, where N is the number of targets; i and j respectively denote the i-th class and the j-th target within the i-th class, H_i and W_i respectively denote the average height and width of targets of the i-th class, h_ij and w_ij respectively denote the height and width of the j-th target of the i-th class, and η balances the contribution of the class's overall size and the individual size to the filter parameter σ.
Further, step 1 penalizes the background area in the image to be detected through a loss function Loss_density, wherein i = 1, 2, …, N denotes the index of the input image, j = 0, 1, …, M_i denotes the index of the background pixels of the i-th input image, D(X_i; Θ) denotes the target probability density map generated for the input X_i, Θ denotes the parameters of the target probability density map generation model, D_i(j) = 0 indicates that the j-th pixel of the generated i-th target probability density map is a background point, β is the penalty coefficient, and W and H denote the width and height of the target probability density map, so the more background points there are, the larger the penalty coefficient β.
Further, step 2 comprises:
step 21: performing a non-overlapping traversal of the target probability density map with a sliding window, comparing the mean pixel value within the sliding window with a preset threshold, and assigning values to the pixels in the window based on the comparison result;
step 22: clustering the assigned target probability density map;
step 23: calculating the center of gravity of each clustered target area by the barycenter method.
Further, step 21 includes:
setting the size of the sliding window to the average of all target sizes during training, comparing the mean pixel value m within each sliding window with a preset threshold, and, if m is smaller than the threshold, setting all pixel values in the window to 0, otherwise setting them to m.
Further, step 22 includes:
clustering the non-zero regions of the thresholded target probability density map with the DBSCAN algorithm, wherein, if the sliding window size is w_s × h_s, the cluster scan radius is 2 × max(w_s, h_s), and each cluster obtained after clustering represents one target area to be segmented;
step 23 includes:
calculating the center of gravity (x_c, y_c) of each clustered target area R by the barycenter method.
Further, the center of gravity (x_c, y_c) is calculated as

$$x_c = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} j \cdot d_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} d_{ij}}, \qquad y_c = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} i \cdot d_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} d_{ij}},$$

where d_ij denotes the probability density value at position (i, j) of R, i.e. a pixel value of the target probability density map, and w × h is the size of the target area R. The center of gravity (x_c, y_c) becomes the center of the new area, finally yielding the area to be cropped R = (x_c, y_c, w, h), whose parameters are differentiable with respect to d_ij.
Further, step 3 comprises:
cropping the target area by an affine transformation, where the coordinates in the area R to be cropped are denoted (x_o, y_o) and the coordinates in the cropped target area are denoted (x_t, y_t). For a crop at native resolution, with x_t = 0, 1, …, w - 1 and y_t = 0, 1, …, h - 1, the affine transformation reduces to the translation

$$\begin{pmatrix} x_o \\ y_o \end{pmatrix} = \begin{pmatrix} 1 & 0 & x_c - w/2 \\ 0 & 1 & y_c - h/2 \end{pmatrix} \begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix}.$$

In the affine transformation only (x_o, y_o) is unknown; the target-area coordinates and the transformation matrix are known, so an analytic relation between the area to be cropped on the target probability density map and the cropped target area can be established, keeping the training process continuous.
Further, (x_t, y_t) are all integer coordinates, while the inverse-transformed (x_o, y_o) are not necessarily all integers; in that case the pixel values corresponding to the inverse-transformed coordinates in the image to be detected are obtained through bilinear interpolation.
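Bilinear interpolation at a non-integer coordinate can be sketched in a few lines of NumPy; the function below is an illustrative helper (its name and the 2-D array layout are assumptions), not code from the patent.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a 2-D array img at the non-integer location (x, y)."""
    h, w = img.shape[:2]
    x0f, y0f = np.floor(x), np.floor(y)
    dx, dy = x - x0f, y - y0f                    # fractional offsets in [0, 1)
    x0 = int(np.clip(x0f, 0, w - 1)); x1 = int(np.clip(x0f + 1, 0, w - 1))
    y0 = int(np.clip(y0f, 0, h - 1)); y1 = int(np.clip(y0f + 1, 0, h - 1))
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bottom = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bottom
```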
Due to the adoption of the above technical scheme, the invention has the following advantages: the invention provides a small-target detection method that locates target areas from a target probability density map, and it reduces the miss rate and false-detection rate for small targets without reducing the detection accuracy for large targets; because the target area is cropped by an affine transformation, the neural network can be trained end to end, which greatly simplifies both the training procedure and the network structure, and an end-to-end network maps the original input directly to the final output, giving the model more room to adjust itself to the data and improving its overall fit; the invention can be connected seamlessly with other neural networks, including different detection networks and networks for other tasks such as target recognition and classification; and the invention finds application scenarios in military, civil and other fields.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a small target detection method according to an embodiment of the present invention;
FIG. 2 (a) is a schematic diagram of an original input image according to an embodiment of the present invention;
FIG. 2 (b) is a schematic representation of the target probability density map corresponding to FIG. 2 (a);
FIG. 3 is a schematic diagram illustrating a target area positioning process according to an embodiment of the present invention;
FIG. 4 (a) is a schematic diagram illustrating the effect of FIG. 2 (b) after threshold processing;
FIG. 4 (b) is a schematic diagram illustrating the clustering effect of FIG. 4 (a);
fig. 5 is a schematic flowchart of an end-to-end target detection method based on a target probability density map according to an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and embodiments. It should be understood that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
The invention provides an embodiment of an end-to-end target detection method based on a target probability density map, comprising the steps of target probability density map generation, target area localization, target area cropping, and target detection. The flow chart is shown in Fig. 1.
Generating a target probability density map: a target probability density map of the input image is generated; the brighter a location in the map, the higher the probability that a small target is present there. As shown in Fig. 2(a), the left image is the original input image; the generated target probability density map is shown in Fig. 2(b), where brighter regions indicate a higher probability that a small target (a person) is present.
Target area localization: based on the generated target probability density map, the small-target area is located through threshold filtering and clustering, giving the center point and the width and height of the small-target area.
Target area cropping: based on the located small-target area, the area is cropped out through an affine transformation and sent to a target detector.
Target detection: target detection is performed on the input small-target area, and the type and position of each detected target are output.
S1, generating a target probability density graph
Generating the target probability density map first requires training a network; the approach borrows from CSRNet, which was originally used to generate crowd-distribution density maps. The CSRNet network is divided into a front-end network and a back-end network: the front end adopts VGG-16 with the fully-connected layers removed, because simply stacking further convolution and pooling layers would shrink the output and make it difficult to generate a usable density map. CSRNet therefore adopts a dilated convolutional neural network as the back end, which enlarges the receptive field while preserving resolution and generates a high-quality crowd-distribution density map.
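For reference, a CSRNet-style generator can be sketched in PyTorch as below. The layer choices follow the published CSRNet design (first ten VGG-16 conv layers as front end, six dilated 3x3 conv layers plus a 1x1 output conv as back end), but details such as the pretrained-weights argument assume torchvision >= 0.13 and should be treated as illustrative.

```python
import torch.nn as nn
from torchvision.models import vgg16

class DensityNet(nn.Module):
    """CSRNet-style density-map generator: VGG-16 front end + dilated back end."""
    def __init__(self):
        super().__init__()
        # Front end: the first 10 conv layers of VGG-16 (through conv4_3);
        # the fully-connected layers are removed; output is 1/8 resolution.
        self.frontend = nn.Sequential(*list(vgg16(weights="DEFAULT").features)[:23])
        def dilated(cin, cout):
            # 3x3 conv with dilation 2: larger receptive field, same resolution.
            return [nn.Conv2d(cin, cout, 3, padding=2, dilation=2),
                    nn.ReLU(inplace=True)]
        self.backend = nn.Sequential(
            *dilated(512, 512), *dilated(512, 512), *dilated(512, 512),
            *dilated(512, 256), *dilated(256, 128), *dilated(128, 64),
            nn.Conv2d(64, 1, 1),  # 1x1 conv -> single-channel density map
        )

    def forward(self, x):
        return self.backend(self.frontend(x))
```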
During training, a Gaussian blur is applied to each density point of the raw density map to improve the robustness of the predicted targets:

$$D(x) = \sum_{k=1}^{N} \delta(x - x_k) * G_{\sigma}(x), \qquad k = 1, 2, \ldots, N,$$

where x_k is the center of the k-th target, δ(x - x_k) represents a density point, G_σ(x) denotes the Gaussian kernel, and convolving δ(x - x_k) with G_σ(x) yields the probability density map D(x); N is the number of targets.
The value of σ changes dynamically with the target size: σ for the j-th target of the i-th class is obtained by balancing the class-average size, given by the average height H_i and width W_i of the class, against the individual size, given by the height h_ij and width w_ij of the target, with the weight η controlling the two contributions; experiments show that η = 0.7 is suitable. In this way, when a target is small, the overall size of its class enlarges the Gaussian-filtered region of the small target, and when a target is large, the filtered region is relatively reduced, providing a balancing effect. Through this dynamic variation of the filter kernel size, larger values in the filtered probability density map indicate a higher probability that a small target is present.
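A minimal NumPy/SciPy sketch of this ground-truth generation follows. Since the patent's exact σ formula is not reproduced here, the convex combination (and its scaling) used for σ below is an assumption that merely matches the balancing behavior described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_density_map(shape, targets, class_avg_hw, eta=0.7):
    """shape: (H_img, W_img); targets: [(cls, cx, cy, h, w)];
    class_avg_hw: {cls: (H_i, W_i)} average size per class."""
    density = np.zeros(shape, dtype=np.float32)
    for cls, cx, cy, h, w in targets:
        H, W = class_avg_hw[cls]
        # Assumed form: eta balances class-average vs. individual size;
        # the /4 scaling is illustrative, not the patent's value.
        sigma = (eta * (H + W) + (1.0 - eta) * (h + w)) / 4.0
        delta = np.zeros(shape, dtype=np.float32)
        delta[int(cy), int(cx)] = 1.0                # density point delta(x - x_k)
        density += gaussian_filter(delta, sigma)     # convolution with Gaussian kernel
    return density
```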
CSRNet addresses the problem of dense target counting, whereas this method addresses small-target detection and therefore faces images whose area is overwhelmingly target-free background; the loss function Loss_density consequently penalizes the background area. In it, i = 1, 2, …, N denotes the index of the input image, j = 0, 1, …, M_i denotes the index of the background pixels of the i-th input image, D(X_i; Θ) denotes the probability density map generated for the input X_i, Θ denotes the parameters of the probability-density-map generation model, and D_i(j) = 0 indicates that the j-th pixel of the generated i-th probability density map is a background point. β is the penalty coefficient, and W and H denote the width and height of the probability density map, so the more background points there are, the larger the penalty coefficient β.
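The following PyTorch sketch shows one way such a background-penalized loss could look. The mean-squared-error term is standard for density-map regression, but the penalty term and the rule that β grows with the background fraction are assumptions paraphrased from the text, not the patent's verbatim formula.

```python
import torch

def density_loss(pred, gt):
    """pred, gt: (N, 1, H, W) predicted / ground-truth density maps."""
    mse = torch.mean((pred - gt) ** 2)          # standard density-regression term
    bg = (gt == 0).float()                      # D_i(j) == 0 marks background pixels
    beta = bg.mean()                            # more background -> larger weight (assumed)
    penalty = torch.mean(bg * pred.abs())       # suppress predictions on background
    return mse + beta * penalty
```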
S2, positioning the target area
This step performs target area localization based on the target probability density map generated in S1. As can be seen from Fig. 2(b), brighter regions of the target probability density map indicate a higher probability that a target is present. To further reduce the influence of background and noise on target detection, the map is processed as follows to obtain the target areas. First, a sliding window traverses the density map without overlap; the window size is the average of all target sizes seen during training. The mean pixel value m within each window is compared with a preset threshold: if m is smaller than the threshold, all pixel values in the window are set to 0, otherwise they are set to m. Then the non-zero regions of the thresholded density map are clustered with the DBSCAN algorithm; if the sliding window size is w_s × h_s, the cluster scan radius is 2 × max(w_s, h_s), and each cluster obtained represents one target area to be segmented. Finally, the center of gravity (x_c, y_c) of each clustered target area R is computed by the barycenter method:
$$x_c = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} j \cdot d_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} d_{ij}}, \qquad y_c = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} i \cdot d_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} d_{ij}},$$

where d_ij denotes the probability density value at position (i, j) of R, i.e. a pixel value of the target probability density map, and w × h is the size of the target area R. The center of gravity (x_c, y_c) becomes the center of the new area, finally yielding the area to be cropped R = (x_c, y_c, w, h), whose parameters are differentiable with respect to d_ij. Target area localization is illustrated in Fig. 3, and the localization effect is shown in Fig. 4(a) and Fig. 4(b).
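The localization step can be sketched compactly in Python with scikit-learn's DBSCAN. The window size, threshold, and the way each cluster's extent is turned into a width and height are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def locate_regions(density, ws, hs, threshold):
    """Return [(xc, yc, w, h), ...] for each clustered target area."""
    d = density.astype(np.float32)
    # Non-overlapping sliding-window thresholding (edge remainders skipped).
    for y in range(0, d.shape[0] - hs + 1, hs):
        for x in range(0, d.shape[1] - ws + 1, ws):
            win = d[y:y + hs, x:x + ws]
            m = win.mean()
            win[...] = 0.0 if m < threshold else m
    ys, xs = np.nonzero(d)
    if xs.size == 0:
        return []
    pts = np.stack([xs, ys], axis=1)
    # DBSCAN with the scan radius 2 * max(ws, hs) from the text.
    labels = DBSCAN(eps=2 * max(ws, hs), min_samples=1).fit_predict(pts)
    regions = []
    for lab in np.unique(labels):
        sel = pts[labels == lab]
        w = int(np.ptp(sel[:, 0])) + ws          # assumed: cluster extent + window
        h = int(np.ptp(sel[:, 1])) + hs
        wgt = d[sel[:, 1], sel[:, 0]]
        xc = float((sel[:, 0] * wgt).sum() / wgt.sum())   # barycenter
        yc = float((sel[:, 1] * wgt).sum() / wgt.sum())
        regions.append((xc, yc, w, h))
    return regions
```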
S3, intercepting a target area
The target area obtained in S2 is now cropped out and sent to a target detector for detection. Traditional region cropping uses a template (mask), which makes the training of the whole network non-differentiable, so this method crops the target area with an affine transformation. The coordinates in the area R to be cropped are denoted (x_o, y_o), and the coordinates in the cropped target area are denoted (x_t, y_t); for a crop at native resolution, with x_t = 0, 1, …, w - 1 and y_t = 0, 1, …, h - 1, the affine transformation reduces to the translation

$$\begin{pmatrix} x_o \\ y_o \end{pmatrix} = \begin{pmatrix} 1 & 0 & x_c - w/2 \\ 0 & 1 & y_c - h/2 \end{pmatrix} \begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix}.$$

In the affine transformation only (x_o, y_o) is unknown; an analytic relation between the area to be cropped and the cropped target area can therefore be established, keeping the training process continuous. Since (x_t, y_t) are integer coordinates while the resulting (x_o, y_o) are not necessarily all integers, the pixel values at the transformed coordinates of the original image are obtained by bilinear interpolation.
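In a deep-learning framework this differentiable crop is commonly realized with an affine sampling grid; the PyTorch sketch below combines the coordinate mapping and the bilinear interpolation in one differentiable step. The normalized, translation-only θ is an assumption consistent with the transformation above.

```python
import torch
import torch.nn.functional as F

def affine_crop(image, xc, yc, w, h):
    """image: (1, C, H, W) tensor; returns the (1, C, h, w) crop centred on (xc, yc)."""
    _, c, H, W = image.shape
    # affine_grid works in [-1, 1] normalized coordinates: scale by the crop
    # size and translate to the (normalized) crop center.
    theta = torch.tensor([[[w / W, 0.0, 2.0 * xc / W - 1.0],
                           [0.0, h / H, 2.0 * yc / H - 1.0]]],
                         dtype=image.dtype, device=image.device)
    grid = F.affine_grid(theta, size=(1, c, int(h), int(w)), align_corners=False)
    # grid_sample performs the bilinear interpolation and stays differentiable.
    return F.grid_sample(image, grid, mode="bilinear", align_corners=False)
```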
S4, target detection
Target detection is divided into two branches, as shown in Fig. 5. The upper branch applies a single-stage detection network to the original input image; it tends to miss or falsely detect some small targets, but obtains good results on non-small targets. The lower branch is dedicated to small targets and applies the same single-stage detection network to the cropped areas. Finally, the two detection results are combined to obtain the final detection result.
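The patent states only that the two branch outputs are combined; the sketch below shows one plausible merge, mapping crop-branch boxes back to full-image coordinates and applying class-agnostic NMS over the union. All names and the NMS choice are assumptions.

```python
import torch
from torchvision.ops import nms

def merge_detections(full_dets, crop_dets, regions, iou_thresh=0.5):
    """Each *_dets entry is (boxes Nx4 in xyxy, scores N, labels N)."""
    boxes, scores, labels = [full_dets[0]], [full_dets[1]], [full_dets[2]]
    for (xc, yc, w, h), (b, s, l) in zip(regions, crop_dets):
        # Shift crop-local boxes by the crop's top-left corner.
        offset = torch.tensor([xc - w / 2, yc - h / 2] * 2, dtype=b.dtype)
        boxes.append(b + offset)
        scores.append(s)
        labels.append(l)
    boxes = torch.cat(boxes); scores = torch.cat(scores); labels = torch.cat(labels)
    keep = nms(boxes, scores, iou_thresh)   # class-agnostic NMS over the union
    return boxes[keep], scores[keep], labels[keep]
```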
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalents may be made to the embodiments without departing from the spirit and scope of the invention, which is to be covered by the claims.
Claims (10)
1. An end-to-end target detection method based on a target probability density graph, characterized by comprising the following steps:
step 1: generating a target probability density map of an image to be detected;
step 2: obtaining the center point and the width and height of a small-target area in the image to be detected based on the target probability density map;
step 3: cropping the small-target area through an affine transformation based on its center point, width and height;
step 4: performing target detection on the small-target area, and outputting the type and position of each target.
2. The method according to claim 1, wherein if no small target exists in the image to be detected in step 2, a new image is input as the image to be detected, and steps 1 to 2 are executed again until a small target exists in the image to be detected;
step 4 further comprises:
performing target detection on the whole image to be detected with a single-stage detection network, and outputting the type and position of each detected target.
3. The method of claim 1, wherein the target probability density map is

$$D(x) = \sum_{k=1}^{N} \delta(x - x_k) * G_{\sigma_i}(x),$$

wherein σ_i is the standard deviation of the Gaussian function, x_k is the center of the k-th target, δ(x - x_k) represents a density point, G_{σ_i}(x) denotes the Gaussian kernel, and convolving δ(x - x_k) with G_{σ_i}(x) yields the target probability density map D(x); k = 1, 2, …, N, where N is the number of targets; i and j respectively denote the i-th class and the j-th target within the i-th class, H_i and W_i respectively denote the average height and width of targets of the i-th class, h_ij and w_ij respectively denote the height and width of the j-th target of the i-th class, and η balances the contribution of the class's overall size and the individual size to the filter parameter σ.
4. The method according to claim 3, wherein step 1 penalizes the background area in the image to be detected through a loss function Loss_density, wherein i = 1, 2, …, N denotes the index of the input image, j = 0, 1, …, M_i denotes the index of the background pixels of the i-th input image, D(X_i; Θ) denotes the target probability density map generated for the input X_i, Θ denotes the parameters of the target probability density map generation model, D_i(j) = 0 indicates that the j-th pixel of the generated i-th target probability density map is a background point, β is the penalty coefficient, and W and H denote the width and height of the target probability density map, so the more background points there are, the larger the penalty coefficient β.
5. The method of claim 1, wherein the step 2 comprises:
step 21: performing a non-overlapping traversal of the target probability density map with a sliding window, comparing the mean pixel value within the sliding window with a preset threshold, and assigning values to the pixels in the window based on the comparison result;
step 22: clustering the assigned target probability density map;
step 23: calculating the center of gravity of each clustered target area by the barycenter method.
6. The method according to claim 5, wherein said step 21 comprises:
during training, the size of the sliding window is set to the average of all target sizes; the mean pixel value m within each sliding window is compared with a preset threshold, and if m is smaller than the threshold, all pixel values in the sliding window are set to 0, otherwise they are set to m.
7. The method of claim 6, wherein the step 22 comprises:
clustering the non-zero regions of the thresholded target probability density map with the DBSCAN algorithm, wherein, if the sliding window size is w_s × h_s, the cluster scan radius is 2 × max(w_s, h_s), and each cluster obtained after clustering represents one target area to be segmented;
the step 23 includes:
calculating the center of gravity (x_c, y_c) of each clustered target area R by the barycenter method.
8. The method according to claim 7, wherein the center of gravity (x_c, y_c) is calculated as

$$x_c = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} j \cdot d_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} d_{ij}}, \qquad y_c = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} i \cdot d_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} d_{ij}},$$

wherein d_ij denotes the probability density value at position (i, j) of R, i.e. a pixel value of the target probability density map, and w × h is the size of the target area R; the center of gravity (x_c, y_c) becomes the center of the new area, finally yielding the area to be cropped R = (x_c, y_c, w, h), whose parameters are differentiable with respect to d_ij.
9. The method of claim 1, wherein step 3 comprises:
cropping the target area by an affine transformation, wherein the coordinates in the area R to be cropped are denoted (x_o, y_o) and the coordinates in the cropped target area are denoted (x_t, y_t), and the affine transformation is

$$\begin{pmatrix} x_o \\ y_o \end{pmatrix} = \begin{pmatrix} 1 & 0 & x_c - w/2 \\ 0 & 1 & y_c - h/2 \end{pmatrix} \begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix}, \qquad x_t = 0, 1, \ldots, w-1, \quad y_t = 0, 1, \ldots, h-1;$$

in the affine transformation only (x_o, y_o) is unknown, the target-area coordinates and the transformation matrix being known; an analytic relation between the area to be cropped on the target probability density map and the cropped target area can be established through the affine transformation, so that the training process remains continuous.
10. The method of claim 9, wherein (x_t, y_t) are all integer coordinates while the inverse-transformed (x_o, y_o) are not necessarily all integers, and the pixel values corresponding to the inverse-transformed coordinates in the image to be detected are obtained through bilinear interpolation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210987120.XA CN115331008A (en) | 2022-08-17 | 2022-08-17 | End-to-end target detection method based on target probability density graph |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210987120.XA CN115331008A (en) | 2022-08-17 | 2022-08-17 | End-to-end target detection method based on target probability density graph |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115331008A true CN115331008A (en) | 2022-11-11 |
Family
ID=83923802
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210987120.XA Pending CN115331008A (en) | 2022-08-17 | 2022-08-17 | End-to-end target detection method based on target probability density graph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115331008A (en) |
- 2022
- 2022-08-17: CN application CN202210987120.XA, publication CN115331008A, status active (Pending)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116681701A (en) * | 2023-08-02 | 2023-09-01 | 青岛市妇女儿童医院(青岛市妇幼保健院、青岛市残疾儿童医疗康复中心、青岛市新生儿疾病筛查中心) | Children lung ultrasonic image processing method |
CN116681701B (en) * | 2023-08-02 | 2023-11-03 | 青岛市妇女儿童医院(青岛市妇幼保健院、青岛市残疾儿童医疗康复中心、青岛市新生儿疾病筛查中心) | Children lung ultrasonic image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |