CN111985346A - Local occlusion target tracking method based on structured sparse features - Google Patents

Local occlusion target tracking method based on structured sparse features

Info

Publication number
CN111985346A
Authority
CN
China
Prior art keywords
target
structured
sparse
tracking
weighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010733321.8A
Other languages
Chinese (zh)
Inventor
王堃
王铭宇
吴晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Star Innovation Technology Co ltd
Original Assignee
Chengdu Star Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Star Innovation Technology Co ltd filed Critical Chengdu Star Innovation Technology Co ltd
Priority to CN202010733321.8A priority Critical patent/CN111985346A/en
Publication of CN111985346A publication Critical patent/CN111985346A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes of sport video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a locally occluded target tracking method based on structured sparse features, and belongs to the field of image processing. The method aims to solve the difficulty that partial occlusion poses for target tracking. The invention comprises the following steps: first, constructing a structured sparse feature image template dictionary; second, structured sparse group weighting; then, extracting the corresponding sparse features; and finally, tracking the occluded target. From the structured blocks, the method determines the position of the occluded target, extracts the corresponding sparse features, and identifies the tracked target, thereby realizing tracking of the target.

Description

Local occlusion target tracking method based on structured sparse features
Technical Field
The invention relates to the field of image processing, and in particular to a local occlusion target tracking method based on structured sparse features.
Background
Tracking a locally occluded target using structured sparse features is a task in the field of computer vision. The aim of video target tracking is to dynamically estimate the position at which a target appears in a video sequence, given its initial position; equivalently, tracking can be viewed as searching the video sequence for the candidate image block most similar to the target template.
Currently, the difficulties in the field of target tracking fall mainly into three categories: illumination, occlusion, and noise. Conventional target tracking methods include the support vector tracking method, the ensemble tracking method, and incremental-learning-based target tracking. Once trained, the support vector tracking method remains fixed and cannot adapt to new changes during target motion. The ensemble tracking method is susceptible to background and noise interference. The incremental-learning-based method, in turn, handles illumination changes poorly.
Disclosure of Invention
The invention aims to provide a local occlusion target tracking method based on structured sparse features, solving the difficulty that partial occlusion poses for target tracking.
To solve this technical problem, the invention adopts the following technical scheme: the local occlusion target tracking method based on structured sparse features comprises the following steps:
step 1, constructing a structured sparse feature image template dictionary;
step 2, structured sparse group weighting;
step 3, extracting the corresponding sparse features;
and step 4, tracking the occluded target.
Further, step 1 specifically comprises the following steps:
step 101, extracting a target image template set A = [A_1, A_2, ..., A_i, ..., A_n] from around the target's initial position in the first frame of the video to be tracked, where n is the total number of target image templates and A_i is the i-th image template in the set, 1 ≤ i ≤ n;
step 102, performing template structured blocking on each target image template in the set, partitioning each A_i into N blocks;
step 103, adding a candidate image template C and partitioning it by the same rule as the template structured blocking, obtaining C = [c_1, c_2, ..., c_i, ..., c_N] ∈ R^(h×N), which denotes dividing the candidate sample into N blocks of length h, where c_i is the i-th block of the candidate image template.
Further, the sparse feature coefficients of the sample blocks over the dictionary are obtained by solving the following optimization equation:
[sparse-coding optimization equation, rendered as an image in the original]
where Z ∈ R^(h×(n×N)) is the block dictionary and d_i denotes the feature coefficient vector of the i-th block of the candidate image template over the dictionary; the sparse feature coefficients of the candidate image template are denoted D = [d_1, d_2, ..., d_i, ..., d_N].
Further, step 2 specifically comprises the following steps:
step 201, grouping the sparse feature coefficients by their source template (the grouped expression is rendered as an image in the original), so that the k-th group collects the coefficients contributed by the blocks of the k-th image template;
step 202, obtaining from the grouped coefficients, through a weighting formula (rendered as an image in the original), the weighting coefficient v_i of each candidate block over the different partial image templates, where v_i denotes the weight coefficient of the i-th block of the candidate image and f denotes a regularization coefficient.
Further, when the weighting coefficient v_i of each candidate block over the different partial image templates is obtained, the weighting coefficients of infrequently occurring blocks do not exceed 0.6, while the weighting coefficients of frequently occurring blocks all differ from 1 by no more than a specified margin, the margin lying in the range 0-0.1.
Further, step 3 specifically comprises the following steps:
step 301, forming a matrix V ∈ R^(N×N) from all the weighting coefficients;
step 302, taking the diagonal of the matrix, according to the formula
g = diag(V);
and step 303, extracting the sparse features at the diagonal positions of the matrix; during this extraction, the larger the elements on the diagonal of a candidate image's V matrix, the greater that candidate image's similarity to the target.
Further, in step 4, when tracking the occluded target, assuming the lower half of the target is occluded and the upper half is not, then after the corresponding sparse features are extracted, the weighting coefficients obtained for the unoccluded portion of the target differ from 1 by no more than the predetermined margin, while the weighting coefficients obtained for the occluded portion are below 0.6.
The invention has the following beneficial effects. With this local occlusion target tracking method based on structured sparse features, the structured sparse feature template dictionary is constructed first; because the dictionary holds a small number of samples, the sparse features can be extracted quickly in later stages, and the dictionary also facilitates subsequent analyses such as classification and clustering. Second, structured sparse group weighting uses the two weighting thresholds as predetermined boundaries to help judge whether each block belongs to an occluded part. The corresponding sparse features are then extracted using the V matrix, whose values show directly and accurately whether a block is occluded, so that the target can be tracked. In addition, the invention determines the position of the occluded target from the structured blocks, extracts the corresponding sparse features from them, and identifies the tracked target, thereby solving the difficulty that partial occlusion poses for target tracking.
Drawings
FIG. 1 is a flow chart of the local occlusion target tracking method based on structured sparse features according to the present invention;
FIG. 2 is a schematic diagram of structured partitioning processing performed on a target set A according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of structured partitioning processing of added candidate samples according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating sparse feature coefficient grouping according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating sparse feature coefficient group weighting according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the corresponding sparse feature extraction process according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
The invention provides a local occlusion target tracking method based on structured sparse features, a flow chart of which is shown in FIG. 1, wherein the method comprises the following steps:
Step 1, constructing a structured sparse feature image template dictionary.
Step 2, structured sparse group weighting.
Step 3, extracting the corresponding sparse features.
Step 4, tracking the occluded target.
In the above method, in order to better construct the structured sparse feature image template dictionary, step 1 may specifically include the following steps:
Step 101, extract a target image template set A = [A_1, A_2, ..., A_i, ..., A_n] from around the target's initial position in the first frame of the video to be tracked, where n is the total number of target image templates and A_i is the i-th image template in the set, 1 ≤ i ≤ n.
Step 102, perform template structured blocking on each target image template in the set, partitioning each A_i into N blocks.
Step 103, add a candidate image template C and partition it by the same rule as the template structured blocking, obtaining C = [c_1, c_2, ..., c_i, ..., c_N] ∈ R^(h×N), which denotes dividing the candidate sample into N blocks of length h, where c_i is the i-th block of the candidate image template.
In addition, the sparse feature coefficients of the sample blocks over the dictionary can be obtained by solving the following optimization equation:
[sparse-coding optimization equation, rendered as an image in the original]
where Z ∈ R^(h×(n×N)) is the block dictionary and d_i denotes the feature coefficient vector of the i-th block of the candidate image template over the dictionary; the sparse feature coefficients of the candidate image template are denoted D = [d_1, d_2, ..., d_i, ..., d_N].
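Since the optimization equation itself is rendered only as an image in the original, the following block gives one standard form consistent with the surrounding definitions: each candidate block c_i is reconstructed over the block dictionary Z under an l1 sparsity penalty. The sparsity weight λ is an assumed symbol and the exact objective is an assumption, not the original expression.

```latex
% Assumed reconstruction of the image-rendered sparse-coding objective:
% code each candidate block c_i over the block dictionary Z with an
% l1 sparsity penalty weighted by the assumed parameter \lambda.
\hat{d}_i \;=\; \arg\min_{d_i}\;
  \lVert c_i - Z\, d_i \rVert_2^2 \;+\; \lambda \lVert d_i \rVert_1 ,
\qquad i = 1, \dots, N ,
\qquad D = [\, d_1, d_2, \dots, d_N \,].
```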
It should be noted that, in order to perform the structured sparse group weighting, step 2 may specifically include the following steps:
Step 201, group the sparse feature coefficients by their source template (the grouped expression is rendered as an image in the original), so that the k-th group collects the coefficients contributed by the blocks of the k-th image template.
Step 202, obtain from the grouped coefficients, through a weighting formula (rendered as an image in the original), the weighting coefficient v_i of each candidate block over the different partial image templates, where v_i denotes the weight coefficient of the i-th block of the candidate image and f denotes a regularization coefficient.
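The weighting formula is likewise rendered only as an image. One form consistent with the written description, in which the grouped coefficients of candidate block i are pooled across the n template groups and normalized by the regularization coefficient f, is sketched below; this exact expression is an assumption, not taken from the original.

```latex
% Assumed form of the image-rendered weighting formula: pool the grouped
% coefficients d_i^{(k)} of candidate block i over the n template groups
% and normalize by the regularization coefficient f.
v_i \;=\; \frac{1}{f} \sum_{k=1}^{n} d_i^{(k)} ,
\qquad i = 1, \dots, N .
```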
When the weighting coefficient v_i of each candidate block over the different partial image templates is obtained, the weighting coefficients of infrequently occurring blocks do not exceed 0.6, while the weighting coefficients of frequently occurring blocks all differ from 1 by no more than a specified margin, the margin lying in the range 0-0.1.
In order to extract the corresponding sparse features, step 3 specifically includes the following steps:
Step 301, form a matrix V ∈ R^(N×N) from all the weighting coefficients.
Step 302, take the diagonal of the matrix, according to the formula:
g = diag(V);
Step 303, extract the sparse features at the diagonal positions of the matrix; during this extraction, the larger the elements on the diagonal of a candidate image's V matrix, the greater that candidate image's similarity to the target.
In step 4, when the occluded target is tracked, assume the lower half of the target is occluded and the upper half is not; after the corresponding sparse features are extracted, the weighting coefficients of the unoccluded portion of the target differ from 1 by no more than the predetermined margin, while the weighting coefficients of the occluded portion are below 0.6.
Examples
The local occlusion target tracking method based on structured sparse features provided by this embodiment specifically comprises the following steps:
1. and constructing a structured sparse characteristic template dictionary Z.
1.1 First, extract a set of target image templates A = [A_1, A_2, A_3, ..., A_n] from around the target's initial position in the first frame of the video to be tracked.
1.2 Apply structured blocking to each A_i, partitioning each A_i into a number of blocks B_i. A schematic diagram of the structured blocking of the target set A is shown in FIG. 2.
1.3 Add a candidate sample C and block it by the same rule as the template structured blocking, as shown in FIG. 3 (a sketch of steps 1.1-1.3 is given after this subsection).
C = [c_1, c_2, ..., c_N] ∈ R^(h×N) denotes dividing the candidate sample into N blocks, each of length h. The sparse feature coefficients of the sample blocks over the dictionary are obtained by solving the optimization equation (rendered as an image in the original), where Z ∈ R^(h×(n×N)) is the block dictionary, c_i denotes the i-th block of the candidate image template, and d_i denotes its feature coefficient vector over the dictionary; the feature coefficients of the candidate image template can be written as D = [d_1, d_2, ..., d_N]. Unlike ordinary sparse features, structured sparse feature coefficients carry additional structural information.
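As a concrete illustration of steps 1.1-1.3, the following Python sketch samples template patches around the initial target position, partitions the templates and the candidate into a regular grid of N blocks, stacks the template blocks into the block dictionary Z, and codes each candidate block over Z. It is a minimal sketch under stated assumptions: the patch-sampling scheme, the 4 x 4 block grid, the l2 normalization, the non-negative Lasso objective standing in for the image-rendered optimization equation, and the helper names (extract_templates, partition_into_blocks, build_block_dictionary, sparse_code_blocks) are illustrative choices, not taken from the original.

```python
import numpy as np
from sklearn.linear_model import Lasso

def extract_templates(frame, init_box, n=10, jitter=2, rng=None):
    """Sample n target image templates around the initial target position.

    frame: 2-D grayscale image; init_box: (x, y, w, h) of the target in the
    first frame. Cropping at small random offsets is an assumed sampling
    scheme; the patent only says the templates come from around the
    initial position.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x, y, w, h = init_box
    templates = []
    for _ in range(n):
        dx, dy = rng.integers(-jitter, jitter + 1, size=2)
        templates.append(frame[y + dy:y + dy + h, x + dx:x + dx + w].astype(float))
    return templates

def partition_into_blocks(patch, grid=(4, 4)):
    """Partition a patch into N = grid[0] * grid[1] blocks, each flattened to a
    column vector of length h (the block length in the text)."""
    rows, cols = grid
    H, W = patch.shape
    bh, bw = H // rows, W // cols
    blocks = [patch[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].reshape(-1)
              for r in range(rows) for c in range(cols)]
    return np.stack(blocks, axis=1)                        # shape (h, N)

def build_block_dictionary(templates, grid=(4, 4)):
    """Z in R^(h x (n*N)): the blocks of all n templates, stacked column-wise
    and l2-normalized (normalization is a common practice assumed here)."""
    Z = np.concatenate([partition_into_blocks(t, grid) for t in templates], axis=1)
    return Z / (np.linalg.norm(Z, axis=0, keepdims=True) + 1e-12)

def sparse_code_blocks(candidate, Z, grid=(4, 4), alpha=0.01):
    """Code each candidate block c_i over Z; returns D = [d_1, ..., d_N] with
    shape (n*N, N). A non-negative Lasso stands in for the patent's
    image-rendered optimization equation."""
    C = partition_into_blocks(candidate, grid)
    C = C / (np.linalg.norm(C, axis=0, keepdims=True) + 1e-12)
    coder = Lasso(alpha=alpha, fit_intercept=False, positive=True, max_iter=5000)
    D = np.zeros((Z.shape[1], C.shape[1]))
    for i in range(C.shape[1]):
        coder.fit(Z, C[:, i])          # l1-regularized least-squares fit of c_i over Z
        D[:, i] = coder.coef_
    return D
```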
2. Structured sparse group weighting.
2.1 Group and weight the sparse coefficients of the image. The sparse feature coefficients d_i have been obtained by solving the optimization equation; they are grouped by source, in the form shown in FIG. 3, to obtain the grouped expression (rendered as an image in the original), in which the k-th group represents the coefficients contributed by the blocks of the k-th image template, as shown in FIG. 4.
The grouped coefficients yield, through a weighting formula (rendered as an image in the original), the weighting coefficient v_i of each candidate block over the different partial image templates, as shown in FIG. 5, where v_i denotes the weight coefficient of the i-th block of the candidate image and f denotes a regularization coefficient.
2.2 Infrequently occurring blocks are usually parts undergoing large deformation or occlusion; their weighting coefficients do not exceed 0.6.
2.3 The weighting coefficients of frequently occurring blocks all take values close to 1. (A sketch of the grouping and weighting computation follows.)
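A sketch of steps 2.1-2.3 under the same assumptions: the coefficient vector of each candidate block is split into n groups (one per source template), the groups are pooled, and each column of weights is normalized. Because the patent's weighting formula is rendered as an image, the pooling below and the use of f as a normalizing constant are assumptions; group_coefficients and block_weight_matrix are hypothetical helper names, and D is the output of the coding sketch above.

```python
import numpy as np

def group_coefficients(D, n_templates):
    """Split D (shape (n*N, N)) by source template: G[k, j, i] is the coefficient
    that candidate block i places on block j of the k-th template."""
    nN, N = D.shape
    return D.reshape(n_templates, nN // n_templates, N)    # shape (n, N, N)

def block_weight_matrix(D, n_templates, f=None):
    """Column i of the returned V (N x N) is v_i, the weight of candidate block i
    over the different partial image templates.

    Assumed pooling: sum the grouped coefficients over the n template groups
    and divide by f, used here as a per-block normalizing constant standing in
    for the regularization coefficient of the image-rendered formula. The
    patent's interpretation: frequently matched blocks have aligned weights
    close to 1, while rarely matched (deformed or occluded) blocks stay below 0.6.
    """
    G = group_coefficients(D, n_templates)                 # (n, N, N)
    V = G.sum(axis=0)                                      # pooled over template groups
    if f is None:
        f = np.abs(V).sum(axis=0, keepdims=True) + 1e-12   # per-column normalization
    return V / f
```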
3. Extract the corresponding sparse features.
Whereas step 2 operates on the individual structured blocks, step 3 extracts the corresponding sparse features on the basis of step 2 and reinforces the structural information.
3.1 All the weighting coefficients from step 2.1 form a matrix V ∈ R^(N×N).
3.2 Take the diagonal of the matrix from 3.1, according to the formula:
g = diag(V)
3.3 The extraction of the features at the corresponding positions is shown in FIG. 6; it can be seen that the larger the elements on the diagonal of a candidate image's V matrix, the greater that candidate's similarity to the target. (A sketch of this step follows.)
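A sketch of steps 3.1-3.3 under the same assumptions: the weight columns form V in R^(N x N), its diagonal g = diag(V) is taken as the structured sparse feature, and the diagonal entries are aggregated into a similarity score. The patent states only that larger diagonal elements mean greater similarity; summing them into a single score is an illustrative choice.

```python
import numpy as np

def structured_sparse_feature(V):
    """g = diag(V): the aligned (same-position) weights of the N candidate blocks."""
    return np.diag(V)

def candidate_similarity(V):
    """Aggregate the diagonal of V into one similarity score. Larger diagonal
    entries mean greater similarity to the target; summation is a simple
    aggregation chosen for this sketch."""
    return float(np.diag(V).sum())
```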
4. Track the occluded target.
Assume that the lower half of the target is occluded and the upper half is not. After the corresponding features are extracted, the weighting coefficients of the unoccluded part of the target are close to 1, while those of the occluded part fall below 0.6; the unoccluded part is therefore sufficient for tracking the partially occluded target. (A sketch of this decision follows.)
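A sketch of step 4 under the same assumptions: blocks whose aligned weight falls below 0.6 are treated as occluded, and a candidate is scored from its unoccluded blocks only, so a target whose lower half is occluded can still be tracked from its upper half. The 0.6 threshold comes from the description; the helper names and the simple arg-max selection over candidate regions are illustrative assumptions.

```python
import numpy as np

OCCLUDED_MAX = 0.6  # from the description: occluded/deformed blocks stay below 0.6

def occlusion_mask(g):
    """True for blocks judged occluded, given the aligned weights g = diag(V)."""
    return g < OCCLUDED_MAX

def track_best_candidate(candidate_Vs):
    """Pick the candidate whose unoccluded blocks best match the target.

    candidate_Vs: one N x N weight matrix per candidate region. Blocks flagged
    as occluded are ignored when scoring, so the unoccluded part of a
    partially occluded target carries the tracking decision.
    """
    best_idx, best_score = -1, float("-inf")
    for idx, V in enumerate(candidate_Vs):
        g = np.diag(V)
        visible = ~occlusion_mask(g)
        score = g[visible].sum() if visible.any() else g.sum()
        if score > best_score:
            best_idx, best_score = idx, float(score)
    return best_idx, best_score
```

For example, the V matrices computed with block_weight_matrix for each candidate window of the current frame can be passed to track_best_candidate, and the returned index identifies the window taken as the tracked target position.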

Claims (7)

1. A local occlusion target tracking method based on structured sparse features, characterized by comprising the following steps:
step 1, constructing a structured sparse feature image template dictionary;
step 2, structured sparse group weighting;
step 3, extracting the corresponding sparse features;
and step 4, tracking the occluded target.
2. The local occlusion target tracking method based on structured sparse features as claimed in claim 1, wherein step 1 specifically comprises the following steps:
step 101, extracting a target image template set A = [A_1, A_2, ..., A_i, ..., A_n] from around the target's initial position in the first frame of the video to be tracked, where n is the total number of target image templates and A_i is the i-th image template in the set, 1 ≤ i ≤ n;
step 102, performing template structured blocking on each target image template in the set, partitioning each A_i into N blocks;
step 103, adding a candidate image template C and partitioning it by the same rule as the template structured blocking, obtaining C = [c_1, c_2, ..., c_i, ..., c_N] ∈ R^(h×N), which denotes dividing the candidate sample into N blocks of length h, where c_i is the i-th block of the candidate image template.
3. The local occlusion target tracking method based on structured sparse features as claimed in claim 2, wherein the sparse feature coefficients of the sample blocks over the dictionary are obtained by solving the following optimization equation:
[sparse-coding optimization equation, rendered as an image in the original]
where Z ∈ R^(h×(n×N)) is the block dictionary and d_i denotes the feature coefficient vector of the i-th block of the candidate image template over the dictionary; the sparse feature coefficients of the candidate image template are denoted D = [d_1, d_2, ..., d_i, ..., d_N].
4. The local occlusion target tracking method based on structured sparse features as claimed in claim 3, wherein step 2 specifically comprises the following steps:
step 201, grouping the sparse feature coefficients by their source template (the grouped expression is rendered as an image in the original), so that the k-th group collects the coefficients contributed by the blocks of the k-th image template;
step 202, obtaining from the grouped coefficients, through a weighting formula (rendered as an image in the original), the weighting coefficient v_i of each candidate block over the different partial image templates, where v_i denotes the weight coefficient of the i-th block of the candidate image and f denotes a regularization coefficient.
5. The local occlusion target tracking method based on structured sparse features as claimed in claim 4, wherein, when the weighting coefficient v_i of each candidate block over the different partial image templates is obtained, the weighting coefficients of infrequently occurring blocks do not exceed 0.6, while the weighting coefficients of frequently occurring blocks all differ from 1 by no more than a specified margin, the margin lying in the range 0-0.1.
6. The local occlusion target tracking method based on structured sparse features as claimed in claim 4, wherein step 3 specifically comprises the following steps:
step 301, forming a matrix V ∈ R^(N×N) from all the weighting coefficients;
step 302, taking the diagonal of the matrix, according to the formula
g = diag(V);
and step 303, extracting the sparse features at the diagonal positions of the matrix, wherein, during the extraction of the sparse features, the larger the elements on the diagonal of a candidate image's V matrix, the greater that candidate image's similarity to the target.
7. The local occlusion target tracking method based on structured sparse features as claimed in any one of claims 1 to 6, wherein, in step 4, when the occluded target is tracked, assuming that the lower half of the target is occluded and the upper half is not, then after the corresponding sparse features are extracted, the weighting coefficients obtained for the unoccluded part of the target differ from 1 by no more than the predetermined margin, while the weighting coefficients obtained for the occluded part are below 0.6.
CN202010733321.8A 2020-07-27 2020-07-27 Local occlusion target tracking method based on structured sparse features Pending CN111985346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010733321.8A CN111985346A (en) Local occlusion target tracking method based on structured sparse features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010733321.8A CN111985346A (en) Local occlusion target tracking method based on structured sparse features

Publications (1)

Publication Number Publication Date
CN111985346A true CN111985346A (en) 2020-11-24

Family

ID=73445804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010733321.8A Pending CN111985346A (en) 2020-07-27 2020-07-27 Local shielding target tracking method based on structured sparse characteristic

Country Status (1)

Country Link
CN (1) CN111985346A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274436A (en) * 2017-06-02 2017-10-20 浙江师范大学 A kind of sparse tracking of the local multitask of the weighting of robustness
CN107392938A (en) * 2017-07-20 2017-11-24 华北电力大学(保定) A kind of sparse tracking of structure based on importance weighting
CN107680120A (en) * 2017-09-05 2018-02-09 南京理工大学 Tracking Method of IR Small Target based on rarefaction representation and transfer confined-particle filtering
CN110648351A (en) * 2019-09-19 2020-01-03 安徽大学 Multi-appearance model fusion target tracking method and device based on sparse representation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274436A (en) * 2017-06-02 2017-10-20 浙江师范大学 A kind of sparse tracking of the local multitask of the weighting of robustness
CN107392938A (en) * 2017-07-20 2017-11-24 华北电力大学(保定) A kind of sparse tracking of structure based on importance weighting
CN107680120A (en) * 2017-09-05 2018-02-09 南京理工大学 Tracking Method of IR Small Target based on rarefaction representation and transfer confined-particle filtering
CN110648351A (en) * 2019-09-19 2020-01-03 安徽大学 Multi-appearance model fusion target tracking method and device based on sparse representation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANG WANG: "Target tracking based on multiple appearance model fusion with sparse representation", 《PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON MULTIMEDIA AND IMAGE PROCESSING》 *
邹甜: "Research on multi-scale and multi-modal moving target detection methods based on low-rank sparse decomposition", 《CNKI Database of Outstanding Master's Theses (Information Science and Technology)》 *

Similar Documents

Publication Publication Date Title
CN109035149B (en) License plate image motion blur removing method based on deep learning
CN107657279B (en) Remote sensing target detection method based on small amount of samples
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN105069434B (en) A kind of human action Activity recognition method in video
CN111680614A (en) Abnormal behavior detection method based on video monitoring
CN108805223B (en) Seal script identification method and system based on Incep-CapsNet network
CN111046732B (en) Pedestrian re-recognition method based on multi-granularity semantic analysis and storage medium
CN111462068B (en) Bolt and nut detection method based on transfer learning
CN110032952B (en) Road boundary point detection method based on deep learning
CN111738367B (en) Part classification method based on image recognition
CN110472652A (en) A small amount of sample classification method based on semanteme guidance
CN112085765A (en) Video target tracking method combining particle filtering and metric learning
CN113361623A (en) Lightweight CNN (CNN-based network) combined transfer learning medical image classification method
CN108596044B (en) Pedestrian detection method based on deep convolutional neural network
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
CN112749675A (en) Potato disease identification method based on convolutional neural network
CN115393631A (en) Hyperspectral image classification method based on Bayesian layer graph convolution neural network
CN113963333B (en) Traffic sign board detection method based on improved YOLOF model
CN113033345B (en) V2V video face recognition method based on public feature subspace
CN108388918B (en) Data feature selection method with structure retention characteristics
CN111612803B (en) Vehicle image semantic segmentation method based on image definition
CN116721343A (en) Cross-domain field cotton boll recognition method based on deep convolutional neural network
CN113192076B (en) MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction
CN113887428B (en) Deep learning paired model human ear detection method based on context information
CN111985346A (en) Local occlusion target tracking method based on structured sparse features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination