CN109492685A - A visual detection method for target objects with symmetric features - Google Patents

A visual detection method for target objects with symmetric features Download PDF

Info

Publication number
CN109492685A
CN109492685A
Authority
CN
China
Prior art keywords
candidate frame
group
image
detection
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811288693.3A
Other languages
Chinese (zh)
Other versions
CN109492685B (en)
Inventor
程健
郭雪亮
李杨
陈亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
China Coal Research Institute CCRI
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN201811288693.3A priority Critical patent/CN109492685B/en
Publication of CN109492685A publication Critical patent/CN109492685A/en
Application granted granted Critical
Publication of CN109492685B publication Critical patent/CN109492685B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual detection method for target objects with symmetric features. The specific steps are as follows: an image containing the target object is captured and processed to obtain multiple basic features, from which candidate frames are formed; the basic features are sorted in ascending order along the X axis according to their positions in the image; each basic feature in the sequence is then matched only with its two nearest features in the ascending direction, each pair forming a candidate frame; the candidate frames are then classified using an image classification model trained by a machine learning or deep learning algorithm. For the same N basic features, the complexity of the method of the invention is at most 2N-3, which is always less than the pairwise traversal complexity of (N-1)². The method of the invention therefore has lower complexity and a shorter detection time, which greatly improves the detection efficiency of target objects.

Description

A visual detection method for target objects with symmetric features
Technical field
The present invention relates to a visual detection method for target objects, and in particular to a visual detection method for target objects with symmetric features.
Background technique
Target detection is one of the three major tasks in the field of computer vision: the surrounding environment is photographed to produce an image, the captured image is then analyzed to determine whether it contains a target pattern preset in the computer, and any such pattern is extracted, completing the detection and identification of target patterns in the environment. Mainstream target detection algorithms are currently based mainly on deep learning models. One major class is the two-stage detection algorithm, which divides the detection problem into two stages: multiple candidate regions are first generated in the captured image, each candidate region is then classified, and finally an object detection result is identified for each candidate region. Typical representatives include the R-CNN family of algorithms.
In current two-stage detection algorithms, candidate frames are generated by traditional machine vision algorithms: the image is preprocessed to obtain basic features, drawing on some simple, easily detected features of the object. However, when detecting a target object with symmetric basic features, the detected basic features are usually traversed pairwise, every pair being combined to generate a candidate frame, and the candidate frames are then classified using machine learning or deep learning methods. If a captured image contains N basic features, the complexity of this pairwise matching is (N-1)². This leads to high algorithmic complexity and excessive time cost, and the final detection result is also easily disturbed: for example, two basic features that are very far apart or very close together in the image need not be matched into a candidate frame at all. The existing approach therefore has high algorithmic complexity and long detection times, so the efficiency of target detection is low.
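The complexity comparison can be made concrete with a small sketch. The function names below are illustrative, not from the patent; the counts follow directly from the two schemes described: pairwise traversal gives (N-1)² matchings, while pairing each feature only with its two nearest ascending-order neighbours gives 2 frames per feature except the second-to-last (1) and the last (0), i.e. 2(N-2)+1 = 2N-3.

```python
def pairwise_candidate_count(n):
    """Pairwise traversal as described in the background: (N-1)^2."""
    return (n - 1) ** 2

def nearest_two_candidate_count(n):
    """Proposed scheme: each feature pairs only with its two nearest
    ascending-order neighbours, giving 2(N-2)+1 = 2N-3 frames for N >= 2."""
    if n < 2:
        return 0
    return 2 * n - 3

for n in (4, 10, 100):
    print(n, pairwise_candidate_count(n), nearest_two_candidate_count(n))
```

Already at N = 10 the gap is 81 versus 17 candidate frames to classify, which is where the claimed detection-time saving comes from.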
Summary of the invention
In view of the above problems in the prior art, the present invention provides a visual detection method for target objects with symmetric features. It has lower complexity and a shorter detection time, and can improve the detection efficiency of target objects.
To achieve the above goals, the technical solution adopted by the present invention is a visual detection method for target objects with symmetric features, with the following specific steps:
(1) Images of multiple target objects are acquired and input into a computer; the target objects and multiple distractors in the acquired images are then labeled by hand and made into a data set, in which the target objects are designated the positive class and everything else the negative class. The computer determines an image classification model using a known deep learning or machine learning method; the data set is input into the image classification model for training, and the trained image classification model is finally obtained and saved.
(2) An image of the environment to be inspected is captured and preprocessed using known machine vision methods to obtain the multiple symmetric basic features in the captured image; each basic feature is labeled in an XY coordinate system, yielding the basic feature sequence L0.
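Step (2) can be sketched without a real camera image. The snippet below is a minimal stand-in, assuming a synthetic grayscale frame and a crude column-run shape detector in place of the patent's unspecified machine vision pipeline; the image contents, threshold value, and (x, y, w, h) feature format are all illustrative assumptions.

```python
import numpy as np

# Synthetic grayscale "captured image": two bright rectangles on a dark
# background, standing in for a pair of symmetric basic features.
img = np.zeros((200, 300), dtype=np.uint8)
img[60:140, 40:80] = 255
img[60:140, 200:240] = 255

# Binarization (a stand-in for the gray processing / thresholding of step 2).
binary = img > 127

# Crude shape detection: group foreground columns into horizontal runs and
# take each run's bounding box as one basic feature, labelled in image
# coordinates as (x, y, w, h).
cols = binary.any(axis=0)
features, start = [], None
for x, on in enumerate(cols):
    if on and start is None:
        start = x                      # a new foreground run begins
    elif not on and start is not None:
        ys = np.where(binary[:, start:x].any(axis=1))[0]
        features.append((start, int(ys[0]), x - start, int(ys[-1] - ys[0] + 1)))
        start = None                   # run closed: one basic feature recorded
print(features)  # the basic feature sequence L0
```

A production pipeline would use proper connected-component or contour extraction (e.g. OpenCV) rather than this column-run trick, which only works because the synthetic features do not overlap horizontally.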
(3) The basic features are sorted in ascending order of their X coordinates in the image, yielding the basic feature sequence L1.
(4) In the basic feature sequence L1, for each basic feature taken from low to high X coordinate, the two nearest basic features in the ascending direction are found, and the basic feature forms a candidate frame with each of the two, giving one group of candidate frames. Proceeding in this way gives the candidate frame group of every basic feature; the last group contains only one candidate frame, while every other group contains two. In each group, the candidate frame that is lower in X coordinate order is H0 and the other is H1. The groups of candidate frames are saved as the sequence L2.
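The grouping of step (4) can be sketched as follows. Features are reduced to (x, y) points and a candidate frame to a pair of features; the function name is illustrative, not from the patent.

```python
def build_candidate_groups(features):
    """Sort features by x (sequence L1), then pair each feature only with
    its two nearest ascending-order neighbours.  In each group, the pairing
    with the lower-x partner is H0 (index 0) and the other is H1 (index 1);
    the last group holds a single frame."""
    L1 = sorted(features, key=lambda f: f[0])
    L2 = []
    for i, f in enumerate(L1[:-1]):        # the last feature starts no group
        partners = L1[i + 1:i + 3]         # up to two nearest on the right
        L2.append([(f, p) for p in partners])
    return L2

# Four collinear features, as in the embodiment's T0..T3.
groups = build_candidate_groups([(0, 0), (3, 0), (5, 0), (9, 0)])
print(sum(len(g) for g in groups))  # 2N-3 = 5 candidate frames for N = 4
```

Note that only the last group degenerates to one frame, which is exactly where the 2N-3 total comes from.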
(5) The candidate frame H0 in each group of sequence L2 is judged in turn against a preset candidate frame threshold range:
If the candidate frame H0 of the current group is detected to be outside the threshold range, H0 is rejected and the same judgment is repeated for the candidate frame H1 of the group; if H1 is also outside the threshold range, H1 is rejected and detection continues with the next group of candidate frames. If H1 is within the threshold range, it is classified with the image classification model, and after classification detection continues with the next group.
If the candidate frame H0 of the current group is within the threshold range, it is classified with the image classification model obtained in step (1):
If the classification result is the positive class, H0 is directly determined to be the positive class, the candidate frame H1 of the group is rejected, and detection continues with the next group of candidate frames.
If the classification result is the negative class, the above judgment is repeated for the candidate frame H1 of the group: if H1 is outside the threshold range it is rejected and detection continues with the next group; if it is within the threshold range it is classified with the image classification model, and after classification detection continues with the next group.
After all candidate frame groups have been detected and classified, the candidate frames whose classification result is the positive class are saved as the sequence L3.
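The judgment cascade of step (5) can be sketched as one short function. `in_threshold` and `classify` below are stand-ins for the geometric threshold check and the trained image classifier; they are assumptions, not patent-specified APIs, and the toy predicates in the usage example are purely illustrative.

```python
def detect_groups(L2, in_threshold, classify):
    """For each group, try H0 first; if H0 passes the threshold check and
    classifies positive, keep it and skip H1.  Otherwise fall through to H1,
    keeping it only if it both passes the check and classifies positive.
    Returns the sequence L3 of positive-class candidate frames."""
    L3 = []
    for group in L2:
        h0 = group[0]
        h1 = group[1] if len(group) > 1 else None   # last group has no H1
        if in_threshold(h0) and classify(h0):
            L3.append(h0)                            # H0 positive: H1 rejected
            continue
        if h1 is not None and in_threshold(h1) and classify(h1):
            L3.append(h1)
    return L3

# Toy usage: frames are numbers; "in threshold" = non-negative, "positive" = even.
groups = [[4, 7], [-1, 2], [3]]
print(detect_groups(groups, lambda f: f >= 0, lambda f: f % 2 == 0))
```

The point of the ordering is that a positive H0 short-circuits the group, so in the best case only one threshold check and one classification are spent per group.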
(6) Each candidate frame in sequence L3 is a detected target object in the captured image.
Further, the preprocessing of the image in step (2) includes gray processing, binarization, edge processing, and shape and color detection.
Further, the known deep learning method in step (1) is a CNN deep learning method, and the known machine learning method is an SVM or KNN machine learning method.
Further, the preset candidate frame threshold range in step (5) includes a threshold range on the ratio of the long sides of the minimum enclosing rectangles of the two basic features composing a candidate frame, or a threshold range on the angle between adjacent long and short sides of the candidate frame.
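The two geometric checks just described can be sketched as a single predicate. The ranges below match the embodiment's [0.5, 1.5] and [80°, 100°]; the function name and argument layout are illustrative assumptions.

```python
def within_thresholds(long_a, long_b, corner_angle_deg,
                      ratio_range=(0.5, 1.5), angle_range=(80.0, 100.0)):
    """long_a and long_b are the long sides of the two basic features'
    minimum enclosing rectangles; corner_angle_deg is the angle between
    adjacent long and short sides of the candidate frame they compose.
    A frame passes only if both quantities fall inside their ranges."""
    ratio_ok = ratio_range[0] <= long_a / long_b <= ratio_range[1]
    angle_ok = angle_range[0] <= corner_angle_deg <= angle_range[1]
    return ratio_ok and angle_ok

print(within_thresholds(10.0, 9.0, 90.0))   # similar sides, near-right corner
print(within_thresholds(10.0, 3.0, 90.0))   # side ratio 3.33: out of range
```

Intuitively, the ratio check rejects pairings of dissimilar features, and the angle check rejects frames too skewed to enclose a symmetric object.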
Compared with the prior art, the present invention combines machine learning image classification with two-stage detection. A captured image containing the target object is processed to obtain multiple basic features, from which candidate frames are formed: the basic features are sorted in ascending order along the X axis by their positions in the image, and each basic feature in the sequence is matched only with its two nearest features in the ascending direction, each pair forming a candidate frame. The candidate frames are then classified by an image classification model trained with a machine learning or deep learning algorithm. For the same N basic features, the complexity of the method of the invention is at most 2N-3, which is always less than the pairwise traversal complexity of (N-1)². The method of the invention therefore has lower complexity and a shorter detection time, which greatly improves the detection efficiency of target objects.
Description of the drawings
Fig. 1 is a schematic diagram of a target object with symmetric basic features in an embodiment of the present invention;
Fig. 2 is a schematic diagram of candidate frame generation for one captured image of the present invention;
Fig. 3 is a schematic diagram of candidate frame generation for another captured image of the present invention.
Specific embodiment
The present invention will be further described below.
Embodiment:
As shown in the figures, the specific steps of the present invention are as follows:
(1) Images of multiple target objects are acquired and input into a computer; the target objects (as shown in Fig. 1) and multiple distractors in the acquired images are then labeled by hand and made into a data set, in which the target objects are designated the positive class and everything else the negative class. The computer determines an image classification model using a known deep learning or machine learning method; the data set is input into the image classification model for training, and the trained image classification model is finally obtained and saved.
(2) An image of the environment to be inspected is captured (as shown in Figs. 2 and 3). The captured image is preprocessed using known machine vision methods, including gray processing, binarization, edge processing, and shape and color detection, to obtain the four symmetric basic features in the captured image. Each basic feature is labeled in an XY coordinate system; along the ascending X direction they are T0, T1, T2 and T3 respectively, giving the basic feature sequence L0.
(3) The basic features are sorted in ascending order of their X coordinates in the image, yielding the basic feature sequence L1.
(4) In the basic feature sequence L1, for each basic feature taken from low to high X coordinate, the two nearest basic features in the ascending direction are found, and the basic feature forms a candidate frame with each of the two, giving one group of candidate frames. Proceeding in this way gives the candidate frame group of every basic feature (that is, basic feature T0 and its two nearest basic features T1 and T2 form the candidate frames CDJI and CDFE, one group of candidate frames; basic feature T1 and its two nearest basic features T2 and T3 form the candidate frames EFJI and EFGH, another group). The last group contains only one candidate frame (that is, only the basic feature T3 lies in the ascending direction from T2, forming the candidate frame IJGH). Of the two candidate frames in every other group, the one lower in X coordinate order is H0 and the other is H1 (for example, of the candidate frames CDJI and CDFE, CDJI has the smaller maximum X coordinate, so CDJI is determined to be the H0 of the group and CDFE the H1). The groups of candidate frames are saved as the sequence L2.
(5) The preset threshold range for the ratio of the long sides of the minimum enclosing rectangles of the two basic features composing a candidate frame is [0.5, 1.5], and the preset threshold range for the angle between adjacent long and short sides of a candidate frame is [80°, 100°]. The candidate frame H0 in each group of sequence L2 is judged in turn:
As shown in Fig. 2, the candidate frame that is H0 in the first group (i.e. candidate frame CDFE) is detected; since the angle between its adjacent long and short sides is detected to be outside the threshold range, the candidate frame is rejected. The candidate frame H1 of the group (i.e. candidate frame CDJI) is then detected; the angle between its adjacent long and short sides is within the threshold range, so candidate frame CDJI is classified with the image classification model obtained in step (1). The classification result is that candidate frame CDJI is the positive class (i.e. the candidate frame contains the target object), and detection of the next group of candidate frames follows.
The H0 of the second group (i.e. candidate frame EFJI) is detected, repeating the above process; since the angle between its adjacent long and short sides is detected to be outside the threshold range, the candidate frame is rejected. The candidate frame H1 of the group (i.e. candidate frame EFGH) is then detected; the angle between its adjacent long and short sides is within the threshold range, so it is classified with the image classification model obtained in step (1). The classification result is that candidate frame EFGH is the positive class, and detection of the next group of candidate frames follows.
The last candidate frame IJGH is detected; since the angle between its adjacent long and short sides is detected to be outside the threshold range, the candidate frame is rejected. Finally, the candidate frames whose classification result is the positive class (i.e. candidate frames CDJI and EFGH) are saved as the sequence L3.
As shown in Fig. 3, the candidate frame that is H0 in the first group (i.e. candidate frame CDJI) is detected; the angle between its adjacent long and short sides is within the threshold range, so candidate frame CDJI is classified with the image classification model obtained in step (1). The classification result is that candidate frame CDJI is the positive class (i.e. the candidate frame contains the target object); since CDJI is the H0 of the group, the candidate frame H1 of the group (i.e. candidate frame CDFE) is rejected directly, and detection of the next group of candidate frames follows.
The H0 of the second group (i.e. candidate frame EFJI) is detected, repeating the above process, and is found to be the negative class. Candidate frame EFGH is the H0 of its group; since EFJI is the negative class, the above detection and classification process is repeated for the candidate frame H1 of the group (i.e. candidate frame IJGH), which is found to be the positive class.
Detection and classification are then carried out on the last candidate frame EFGH, which is found to be the positive class. Finally, the candidate frames whose classification result is the positive class (i.e. candidate frames CDJI, IJGH and EFGH) are saved as the sequence L3.
(6) Each candidate frame in sequence L3 is a detected target object in the captured image.
Further, the known deep learning method in step (1) is a CNN deep learning method, and the known machine learning method is an SVM or KNN machine learning method.

Claims (4)

1. A visual detection method for target objects with symmetric features, characterized by the following specific steps:
(1) images of multiple target objects are acquired and input into a computer; the target objects and multiple distractors in the acquired images are then labeled by hand and made into a data set, in which the target objects are designated the positive class and everything else the negative class; the computer determines an image classification model using a known deep learning or machine learning method, the data set is input into the image classification model for training, and the trained image classification model is finally obtained and saved;
(2) an image of the environment to be inspected is captured and preprocessed using known machine vision methods to obtain the multiple symmetric basic features in the captured image; each basic feature is labeled in an XY coordinate system, yielding the basic feature sequence L0;
(3) the basic features are sorted in ascending order of their X coordinates in the image, yielding the basic feature sequence L1;
(4) in the basic feature sequence L1, for each basic feature taken from low to high X coordinate, the two nearest basic features in the ascending direction are found, and the basic feature forms a candidate frame with each of the two, giving one group of candidate frames; proceeding in this way gives the candidate frame group of every basic feature, wherein the last group contains only one candidate frame, and of the two candidate frames in every other group, the one lower in X coordinate order is H0 and the other is H1; the groups of candidate frames are saved as the sequence L2;
(5) the candidate frame H0 in each group of sequence L2 is judged in turn against a preset candidate frame threshold range:
if the candidate frame H0 of the current group is detected to be outside the threshold range, H0 is rejected and the same judgment is repeated for the candidate frame H1 of the group; if H1 is also outside the threshold range, H1 is rejected and detection continues with the next group of candidate frames; if H1 is within the threshold range, it is classified with the image classification model, and after classification detection continues with the next group;
if the candidate frame H0 of the current group is within the threshold range, it is classified with the image classification model obtained in step (1):
if the classification result is the positive class, H0 is directly determined to be the positive class, the candidate frame H1 of the group is rejected, and detection continues with the next group of candidate frames;
if the classification result is the negative class, the above judgment is repeated for the candidate frame H1 of the group: if H1 is outside the threshold range it is rejected and detection continues with the next group; if it is within the threshold range it is classified with the image classification model, and after classification detection continues with the next group;
after all candidate frame groups have been detected and classified, the candidate frames whose classification result is the positive class are saved as the sequence L3;
(6) each candidate frame in sequence L3 is a detected target object in the captured image.
2. The visual detection method for target objects with symmetric features according to claim 1, characterized in that the preprocessing of the image in step (2) includes gray processing, binarization, edge processing, and shape and color detection.
3. The visual detection method for target objects with symmetric features according to claim 1, characterized in that the known deep learning method in step (1) is a CNN deep learning method, and the known machine learning method is an SVM or KNN machine learning method.
4. The visual detection method for target objects with symmetric features according to claim 1, characterized in that the preset candidate frame threshold range in step (5) includes a threshold range on the ratio of the long sides of the minimum enclosing rectangles of the two basic features composing a candidate frame, or a threshold range on the angle between adjacent long and short sides of the candidate frame.
CN201811288693.3A 2018-10-31 2018-10-31 Target object visual detection method for symmetric characteristics Active CN109492685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811288693.3A CN109492685B (en) 2018-10-31 2018-10-31 Target object visual detection method for symmetric characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811288693.3A CN109492685B (en) 2018-10-31 2018-10-31 Target object visual detection method for symmetric characteristics

Publications (2)

Publication Number Publication Date
CN109492685A true CN109492685A (en) 2019-03-19
CN109492685B CN109492685B (en) 2022-05-24

Family

ID=65693452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811288693.3A Active CN109492685B (en) 2018-10-31 2018-10-31 Target object visual detection method for symmetric characteristics

Country Status (1)

Country Link
CN (1) CN109492685B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705579A (en) * 2019-04-15 2020-01-17 中国石油大学(华东) Complex multi-target integrated switch control panel state verification method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913082A (en) * 2016-04-08 2016-08-31 北京邦焜威讯网络技术有限公司 Method and system for classifying objects in image
CN106127161A (en) * 2016-06-29 2016-11-16 深圳市格视智能科技有限公司 Fast target detection method based on cascade multilayer detector
CN106991408A (en) * 2017-04-14 2017-07-28 电子科技大学 The generation method and method for detecting human face of a kind of candidate frame generation network
CN107316058A (en) * 2017-06-15 2017-11-03 国家新闻出版广电总局广播科学研究院 Improve the method for target detection performance by improving target classification and positional accuracy

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913082A (en) * 2016-04-08 2016-08-31 北京邦焜威讯网络技术有限公司 Method and system for classifying objects in image
CN106127161A (en) * 2016-06-29 2016-11-16 深圳市格视智能科技有限公司 Fast target detection method based on cascade multilayer detector
CN106991408A (en) * 2017-04-14 2017-07-28 电子科技大学 The generation method and method for detecting human face of a kind of candidate frame generation network
CN107316058A (en) * 2017-06-15 2017-11-03 国家新闻出版广电总局广播科学研究院 Improve the method for target detection performance by improving target classification and positional accuracy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIN Jian et al., "A fast candidate-box generation method for pedestrian detection using an online Gaussian model", Acta Optica Sinica *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705579A (en) * 2019-04-15 2020-01-17 中国石油大学(华东) Complex multi-target integrated switch control panel state verification method based on deep learning
CN110705579B (en) * 2019-04-15 2023-05-23 中国石油大学(华东) Deep learning-based state verification method for complex multi-target integrated switch control board

Also Published As

Publication number Publication date
CN109492685B (en) 2022-05-24

Similar Documents

Publication Publication Date Title
CN111179251B (en) Defect detection system and method based on twin neural network and by utilizing template comparison
CN108898610B (en) Object contour extraction method based on mask-RCNN
CN102609686B (en) Pedestrian detection method
CN103971102B (en) Static Gesture Recognition Method Based on Finger Contour and Decision Tree
CN109543606A (en) A kind of face identification method that attention mechanism is added
CN106325485B (en) A kind of gestures detection recognition methods and system
CN111652292B (en) Similar object real-time detection method and system based on NCS and MS
CN106096602A (en) Chinese license plate recognition method based on convolutional neural network
CN105608441B (en) Vehicle type recognition method and system
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN111402316B (en) Rapid detection method for ellipses in image based on anti-fake links
Karis et al. Local Binary Pattern (LBP) with application to variant object detection: A survey and method
CN110334703B (en) Ship detection and identification method in day and night image
Kim et al. Autonomous vehicle detection system using visible and infrared camera
CN110706235A (en) Far infrared pedestrian detection method based on two-stage cascade segmentation
CN106022223A (en) High-dimensional local-binary-pattern face identification algorithm and system
CN106874825A (en) The training method of Face datection, detection method and device
CN111401449A (en) Image matching method based on machine vision
CN113159045A (en) Verification code identification method combining image preprocessing and convolutional neural network
CN114863464B (en) Second-order identification method for PID drawing picture information
CN112329656A (en) Feature extraction method for human action key frame in video stream
CN111191535A (en) Pedestrian detection model construction method based on deep learning and pedestrian detection method
CN110008899A (en) A kind of visible remote sensing image candidate target extracts and classification method
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
CN109492685A (en) A kind of target object visible detection method for symmetrical feature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220426

Address after: No. 5 Youth Road, Hepingli, Chaoyang District, Beijing 100013

Applicant after: CHINA COAL Research Institute

Applicant after: China University of Mining and Technology

Address before: No. 1, Quanshan District, Xuzhou, Jiangsu

Applicant before: China University of Mining and Technology

GR01 Patent grant