CN109800757A - Video text tracking method based on layout constraint - Google Patents

Video text tracking method based on layout constraint

Info

Publication number
CN109800757A
Authority
CN
China
Prior art keywords
text
frame
track
formula
result
Prior art date
Legal status
Granted
Application number
CN201910006843.5A
Other languages
Chinese (zh)
Other versions
CN109800757B (en)
Inventor
冯晓毅 (Feng Xiaoyi)
王西汉 (Wang Xihan)
蒋晓悦 (Jiang Xiaoyue)
夏召强 (Xia Zhaoqiang)
彭进业 (Peng Jinye)
谢红梅 (Xie Hongmei)
李会方 (Li Huifang)
何贵青 (He Guiqing)
宋真东 (Song Zhendong)
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN201910006843.5A
Publication of CN109800757A
Application granted
Publication of CN109800757B
Legal status: Active
Anticipated expiration


Abstract

To address the tracking of multiple text regions under large camera motion, the present invention proposes a video text tracking method based on layout constraints. The input of the method is a video together with the text detection results of its frames, and the output is the track information produced by text tracking. First, text tracks are initialized from the detection results of the initial video frame; then the text tracks of the previous frame and the detection results of the current frame are fed into the proposed tracking method to update the text tracks. The core of the track update is to associate the text regions detected in the current frame with the existing text tracks, which can be regarded as a data matching problem. For this problem, the present invention designs a new data matching cost function, and the best matching result is obtained by solving this cost function. The track update is repeated until the whole video has been processed, yielding the final text tracking result. The present invention introduces a layout constraint into the data matching cost function and performs text tracking through the overall appearance structure among text regions, which effectively avoids erroneous tracking results caused by large camera motion and therefore achieves better tracking performance.

Description

Video text tracking method based on layout constraint
Technical field
The present invention relates to a text tracking method in the field of video processing, in particular for video captured in natural scenes.
Background art
Text in video carries high-level semantic information and is usually closely related to the video content. Video text extraction therefore plays an important role in many media-analysis applications, such as assistance systems for the blind, driver assistance systems, and autonomous mobile robots. Video text extraction generally comprises text detection and text tracking: text detection locates the text targets in a video frame image, and text tracking associates the same text region across consecutive images. Text in video usually exhibits temporal redundancy, i.e., it persists for a while before disappearing. By exploiting this property, text tracking can improve the stability and precision of video text detection. In addition, text tracking can provide other useful information for video analysis, such as the time points at which text appears and disappears in the video timeline and the motion trajectory of text over a period of time. Some real-time processing systems can also use the temporal redundancy of text in video to increase processing speed. Text tracking therefore plays an important role in video-analysis applications.
Existing video text tracking methods cannot handle the tracking of multiple text regions well when the camera moves over a large range. Text in natural scenes usually does not appear in isolation but in compact groups, and these text regions often share the same size, aspect ratio and color. The features extracted by most tracking algorithms cannot distinguish such regions reliably, which causes wrong matches and prevents the text from being tracked correctly. The problem becomes more severe under large camera motion.
To address this problem, the present invention proposes a video text tracking method based on layout constraints to solve multi-text tracking under large camera motion.
Summary of the invention
To solve the problem of tracking multiple text regions under large camera motion, the present invention proposes a video text tracking method based on layout constraints. The flow of the method is shown in Fig. 1. The input of the method is a video and the text detection results of its single-frame images; the output is the track of each text region in the video, i.e., its spatial information (position coordinates, width and height) in every frame. First, text tracks are initialized from the text-region detection results of the initial video frame; then the text tracks of the previous frame and the detection results of the current frame are fed into the proposed tracking method to update the text tracks. This process is repeated until the whole video has been processed, giving the final text tracking result. The core of the track update is to associate the text regions detected in the current frame with the existing text tracks, which can be regarded as a data matching problem. For this problem, the present invention designs a new data matching cost function, and the best matching result is obtained by solving this cost function. The present invention introduces a layout constraint into the data matching cost function and performs text tracking through the overall appearance structure among text regions, which effectively avoids erroneous tracking results caused by large camera motion and therefore achieves better tracking performance. The details of the invention are as follows.
1. Design of the data matching cost function
First, the text regions contained in the text tracks in the current frame are defined. The state of the i-th text region in the text tracks of frame t is s_t^i = (x_t^i, y_t^i, u_t^i, v_t^i, w_t^i, h_t^i, c_t^i), where (x_t^i, y_t^i) are the horizontal and vertical coordinates of the region center, (u_t^i, v_t^i) are the horizontal and vertical velocities of the region in the image, (w_t^i, h_t^i) are the width and height of the text region, and c_t^i is the color feature of the text region (the present invention extracts an RGB color histogram with 16 bins per channel, 48 bins in total for the three channels). The states of the text regions in all text tracks of frame t form the set S_t = {s_t^i}, where i ∈ N_t and N_t denotes the number of text regions in frame t.
For every two text regions, the correlation of their positions and velocities needs to be established. This correlation can be regarded as a structural constraint, given by formula (1), where r_t^{i,j} denotes the structural constraint between text regions i and j. All constraints of region i can be written as R_t^i = {r_t^{i,j}}, and the constraints of all text regions in frame t form R_t.
The tracking task is to match the text-region detection results onto the existing text tracks. Let the information of the p-th text-region detection result in frame t be d_t^p = (x_t^p, y_t^p, w_t^p, h_t^p), where (x_t^p, y_t^p) are the center coordinates of the detection result and (w_t^p, h_t^p) are its width and height. The set of all text-region detection results in frame t is D_t = {d_t^p}, where p ∈ M_t and M_t is the number of detected text regions.
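Formula (1) itself is not reproduced in this text, so the exact form of the structural constraint cannot be restated here. The following Python sketch only illustrates one plausible reading of the data structures defined above: the track state s_t^i, the detection result d_t^p, and a pairwise constraint stored as the differences in center position and velocity between two regions (the prediction step in the embodiment, which uses the stored center-coordinate differences (Δx, Δy), motivates this choice). All names are illustrative and are not defined by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrackState:
    """State s_t^i of one tracked text region."""
    x: float; y: float            # center coordinates
    u: float; v: float            # horizontal / vertical velocity in the image
    w: float; h: float            # width and height
    hist: np.ndarray = None       # 48-d normalized RGB histogram (16 bins x 3 channels)

@dataclass
class Detection:
    """Detection result d_t^p of one text region."""
    x: float; y: float            # center coordinates
    w: float; h: float            # width and height
    hist: np.ndarray = None

def structural_constraint(si: TrackState, sj: TrackState) -> np.ndarray:
    """One plausible form of r_t^{i,j}: relative center position and relative
    velocity between regions i and j (formula (1) is not reproduced in the text)."""
    return np.array([si.x - sj.x, si.y - sj.y, si.u - sj.u, si.v - sj.v])

def all_constraints(states):
    """R_t: pairwise constraints between all text regions of frame t."""
    return {(i, j): structural_constraint(si, sj)
            for i, si in enumerate(states)
            for j, sj in enumerate(states) if i != j}
```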
The present invention uses a binary variable a_{i,p} to indicate whether a text track and a text-region detection result are matched: a_{i,p} = 1 when text track i matches detection result p, and a_{i,p} = 0 otherwise. For the text tracks of frame t-1 and the text-region detection results of frame t, the data matching problem can be described by formula (2):
A = argmin C(S_{t-1}, R_{t-1}, D_t)    (2)
where A = {a_{i,p} | i ∈ N_{t-1}, p ∈ M_t}; in this method a text track is matched to at most one text-region detection result. C(S_{t-1}, R_{t-1}, D_t) denotes the cost over all possible pairings of text tracks and text-region detection results, and the best matching result is the assignment A that minimizes C(S_{t-1}, R_{t-1}, D_t).
In consecutive frames, the distances between text regions that share the same background do not change much; when the camera moves, a text region and the other text regions should keep a similar overall appearance. During text tracking, this method therefore considers both the similarity of a text region across consecutive frames and the similarity of the appearance of the other text regions associated with it. This similarity of the layout appearance around a text region across consecutive frames is the layout constraint. The cost function C(S_{t-1}, R_{t-1}, D_t) based on the layout constraint is given by formula (3):
where the first two terms denote the cost between text track i in frame t-1 and the text region p detected in frame t; in the present invention these costs are computed from the region size ratio and the overlap ratio, as shown in formulas (4) and (5).
In these formulas, (w_{t-1}^i, h_{t-1}^i) and (w_t^p, h_t^p) denote the width and height of text track i in frame t-1 and of detection result p in frame t, respectively; the intersection term is the overlap area of the minimum enclosing boxes of the two regions, and the union term is their merged area.
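Formulas (4) and (5) are likewise not reproduced here. The sketch below shows one common way to turn the region size ratio and the overlap ratio described above into matching costs between a track box and a detection box; the exact functional form used in the patent may differ.

```python
def size_cost(track, det):
    """Cost from the region size ratio (a plausible stand-in for formula (4)):
    ratio of widths times ratio of heights, each taken as min/max so the product
    lies in (0, 1]; identical sizes give cost 0."""
    rw = min(track.w, det.w) / max(track.w, det.w)
    rh = min(track.h, det.h) / max(track.h, det.h)
    return 1.0 - rw * rh

def overlap_cost(track, det):
    """Cost from the overlap ratio (a plausible stand-in for formula (5)):
    1 - intersection area / union area of the two bounding boxes."""
    ax1, ay1 = track.x - track.w / 2, track.y - track.h / 2
    ax2, ay2 = track.x + track.w / 2, track.y + track.h / 2
    bx1, by1 = det.x - det.w / 2, det.y - det.h / 2
    bx2, by2 = det.x + det.w / 2, det.y + det.h / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = track.w * track.h + det.w * det.h - inter
    return 1.0 - inter / union if union > 0 else 1.0
```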
The remaining term in formula (3) denotes the similarity between the appearance feature of the region predicted for detection result p in frame t through the structural constraint of frame t-1 and the appearance feature of the corresponding text track j in frame t-1, computed as shown in formulas (6) and (7).
In these formulas, H_b(s) denotes the normalized RGB color histogram feature, B is the total number of bins, b is the bin index, and the prediction region is described by its center coordinates together with its width and height.
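Formulas (6) and (7) are not reproduced in this text either. The sketch below builds the 48-dimensional normalized RGB histogram described above (16 bins per channel) and compares two such histograms with the Bhattacharyya coefficient, a standard stand-in for histogram similarity; the patent's exact similarity measure may differ.

```python
import numpy as np

def rgb_histogram(patch: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized RGB color histogram of an image patch (H x W x 3, uint8):
    16 bins per channel, concatenated into a 48-d feature as described above."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(patch[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist.astype(np.float64))
    feat = np.concatenate(feats)
    total = feat.sum()
    return feat / total if total > 0 else feat

def histogram_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Bhattacharyya coefficient between two normalized histograms
    (an assumed stand-in for the similarity of formulas (6)-(7)); 1 = identical."""
    return float(np.sum(np.sqrt(h1 * h2)))
```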
2. Cost function optimization and solution
To simplify computation, the present invention constrains the matching between tracks and detection results with formulas (8) and (9); when the conditions are not satisfied, the pair is regarded as a_{i,p} = 0. In formulas (8) and (9), s_a and s_b denote the states of two text regions; when the distance between the regions or their relative velocity is too large, the two are considered unmatchable. In the present invention the threshold τ is set to 10.
Finally, the cost values of all track-detection pairs are computed according to formula (3), giving an N_{t-1} × M_t similarity matrix. The best matching result can then be computed with the method of reference 1, "Kuhn H W. The Hungarian method for the assignment problem [J]. Naval Research Logistics, 1955, (1-2): 83-97." The result is a 2 × Q matrix that pairs text-track indices with text-region detection indices, where Q is the number of matches. Using this matching matrix, the new spatial information (position coordinates, width and height) of the existing text tracks in the current frame can be updated, i.e., the text-region tracking of the current frame is completed. For example, suppose frame t-1 has 3 tracked text tracks and frame t has 3 detected text regions, and the matching matrix computed by the algorithm of the invention is as shown in (10):

    1 2 3
    2 1 3      (10)

The first row of the matrix gives the text-track index and the second row the index of the matched text-region detection result: the 1st text track corresponds to the 2nd detected text region, the 2nd text track to the 1st detected text region, and the 3rd text track to the 3rd detected text region. According to this matching matrix, the coordinates and the width and height of the 3 text regions detected in frame t replace the spatial information of the corresponding text tracks, completing the update of the text tracks of frame t.
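Putting the pieces together, the assignment step described above (gate infeasible pairs, fill an N_{t-1} x M_t cost matrix, solve it with the Hungarian method of reference 1, and return a 2 x Q matching matrix) can be sketched as follows with scipy's linear_sum_assignment. The placeholder cost of 999 and the 2 x Q result format follow the text; pair_cost and gate are callbacks standing in for the layout-constrained cost of formula (3) and the constraints of formulas (8) and (9), which are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG_COST = 999.0   # placeholder cost for pairs rejected by the gating constraints

def match_tracks_to_detections(tracks, detections, pair_cost, gate):
    """Return a 2 x Q matching matrix: row 0 = track indices, row 1 = detection indices.

    pair_cost(track, det) -> float   # layout-constrained cost of formula (3) (not shown)
    gate(track, det) -> bool         # constraints of formulas (8)/(9); False = infeasible
    """
    n, m = len(tracks), len(detections)
    cost = np.full((n, m), BIG_COST)
    for i, trk in enumerate(tracks):
        for p, det in enumerate(detections):
            if gate(trk, det):
                cost[i, p] = pair_cost(trk, det)

    rows, cols = linear_sum_assignment(cost)     # Hungarian method (reference 1)
    # Keep only pairs that actually passed the gate; indices here are 0-based,
    # whereas the patent's example matrix uses 1-based indices.
    pairs = [(i, p) for i, p in zip(rows, cols) if cost[i, p] < BIG_COST]
    if not pairs:
        return np.zeros((2, 0), dtype=int)
    return np.array(pairs, dtype=int).T          # shape 2 x Q
```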
3. Beneficial effects
The present invention can accurately track the text tracks in a video under large camera motion. The invention was tested on the Minetto database, a well-known benchmark in the text tracking field, which contains 5 scene-text videos with a frame resolution of 640 × 480. In the test phase, the video and the text detection results of every frame image are input to the proposed tracking algorithm, and the algorithm outputs the track of each text region in the video, i.e., its spatial information (position coordinates, width and height) in every frame. The effectiveness of the algorithm is measured with three well-known evaluation metrics: multi-object tracking precision (MOTP), multi-object tracking accuracy (MOTA) and the number of identity switches (IDS). Compared with the method of reference 2, "Pei W Y, Yang C, Meng L Y, et al. Scene Video Text Tracking With Graph Matching [J]. IEEE Access, 2018, 6: 19419-19426.", the proposed layout-constraint-based video text tracking method improves performance on the Minetto database considerably: MOTP improves by 6%, MOTA improves by 19%, and IDS improves twofold.
Description of the drawings
Fig. 1 is the flow chart of the video text tracking method based on layout constraint.
Specific embodiment
Referring to Fig. 1, the specific steps of the proposed video text tracking method based on layout constraints are as follows:
Step 1: Input the video and the text detection results
The present invention is built on top of video text detection results. Text detection can be carried out online or offline. For online detection, the video is input first; text is then detected frame by frame (or with frame skipping), the detection result is fed into the present invention for text tracking, and detection then proceeds to the next frame. This process is repeated until the whole video has been processed. For offline detection, the video is input first and text detection is run until the whole video has been processed; the video and the detection results of every frame are then fed into the present invention for text tracking. The proposed tracking method can be applied to both online and offline detection; a sketch of the online loop is given below.
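For the online case, the overall loop might look like the minimal sketch below; detect_text, init_tracks (Step 2) and update_tracks (Step 3) are placeholders for the detector and for the steps described in the following subsections, not functions defined by the patent.

```python
def track_video_online(frames, detect_text, init_tracks, update_tracks):
    """Online pipeline: detect text frame by frame and update the tracks immediately."""
    tracks, constraints = None, None
    for t, frame in enumerate(frames, start=1):
        detections = detect_text(frame)          # Step 1: per-frame text detector (placeholder)
        if tracks is None:
            tracks, constraints = init_tracks(detections)                       # Step 2
        else:
            tracks, constraints = update_tracks(tracks, constraints, detections)  # Step 3
        # Per-frame track output (position coordinates, width and height) would be
        # collected here before moving on to the next frame.
    return tracks
```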
Step 2: Text track initialization
Track initialization is performed on the detection results of the first video frame. Each detected text region is regarded as a new text track and given an index number, and the states S_t of all text regions are computed, with the velocity (u_t, v_t) in each state initialized to (0, 0), t = 1. The pairwise structural constraints R_t between text regions are then computed according to formula (1), t = 1. Constraints that do not satisfy formulas (8) and (9) are removed, and the remaining structural constraints R_1 and the text track states S_1 are recorded.
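A minimal sketch of this initialization step, reusing the illustrative TrackState and all_constraints helpers from the earlier sketch; velocities start at (0, 0), and constraints failing an assumed gating test (standing in for formulas (8) and (9)) are dropped.

```python
def initialize_tracks(first_frame_detections, gate_constraint=lambda r: True):
    """Step 2: create one track per detection of the first frame (t = 1)."""
    tracks = [TrackState(x=det.x, y=det.y, u=0.0, v=0.0,
                         w=det.w, h=det.h, hist=det.hist)
              for det in first_frame_detections]
    # Pairwise structural constraints R_1, keeping only those that pass the gate.
    constraints = {key: r for key, r in all_constraints(tracks).items()
                   if gate_constraint(r)}
    return tracks, constraints
```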
Step 3: Text track update
In the track update stage, the text-region detection results of frame t are matched against the existing text tracks of frame t-1, and the spatial information (position coordinates, width and height) carried by each matched detection result replaces the spatial information of the text region in the corresponding text track. The input of this stage is the text track states S_{t-1} and structural constraints R_{t-1} of frame t-1 together with the text-region detection results D_t of frame t; the output is the updated text track information.
Step 3.1: Data Matching
The text tracks of frame t-1 are matched against the text-region detection results of frame t, forming N_{t-1} × M_t candidate pairs in total. An N_{t-1} × M_t similarity matrix is then obtained by computing the cost value of every pair with formula (3). Before formula (3) is evaluated, the constraint of formula (8) is checked first; when the condition is not satisfied, the computation of formula (3) is skipped and the cost value of that pair is set to 999. The best matching result can then be computed with the method of reference 1, "Kuhn H W. The Hungarian method for the assignment problem [J]. Naval Research Logistics, 1955, (1-2): 83-97." The result is a 2 × Q matrix that pairs text-track indices with text-region detection indices, where Q is the number of matches.
Step 3.2: Update matched tracks
If a text track matches a text-region detection result of the current frame, the Kalman filter algorithm of reference 3, "An Introduction to the Kalman Filter [J]. 1995.", is used to update the track state (x, y, u, v, w, h) with the detection result (x_t^p, y_t^p, w_t^p, h_t^p), and the normalized color histogram of the text region is updated at the same time, giving the new state S_t.
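Reference 3 describes the standard Kalman filter; the sketch below is a minimal constant-velocity filter over the state (x, y, u, v, w, h) with measurement (x, y, w, h), which is the update described in this step. The noise covariances are illustrative placeholders, not values from the patent, and the track's color histogram would be refreshed from the matched detection alongside this update.

```python
import numpy as np

class TextTrackKalman:
    """Constant-velocity Kalman filter for one text track: state (x, y, u, v, w, h)."""

    def __init__(self, x, y, w, h, dt=1.0):
        self.s = np.array([x, y, 0.0, 0.0, w, h], dtype=float)    # state mean
        self.P = np.eye(6) * 10.0                                 # state covariance (placeholder)
        self.F = np.eye(6); self.F[0, 2] = dt; self.F[1, 3] = dt  # x += u*dt, y += v*dt
        self.H = np.zeros((4, 6))
        self.H[0, 0] = self.H[1, 1] = 1.0                         # measure x, y
        self.H[2, 4] = self.H[3, 5] = 1.0                         # measure w, h
        self.Q = np.eye(6) * 1e-2                                 # process noise (placeholder)
        self.R = np.eye(4) * 1.0                                  # measurement noise (placeholder)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s

    def update(self, det):
        """Correct the state with a matched detection (x, y, w, h)."""
        z = np.array([det.x, det.y, det.w, det.h], dtype=float)
        y = z - self.H @ self.s                                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                  # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.s
```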
Step 3.3: Update unmatched tracks
Existing text detection algorithms often miss detections, so a text track may fail to be matched to any text-region detection result. In this case the updated text track states S_t and the structural constraints R_{t-1} of frame t-1 are used to predict the unmatched text track with formula (11), where N_r is the number of matched text tracks, (x, y) are the center coordinates of a text region, and (Δx, Δy) is the center-coordinate difference stored in the structural constraint. The predicted region center replaces the old coordinates of the unmatched text track, and the number of such replacements is recorded; when the replacement count exceeds 3, the text track is considered to have disappeared and its information is deleted from the text tracks.
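Formula (11) is not reproduced in the published text. From the surrounding description (an average over the N_r matched tracks that uses the center-coordinate differences (Δx, Δy) stored in the structural constraints), one plausible reading is sketched below; this is an assumption about the exact form, not the patent's verbatim formula.

```python
def predict_unmatched_center(unmatched_idx, matched_indices, tracks, constraints):
    """Predict the center of unmatched track i from the matched tracks j and the
    stored offsets r^{i,j} = (dx, dy, ...): average of (x_j + dx, y_j + dy)."""
    xs, ys = [], []
    for j in matched_indices:
        r = constraints.get((unmatched_idx, j))
        if r is None:
            continue
        dx, dy = r[0], r[1]
        xs.append(tracks[j].x + dx)
        ys.append(tracks[j].y + dy)
    if not xs:                       # no usable constraint: keep the old position
        t = tracks[unmatched_idx]
        return t.x, t.y
    return sum(xs) / len(xs), sum(ys) / len(ys)
```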
Step 3.4: Initialize new tracks
If a text-region detection result p of frame t fails to match any text track, a new text track is considered to have appeared. A new track state is created from the detection result (with the velocity initialized to (0, 0), as in Step 2) and added to the existing tracks.
Step 3.5: Update the structural constraints between text tracks
The structural constraints between every pair of text tracks are computed according to formula (1). Constraints that do not satisfy formulas (8) and (9) are removed, and the remaining structural constraints R_t are recorded.
Step 3.6: Output the updated tracks
The updated spatial information (position coordinates, width and height) of the text tracks of frame t is output, and the survival count (number of frames each track has existed) of every track is updated and recorded.
Step 4: Output the text track information
Step 3 is repeated until the whole video has been processed. According to the temporal redundancy of text in video, text usually persists for a period of time before disappearing. The present invention uses this property to filter out non-text regions: when the survival count of a track is less than or equal to 15 frames, the track is judged to be a non-text region and its track information is deleted. After this filtering, the remaining text track information is output as the final result.
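A minimal sketch of this final filtering step, assuming each finished track records the number of frames it survived in an illustrative "lifetime" field:

```python
MIN_LIFETIME = 15   # tracks alive for <= 15 frames are treated as non-text (per the text)

def filter_tracks(finished_tracks):
    """Step 4: drop short-lived tracks, keep the rest as the final tracking result."""
    return [trk for trk in finished_tracks if trk["lifetime"] > MIN_LIFETIME]
```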

Claims (1)

1. A video text tracking method based on layout constraints, whose core consists of two parts: the data matching cost function, and the optimization and solution of the cost function.
(1) Data Matching cost function:
The text regions contained in the text tracks in the current frame are defined first. The state of the i-th text region in the text tracks of frame t is s_t^i = (x_t^i, y_t^i, u_t^i, v_t^i, w_t^i, h_t^i, c_t^i), where (x_t^i, y_t^i) are the horizontal and vertical coordinates of the region center, (u_t^i, v_t^i) are the horizontal and vertical velocities of the region in the image, (w_t^i, h_t^i) are the width and height of the text region, and c_t^i is the color feature of the text region (the present invention extracts an RGB color histogram with 16 bins per channel, 48 bins in total for the three channels). The states of the text regions in all text tracks of frame t form the set S_t = {s_t^i}, where i ∈ N_t and N_t denotes the number of text regions in frame t.
For every two text regions, the correlation of their positions and velocities needs to be established. This correlation can be regarded as a structural constraint, given by formula (1), where r_t^{i,j} denotes the structural constraint between text regions i and j; all constraints of region i can be written as R_t^i = {r_t^{i,j}}, and the constraints of all text regions in frame t form R_t.
The tracking task is to match the text-region detection results onto the existing text tracks. Let the information of the p-th text-region detection result in frame t be d_t^p = (x_t^p, y_t^p, w_t^p, h_t^p), where (x_t^p, y_t^p) are the center coordinates of the detection result and (w_t^p, h_t^p) are its width and height. The set of all text-region detection results in frame t is D_t = {d_t^p}, where p ∈ M_t and M_t is the number of detected text regions.
The present invention uses a binary variable a_{i,p} to indicate whether a text track and a text-region detection result are matched: a_{i,p} = 1 when text track i matches detection result p, and a_{i,p} = 0 otherwise. For the text tracks of frame t-1 and the text-region detection results of frame t, the data matching problem can be described by formula (2):
A = argmin C(S_{t-1}, R_{t-1}, D_t)    (2)
where A = {a_{i,p} | i ∈ N_{t-1}, p ∈ M_t}; in this method a text track is matched to at most one text-region detection result. C(S_{t-1}, R_{t-1}, D_t) denotes the cost over all possible pairings of text tracks and text-region detection results, and the optimal matching result is the assignment A that minimizes C(S_{t-1}, R_{t-1}, D_t).
In consecutive frames, the distances between text regions that share the same background do not change much; when the camera moves, a text region and the other text regions should keep a similar overall appearance. During text tracking, this method therefore considers both the similarity of a text region across consecutive frames and the similarity of the appearance of the other text regions associated with it. This similarity of the layout appearance around a text region across consecutive frames is the layout constraint. The cost function C(S_{t-1}, R_{t-1}, D_t) based on the layout constraint is given by formula (3):
where the first two terms denote the cost between text track i in frame t-1 and the text region p detected in frame t; in the present invention these costs are computed from the region size ratio and the overlap ratio, as shown in formulas (4) and (5).
In these formulas, (w_{t-1}^i, h_{t-1}^i) and (w_t^p, h_t^p) denote the width and height of text track i in frame t-1 and of detection result p in frame t, respectively; the intersection term is the overlap area of the minimum enclosing boxes of the two regions, and the union term is their merged area.
The remaining term in formula (3) denotes the similarity between the appearance feature of the region predicted for detection result p in frame t through the structural constraint of frame t-1 and the appearance feature of the corresponding text track j in frame t-1, computed as shown in formulas (6) and (7).
In these formulas, H_b(s) denotes the normalized RGB color histogram feature, B is the total number of bins, b is the bin index, and the prediction region is described by its center coordinates together with its width and height.
(2) cost function optimization and solution:
To simplify computation, the present invention constrains the matching between tracks and detection results with formulas (8) and (9); when the conditions are not satisfied, the pair is regarded as a_{i,p} = 0. In formulas (8) and (9), s_a and s_b denote the states of two text regions; when the distance between the regions or their relative velocity is too large, the two are considered unmatchable. In the present invention the threshold τ is set to 10.
Finally, the cost values of all track-detection pairs are computed according to formula (3), giving an N_{t-1} × M_t similarity matrix. The best matching result can then be computed with the method of reference 1, "Kuhn H W. The Hungarian method for the assignment problem [J]. Naval Research Logistics, 1955, (1-2): 83-97." The result is a 2 × Q matrix that pairs text-track indices with text-region detection indices, where Q is the number of matches. Using this matching matrix, the new spatial information (position coordinates, width and height) of the existing text tracks in the current frame can be updated, i.e., the text-region tracking of the current frame is completed.
CN201910006843.5A 2019-01-04 2019-01-04 Video character tracking method based on layout constraint Active CN109800757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910006843.5A CN109800757B (en) 2019-01-04 2019-01-04 Video character tracking method based on layout constraint


Publications (2)

Publication Number Publication Date
CN109800757A (en) 2019-05-24
CN109800757B CN109800757B (en) 2022-04-19

Family

ID=66558550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910006843.5A Active CN109800757B (en) 2019-01-04 2019-01-04 Video character tracking method based on layout constraint

Country Status (1)

Country Link
CN (1) CN109800757B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021164479A1 (en) * 2020-02-21 2021-08-26 华为技术有限公司 Video text tracking method and electronic device
CN114463376A (en) * 2021-12-24 2022-05-10 北京达佳互联信息技术有限公司 Video character tracking method and device, electronic equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101276416A (en) * 2008-03-10 2008-10-01 北京航空航天大学 Text tracking and multi-frame reinforcing method in video
TW201039149A (en) * 2009-04-17 2010-11-01 Yu-Chieh Wu Robust algorithms for video text information extraction and question-answer retrieval
WO2015165524A1 (en) * 2014-04-30 2015-11-05 Longsand Limited Extracting text from video
CN104244073A (en) * 2014-09-26 2014-12-24 北京大学 Automatic detecting and recognizing method of scroll captions in videos
CN107545210A (en) * 2016-06-27 2018-01-05 北京新岸线网络技术有限公司 A kind of method of video text extraction
CN108052941A (en) * 2017-12-19 2018-05-18 北京奇艺世纪科技有限公司 A kind of news caption tracking and device
CN108229476A (en) * 2018-01-08 2018-06-29 北京奇艺世纪科技有限公司 Title area detection method and system
CN108256493A (en) * 2018-01-26 2018-07-06 中国电子科技集团公司第三十八研究所 A kind of traffic scene character identification system and recognition methods based on Vehicular video
CN108363981A (en) * 2018-02-28 2018-08-03 北京奇艺世纪科技有限公司 A kind of title detection method and device
CN108694393A (en) * 2018-05-30 2018-10-23 深圳市思迪信息技术股份有限公司 A kind of certificate image text area extraction method based on depth convolution

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIANG WU et al.: "A New Technique for Multi-Oriented Scene Text Line Detection and Tracking in Video", IEEE Transactions on Multimedia *
XU-CHENG YIN et al.: "Text Detection, Tracking and Recognition in Video: A Comprehensive Survey", IEEE Transactions on Image Processing *
DIAO YUEHUA: "Design and Implementation of a Web Video Subtitle Extraction and Recognition System", China Master's Theses Full-text Database, Information Science and Technology *
MI CONGJIE et al.: "Video Text Tracking and Segmentation Algorithm Based on Multi-Frame Images", Journal of Computer Research and Development *


Also Published As

Publication number Publication date
CN109800757B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN106096577B (en) A kind of target tracking method in camera distribution map
CN109522854B (en) Pedestrian traffic statistical method based on deep learning and multi-target tracking
CN106097391B (en) A kind of multi-object tracking method of the identification auxiliary based on deep neural network
CN109919981A (en) A kind of multi-object tracking method of the multiple features fusion based on Kalman filtering auxiliary
CN109816689A (en) A kind of motion target tracking method that multilayer convolution feature adaptively merges
CN110490901A (en) The pedestrian detection tracking of anti-attitudes vibration
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN105940430B (en) Personnel's method of counting and its device
CN107424171A (en) A kind of anti-shelter target tracking based on piecemeal
CN108665485A (en) A kind of method for tracking target merged with twin convolutional network based on correlation filtering
CN107784663A (en) Correlation filtering tracking and device based on depth information
CN110390292A (en) Based on the remote sensing video frequency vehicle object detecting and tracking method for dynamically associating model
CN105654516B (en) Satellite image based on target conspicuousness is to ground weak moving target detection method
CN109974721A (en) A kind of vision winding detection method and device based on high-precision map
CN104992453A (en) Target tracking method under complicated background based on extreme learning machine
CN108830171A (en) A kind of Intelligent logistics warehouse guide line visible detection method based on deep learning
CN110490905A (en) A kind of method for tracking target based on YOLOv3 and DSST algorithm
CN107622507B (en) Air target tracking method based on deep learning
CN109800757A (en) A kind of video text method for tracing based on layout constraint
CN103853794B (en) Pedestrian retrieval method based on part association
CN110443829A (en) It is a kind of that track algorithm is blocked based on motion feature and the anti-of similarity feature
CN109816693A (en) Anti- based on multimodal response blocks correlation filtering tracking and systems/devices
CN109297489A (en) A kind of indoor navigation method based on user characteristics, electronic equipment and storage medium
CN109740609A (en) A kind of gauge detection method and device
CN106204633A (en) A kind of student trace method and apparatus based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Feng Xiaoyi, Song Zhendong, Wang Xihan, Jiang Xiaoyue, Xia Zhaoqiang, Peng Jinye, Xie Hongmei, Li Huifang, He Guiqing

Inventor before: Feng Xiaoyi, Wang Xihan, Jiang Xiaoyue, Xia Zhaoqiang, Peng Jinye, Xie Hongmei, Li Huifang, He Guiqing, Song Zhendong
GR01 Patent grant