CN107679185A - Intelligent traffic video retrieval system - Google Patents

Intelligent traffic video retrieval system

Info

Publication number
CN107679185A
CN107679185A
Authority
CN
China
Prior art keywords
video
fingerprint
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710912998.6A
Other languages
Chinese (zh)
Inventor
黄信文 (Huang Xinwen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shengda Machine Design Co Ltd
Original Assignee
Shenzhen Shengda Machine Design Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shengda Machine Design Co Ltd filed Critical Shenzhen Shengda Machine Design Co Ltd
Priority to CN201710912998.6A
Publication of CN107679185A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Abstract

The invention provides an intelligent traffic video retrieval system comprising a video monitoring device, a geographic-information embedding module, a storage module, a video retrieval module and a video playback module. The video monitoring device records video of traffic conditions and obtains the corresponding geographic information; the geographic-information embedding module establishes the correspondence between the geographic information and the recorded video, producing video with embedded geographic information; the storage module stores the video with embedded geographic information; the video retrieval module retrieves video with embedded geographic information; the video playback module plays back the retrieved video. The beneficial effects of the invention are that intelligent retrieval of traffic video is realized and the level of traffic monitoring is improved.

Description

Intelligent traffic video retrieval system
Technical field
The present invention relates to the technical field of intelligent transportation, and in particular to an intelligent traffic video retrieval system.
Background technology
With the development of computer technology, the volume of traffic video data keeps growing, and existing traffic video retrieval systems cannot retrieve traffic video effectively.
Summary of the invention
In view of the above problem, the present invention aims to provide an intelligent traffic video retrieval system.
The purpose of the present invention is achieved by the following technical solution:
An intelligent traffic video retrieval system is provided, comprising a video monitoring device, a geographic-information embedding module, a storage module, a video retrieval module and a video playback module;
The video monitoring device is used to record video of traffic conditions and to obtain the corresponding geographic information;
The geographic-information embedding module is used to establish the correspondence between the geographic information and the recorded video, producing video with embedded geographic information;
The storage module is used to store the video with embedded geographic information;
The video retrieval module is used to retrieve the video with embedded geographic information;
The video playback module is used to play back the retrieved video with embedded geographic information.
The beneficial effects of the present invention are: intelligent retrieval of traffic video is realized, and the level of traffic monitoring is improved.
Brief description of the drawings
The invention is further described with reference to the accompanying drawing; the embodiment shown in the drawing does not limit the present invention in any way, and a person of ordinary skill in the art can obtain other drawings from it without creative work.
Fig. 1 is a structural schematic diagram of the present invention;
Reference numerals:
video monitoring device 1, geographic-information embedding module 2, storage module 3, video retrieval module 4, video playback module 5.
Embodiment
The invention is further described with reference to the following embodiment.
Referring to Fig. 1, the intelligent traffic video retrieval system of this embodiment comprises a video monitoring device 1, a geographic-information embedding module 2, a storage module 3, a video retrieval module 4 and a video playback module 5;
The video monitoring device 1 is used to record video of traffic conditions and to obtain the corresponding geographic information;
The geographic-information embedding module 2 is used to establish the correspondence between the geographic information and the recorded video, producing video with embedded geographic information;
The storage module 3 is used to store the video with embedded geographic information;
The video retrieval module 4 is used to retrieve the video with embedded geographic information;
The video playback module 5 is used to play back the retrieved video with embedded geographic information.
This embodiment realizes intelligent retrieval of traffic video and improves the level of traffic monitoring.
Preferably, the video monitoring device 1 comprises a camera and a GPS chip; the camera is used to capture traffic video, and the GPS chip is used to obtain the geographic information of the corresponding video.
In this preferred embodiment the GPS chip can obtain geographic information anywhere in the world, and the geographic information obtained is more accurate.
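The following is a minimal Python sketch of how the geographic-information embedding, storage and retrieval modules of Fig. 1 could fit together; the record layout, class names and the bounding-box query are illustrative assumptions and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class GeoTaggedVideo:
    """A recorded video together with its embedded geographic information
    (hypothetical record layout)."""
    video_path: str
    latitude: float
    longitude: float
    start_time: str  # e.g. "2017-09-30T08:00:00"

class VideoStore:
    """Stores geo-tagged videos and retrieves them by location, mirroring the
    storage module 3 and video retrieval module 4 of Fig. 1."""

    def __init__(self) -> None:
        self._videos: list[GeoTaggedVideo] = []

    def add(self, video: GeoTaggedVideo) -> None:
        self._videos.append(video)

    def retrieve_near(self, lat: float, lon: float, radius_deg: float) -> list[GeoTaggedVideo]:
        # simple bounding-box search over the embedded coordinates
        return [v for v in self._videos
                if abs(v.latitude - lat) <= radius_deg
                and abs(v.longitude - lon) <= radius_deg]
```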
Preferably, the camera is a high-definition camera.
This preferred embodiment obtains clearer traffic video.
Preferably, the video retrieval module 4 comprises a first video-fingerprint extraction submodule, a second video-fingerprint matching submodule and a third fingerprint performance evaluation submodule. The first video-fingerprint extraction submodule is used to extract the features of a video and generate a video fingerprint; the second video-fingerprint matching submodule is used to compare, according to their video fingerprints, whether the content of two videos is consistent, and to obtain the videos matching the query video; the third fingerprint performance evaluation submodule is used to evaluate the performance of the second video-fingerprint matching submodule.
The first video-fingerprint extraction submodule comprises a first video decoding unit, a second feature extraction unit and a third fingerprint modeling unit. The first video decoding unit is used to decode the original video sequence to obtain a YUV sequence; the second feature extraction unit is used to extract video features from the YUV sequence; the third fingerprint modeling unit is used to build a fingerprint model from the extracted video features and obtain the video fingerprint;
The second feature extraction unit comprises a first feature-extraction subunit and a second frame-rate conversion subunit. The first feature-extraction subunit is used to extract video features at the original frame rate, and the second frame-rate conversion subunit is used to convert the video from the original frame rate to a fixed frame rate;
The first feature-extraction subunit extracts the video features at the original frame rate as follows: a) extract the luminance information Y from the YUV sequence to form a new video sequence; b) assuming the video resolution is M × N, the geometric center of each video frame is (M/2, N/2); take the geometric center of the video frame as the coordinate origin O, and let fk(x, y) be the luminance value of the k-th video frame at position (x, y) relative to O, with fk(x, y) taking values in [0, 255]; the feature center (EHxk, EHyk) of each video frame is computed from fk(x, y) as
EHxk = √((Σx·fk(x,y) / Σfk(x,y))² + 2) + Σx·fk(x,y) / (20·Σfk(x,y))
EHyk = √((Σy·fk(x,y) / Σfk(x,y))² + 2) + Σy·fk(x,y) / (20·Σfk(x,y))
c) based on the feature center, the feature-center angle βk of the k-th video frame is computed; the feature-center angles of all video frames of the whole video sequence are computed and assembled into a one-dimensional feature vector β = [β1, β2, …, βK], where K is the number of video frames in the video sequence;
In this preferred embodiment the first feature-extraction subunit extracts the luminance information of the video and establishes the feature center of each video frame, which realizes effective extraction of the video features; by establishing the feature-center angle of each frame, the video features can be represented conveniently and intuitively.
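A minimal Python sketch of this feature-extraction step is given below. The feature-center formula follows the expression reproduced in claim 5; the feature-center angle βk is not defined by a formula in this text, so the arctangent used here is only an illustrative assumption.

```python
import numpy as np

def feature_center(frame_y: np.ndarray) -> tuple[float, float]:
    """Compute the feature center (EHxk, EHyk) of one luminance (Y) frame,
    following the formula reproduced in claim 5. Pixel coordinates are taken
    relative to the geometric center of the frame (the coordinate origin O)."""
    m, n = frame_y.shape
    ys, xs = np.mgrid[0:m, 0:n]
    xs = xs - n / 2.0          # shift so the geometric center becomes the origin O
    ys = ys - m / 2.0
    f = frame_y.astype(np.float64)
    s = f.sum()
    sx = (xs * f).sum()
    sy = (ys * f).sum()
    eh_x = np.sqrt((sx / s) ** 2 + 2) + sx / (20.0 * s)
    eh_y = np.sqrt((sy / s) ** 2 + 2) + sy / (20.0 * s)
    return eh_x, eh_y

def feature_center_angles(frames_y: list[np.ndarray]) -> np.ndarray:
    """Build the one-dimensional feature vector beta = [beta_1, ..., beta_K].
    The text does not reproduce the angle formula; arctan2(EHyk, EHxk) is used
    here purely as an assumed stand-in for beta_k."""
    betas = []
    for frame in frames_y:
        eh_x, eh_y = feature_center(frame)
        betas.append(np.arctan2(eh_y, eh_x))  # assumed definition of beta_k
    return np.array(betas)
```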
Preferably, the second frame-rate conversion subunit converts the video from the original frame rate to a fixed frame rate as follows: a) let the frame rate of the original video sequence be Q and the fixed frame rate after conversion be P; the feature-center angle θi of the i-th frame at frame rate P is obtained from the feature-center angles βk and βk+1 of two consecutive frames at frame rate Q by the conversion formula θi = (1 - μ2)βk + μ2·βk+1, where μ2 is the interpolation coefficient; b) the feature-center angles of all converted video frames form a one-dimensional feature vector θ = [θ1, θ2, …, θM], where M is the number of video frames at frame rate P; the feature vector θ is the extracted video feature;
In this preferred embodiment the second frame-rate conversion subunit first extracts the video features and then converts the obtained feature-center angles directly by a linear method, which reduces the amount of computation and improves conversion efficiency, thereby increasing the feature-extraction speed.
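A sketch of the frame-rate conversion, assuming that μ2 is the fractional position of the target frame between the two neighbouring source frames; the formula for μ2 is not reproduced in this text, so that choice is an assumption.

```python
import numpy as np

def convert_frame_rate(betas: np.ndarray, q: float, p: float) -> np.ndarray:
    """Convert feature-center angles from the original frame rate Q to the fixed
    frame rate P by the linear rule theta_i = (1 - mu2)*beta_k + mu2*beta_{k+1}.
    mu2 is taken as the fractional position of frame i between source frames
    k and k+1 (an assumption, since its formula is not reproduced in the text)."""
    k_count = len(betas)
    duration = k_count / q                 # video duration in seconds
    m_count = int(np.floor(duration * p))  # number of frames M at frame rate P
    thetas = np.empty(m_count)
    for i in range(m_count):
        t = i * q / p                      # position of frame i on the source timeline
        k = min(int(np.floor(t)), k_count - 2)
        mu2 = t - k                        # assumed interpolation coefficient
        thetas[i] = (1 - mu2) * betas[k] + mu2 * betas[k + 1]
    return thetas
```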
Preferably, the third fingerprint modeling unit builds the fingerprint model from the extracted video features as follows: a) compute the feature-center relative angle γi from θi, θi+1 and θi+2; b) compute the feature-center relative angles of the whole video sequence and build the video fingerprint model from them: the video fingerprint is γ = [γ1, γ2, …, γM-2];
In this preferred embodiment the third fingerprint modeling unit processes the feature information and builds the video fingerprint, which facilitates the subsequent fingerprint matching.
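A short sketch of the fingerprint-modeling step. The operators of the relative-angle formula are not legible in this text, so the second difference of consecutive feature-center angles used below is only an assumed stand-in that combines θi, θi+1 and θi+2.

```python
import numpy as np

def build_fingerprint(thetas: np.ndarray) -> np.ndarray:
    """Build the video fingerprint gamma = [gamma_1, ..., gamma_{M-2}] from the
    feature-center angles theta. Each gamma_i combines theta_i, theta_{i+1} and
    theta_{i+2}; the second difference below is an assumed stand-in for the
    relative-angle formula, which is not legible in the text."""
    return thetas[2:] - 2.0 * thetas[1:-1] + thetas[:-2]
```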
Preferably, the second video-fingerprint matching submodule comprises a first matching unit, a second matching unit and a composite matching unit; the first matching unit is used to compute the first matching value between video fingerprints, the second matching unit is used to compute the second matching value between video fingerprints, and the composite matching unit is used to determine the degree of video matching from the first and second matching values;
The first matching value is computed between the fingerprint of the query video and the fingerprint ω = [ω1, ω2, …, ωM-2] of any video in the video database;
The second matching value between the two video fingerprints is then computed;
The degree of video matching is determined by a matching coefficient computed from the first and second matching values; if the matching coefficient is below a set threshold, the two videos are considered to match; otherwise the videos do not match and the search of the video database continues;
In this preferred embodiment the second video-fingerprint matching submodule determines the matching coefficient of two videos from the first and second matching values between their fingerprints, so the matching result is more accurate.
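A sketch of the matching step. The formulas for the two matching values and for the matching coefficient are not reproduced in this text, so the mean absolute difference, the root-mean-square difference and their average below are hypothetical stand-ins; only the thresholding behaviour (two videos match when the coefficient is below a set threshold) follows the description.

```python
import numpy as np

def matching_coefficient(query_fp: np.ndarray, db_fp: np.ndarray) -> float:
    """Combine a first and a second matching value into one matching coefficient.
    Both distances and their combination are assumed stand-ins for the
    unreproduced formulas."""
    first = float(np.abs(query_fp - db_fp).mean())             # stand-in first matching value
    second = float(np.sqrt(((query_fp - db_fp) ** 2).mean()))  # stand-in second matching value
    return 0.5 * (first + second)                              # assumed combination

def find_matches(query_fp: np.ndarray,
                 database: dict[str, np.ndarray],
                 threshold: float) -> list[str]:
    """Return the database videos whose matching coefficient with the query is
    below the set threshold, as described for the composite matching unit."""
    return [video_id for video_id, fp in database.items()
            if matching_coefficient(query_fp, fp) < threshold]
```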
Preferably, the third fingerprint performance evaluation submodule evaluates the performance of the second video-fingerprint matching submodule by means of an evaluation factor FS, which is computed from LG1, the number of retrieved videos whose content is consistent with the query video; LG2, the number of videos in the database whose content is consistent with the query video; LG3, the number of retrieved videos whose content is inconsistent with the query video; and LG4, the number of videos in the database whose content is inconsistent with the query video. The larger the evaluation factor, the better the performance of the second video-fingerprint matching submodule.
In this preferred embodiment the third fingerprint performance evaluation submodule evaluates the matching results of the second video-fingerprint matching submodule, which guarantees the performance of the second video-fingerprint matching submodule.
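The evaluation-factor formula itself is not reproduced in this text; the sketch below uses an F1-style combination of precision and recall over the counts LG1 to LG4 as an illustrative stand-in, keeping only the property stated in the description that a larger value means better matching performance.

```python
def evaluation_factor(lg1: int, lg2: int, lg3: int, lg4: int) -> float:
    """Illustrative stand-in for the evaluation factor FS.
    lg1: retrieved videos whose content is consistent with the query video
    lg2: database videos whose content is consistent with the query video
    lg3: retrieved videos whose content is inconsistent with the query video
    lg4: database videos whose content is inconsistent with the query video
         (listed in the description but not needed by this stand-in)
    The F1-style combination is an assumption; only the inputs and the
    'larger is better' behaviour come from the text."""
    precision = lg1 / (lg1 + lg3) if (lg1 + lg3) else 0.0
    recall = lg1 / lg2 if lg2 else 0.0
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```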
Traffic video retrieval was carried out with the intelligent traffic video retrieval system of the present invention. Five traffic videos (traffic video 1 to traffic video 5) were selected for retrieval, retrieval efficiency and retrieval accuracy were measured, and the results were compared with an existing traffic video retrieval system; the resulting beneficial effects are shown in the table.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit its scope of protection. Although the present invention has been explained with reference to preferred embodiments, a person of ordinary skill in the art should understand that the technical solution of the present invention can be modified or equivalently substituted without departing from the substance and scope of the technical solution of the present invention.

Claims (9)

  1. An intelligent traffic video retrieval system, characterised in that it comprises a video monitoring device, a geographic-information embedding module, a storage module, a video retrieval module and a video playback module;
    the video monitoring device is used to record video of traffic conditions and to obtain the corresponding geographic information;
    the geographic-information embedding module is used to establish the correspondence between the geographic information and the recorded video, producing video with embedded geographic information;
    the storage module is used to store the video with embedded geographic information;
    the video retrieval module is used to retrieve video with embedded geographic information;
    the video playback module is used to play back the retrieved video with embedded geographic information.
  2. The intelligent traffic video retrieval system according to claim 1, characterised in that the video monitoring device comprises a camera and a GPS chip; the camera is used to capture traffic video, and the GPS chip is used to obtain the geographic information of the corresponding video.
  3. The intelligent traffic video retrieval system according to claim 2, characterised in that the camera is a high-definition camera.
  4. The intelligent traffic video retrieval system according to claim 3, characterised in that the video retrieval module comprises a first video-fingerprint extraction submodule, a second video-fingerprint matching submodule and a third fingerprint performance evaluation submodule; the first video-fingerprint extraction submodule is used to extract the features of a video and generate a video fingerprint; the second video-fingerprint matching submodule is used to compare, according to their video fingerprints, whether the content of two videos is consistent, and to obtain the videos matching the query video; the third fingerprint performance evaluation submodule is used to evaluate the performance of the second video-fingerprint matching submodule.
  5. The intelligent traffic video retrieval system according to claim 4, characterised in that the first video-fingerprint extraction submodule comprises a first video decoding unit, a second feature extraction unit and a third fingerprint modeling unit; the first video decoding unit is used to decode the original video sequence to obtain a YUV sequence; the second feature extraction unit is used to extract video features from the YUV sequence; the third fingerprint modeling unit is used to build a fingerprint model from the extracted video features and obtain the video fingerprint;
    the second feature extraction unit comprises a first feature-extraction subunit and a second frame-rate conversion subunit; the first feature-extraction subunit is used to extract video features at the original frame rate, and the second frame-rate conversion subunit is used to convert the video from the original frame rate to a fixed frame rate;
    the first feature-extraction subunit extracts the video features at the original frame rate as follows: a) extract the luminance information Y from the YUV sequence to form a new video sequence; b) assuming the video resolution is M × N, the geometric center of each video frame is (M/2, N/2); take the geometric center of the video frame as the coordinate origin O, and let fk(x, y) be the luminance value of the k-th video frame at position (x, y) relative to O, with fk(x, y) taking values in [0, 255]; the feature center (EHxk, EHyk) of each video frame is computed from fk(x, y) as
    EHxk = √((Σx·fk(x,y) / Σfk(x,y))² + 2) + Σx·fk(x,y) / (20·Σfk(x,y))
    EHyk = √((Σy·fk(x,y) / Σfk(x,y))² + 2) + Σy·fk(x,y) / (20·Σfk(x,y))
    c) based on the feature center, the feature-center angle βk of the k-th video frame is computed; the feature-center angles of all video frames of the whole video sequence are computed and assembled into a one-dimensional feature vector β = [β1, β2, …, βK], where K is the number of video frames in the video sequence.
  6. The intelligent traffic video retrieval system according to claim 5, characterised in that the second frame-rate conversion subunit converts the video from the original frame rate to a fixed frame rate as follows: a) let the frame rate of the original video sequence be Q and the fixed frame rate after conversion be P; the feature-center angle θi of the i-th frame at frame rate P is obtained from the feature-center angles βk and βk+1 of two consecutive frames at frame rate Q by the conversion formula θi = (1 - μ2)βk + μ2·βk+1, where μ2 is the interpolation coefficient; b) the feature-center angles of all converted video frames form a one-dimensional feature vector θ = [θ1, θ2, …, θM], where M is the number of video frames at frame rate P; the feature vector θ is the extracted video feature.
  7. The intelligent traffic video retrieval system according to claim 6, characterised in that the third fingerprint modeling unit builds the fingerprint model from the extracted video features as follows: a) compute the feature-center relative angle γi from θi, θi+1 and θi+2; b) compute the feature-center relative angles of the whole video sequence and build the video fingerprint model from them: the video fingerprint is γ = [γ1, γ2, …, γM-2].
  8. The intelligent traffic video retrieval system according to claim 7, characterised in that the second video-fingerprint matching submodule comprises a first matching unit, a second matching unit and a composite matching unit; the first matching unit is used to compute the first matching value between video fingerprints, the second matching unit is used to compute the second matching value between video fingerprints, and the composite matching unit is used to determine the degree of video matching from the first and second matching values;
    the first matching value is computed between the fingerprint of the query video and the fingerprint ω = [ω1, ω2, …, ωM-2] of any video in the video database;
    the second matching value between the two video fingerprints is then computed;
    the degree of video matching is determined by a matching coefficient computed from the first and second matching values; if the matching coefficient is below a set threshold, the two videos are considered to match; otherwise the videos do not match and the search of the video database continues.
  9. The intelligent traffic video retrieval system according to claim 8, characterised in that the third fingerprint performance evaluation submodule evaluates the performance of the second video-fingerprint matching submodule by means of an evaluation factor FS, which is computed from: LG1, the number of retrieved videos whose content is consistent with the query video; LG2, the number of videos in the database whose content is consistent with the query video; LG3, the number of retrieved videos whose content is inconsistent with the query video; and LG4, the number of videos in the database whose content is inconsistent with the query video; the larger the evaluation factor, the better the performance of the second video-fingerprint matching submodule.
CN201710912998.6A 2017-09-30 2017-09-30 Intelligent traffic video retrieval system Pending CN107679185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710912998.6A CN107679185A (en) 2017-09-30 2017-09-30 Intelligent traffic video retrieval system


Publications (1)

Publication Number Publication Date
CN107679185A true CN107679185A (en) 2018-02-09

Family

ID=61138312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710912998.6A Pending CN107679185A (en) 2017-09-30 2017-09-30 Intelligent traffic video retrieval system

Country Status (1)

Country Link
CN (1) CN107679185A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289520A (en) * 2011-09-15 2011-12-21 山西四和交通工程有限责任公司 Traffic video retrieval system and realization method thereof
CN104809248A (en) * 2015-05-18 2015-07-29 成都索贝数码科技股份有限公司 Video fingerprint extraction and retrieval method
CN107066581A (en) * 2017-04-14 2017-08-18 北京邮电大学 Distributed traffic monitor video data storage and quick retrieval system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王大永 (Wang Dayong): "感知视频指纹算法研究" (Research on perceptual video fingerprint algorithms), 《中国博士学位论文全文数据库 信息科技辑》 (China Doctoral Dissertations Full-text Database, Information Science and Technology) *

Similar Documents

Publication Publication Date Title
CN103530881B (en) Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal
CN104572886B (en) The financial time series similarity query method represented based on K line charts
CN108241712A (en) A kind of map data processing method and device
CN102436465B (en) Telemetry data compression storage and rapid query method of ontrack spacecraft
CN106649663A (en) Video copy detection method based on compact video representation
CN106528597A (en) POI (Point Of Interest) labeling method and device
CN106897295B (en) Hadoop-based power transmission line monitoring video distributed retrieval method
CN105825191A (en) Face multi-attribute information-based gender recognition method and system and shooting terminal
CN103631932A (en) Method for detecting repeated video
CN102890700A (en) Method for retrieving similar video clips based on sports competition videos
CN103927535B (en) A kind of Chinese-character writing recognition methods and device
CN106503223A (en) A kind of binding site and the online source of houses searching method and device of key word information
CN109522434A (en) Social image geographic positioning and system based on deep learning image retrieval
CN104063701B (en) Fast electric television stations TV station symbol recognition system and its implementation based on SURF words trees and template matches
Zhang et al. Topological spatial verification for instance search
CN108492711A (en) A kind of drawing electronic map method and device
CN112364201A (en) Video data retrieval method and system
CN104778238A (en) Video saliency analysis method and video saliency analysis device
CN110267101A (en) A kind of unmanned plane video based on quick three-dimensional picture mosaic takes out frame method automatically
CN103678657B (en) Method for storing and reading altitude data of terrain
CN103970901A (en) Geographic information graphic data integration method
CN107133260A (en) The matching and recognition method and device of a kind of landmark image
CN103605652A (en) Video retrieval and browsing method and device based on object zone bits
CN108121806A (en) One kind is based on the matched image search method of local feature and system
CN101515286A (en) Image matching method based on image feature multi-level filtration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180209)